1
Sueoka Y, Paunov A, Tanner A, Blank IA, Ivanova A, Fedorenko E. The Language Network Reliably "Tracks" Naturalistic Meaningful Nonverbal Stimuli. Neurobiology of Language 2024; 5:385-408. [PMID: 38911462] [PMCID: PMC11192443] [DOI: 10.1162/nol_a_00135] [Received: 07/25/2023] [Accepted: 01/08/2024]
Abstract
The language network, comprised of brain regions in the left frontal and temporal cortex, responds robustly and reliably during language comprehension but shows little or no response during many nonlinguistic cognitive tasks (e.g., Fedorenko & Blank, 2020). However, one domain whose relationship with language remains debated is semantics-our conceptual knowledge of the world. Given that the language network responds strongly to meaningful linguistic stimuli, could some of this response be driven by the presence of rich conceptual representations encoded in linguistic inputs? In this study, we used a naturalistic cognition paradigm to test whether the cognitive and neural resources that are responsible for language processing are also recruited for processing semantically rich nonverbal stimuli. To do so, we measured BOLD responses to a set of ∼5-minute-long video and audio clips that consisted of meaningful event sequences but did not contain any linguistic content. We then used the intersubject correlation (ISC) approach (Hasson et al., 2004) to examine the extent to which the language network "tracks" these stimuli, that is, exhibits stimulus-related variation. Across all the regions of the language network, meaningful nonverbal stimuli elicited reliable ISCs. These ISCs were higher than the ISCs elicited by semantically impoverished nonverbal stimuli (e.g., a music clip), but substantially lower than the ISCs elicited by linguistic stimuli. Our results complement earlier findings from controlled experiments (e.g., Ivanova et al., 2021) in providing further evidence that the language network shows some sensitivity to semantic content in nonverbal stimuli.
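The intersubject correlation (ISC) approach cited here (Hasson et al., 2004) reduces, in its simplest leave-one-out form, to correlating each subject's regional BOLD time course with the average time course of the remaining subjects; reliable stimulus tracking shows up as consistently positive values. A minimal sketch in Python (an illustration of the idea, not the authors' pipeline; the array layout is an assumption):

```python
import numpy as np

def isc_leave_one_out(timecourses):
    """Leave-one-out intersubject correlation.

    timecourses: (n_subjects, n_timepoints) array holding one region's
    BOLD time course per subject. Returns, for each subject, the
    Pearson r between that subject and the mean of all the others.
    """
    tc = np.asarray(timecourses, dtype=float)
    iscs = np.empty(tc.shape[0])
    for s in range(tc.shape[0]):
        others = np.delete(tc, s, axis=0).mean(axis=0)
        iscs[s] = np.corrcoef(tc[s], others)[0, 1]
    return iscs
```

A shared stimulus-driven signal pushes every subject's ISC well above zero, whereas purely subject-specific noise leaves the values scattered around zero, which is the contrast the study exploits across stimulus types.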
Affiliation(s)
- Yotaro Sueoka
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, USA
- Alexander Paunov
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Cognitive Neuroimaging Unit, INSERM, CEA, CNRS, Université Paris-Saclay, NeuroSpin center, Gif/Yvette, France
- Alyx Tanner
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Idan A. Blank
- Department of Psychology and Linguistics, University of California Los Angeles, Los Angeles, CA, USA
- Anna Ivanova
- School of Psychology, Georgia Institute of Technology, Atlanta, GA, USA
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, MA, USA
2
Hinzen W, Palaniyappan L. The 'L-factor': Language as a transdiagnostic dimension in psychopathology. Prog Neuropsychopharmacol Biol Psychiatry 2024; 131:110952. [PMID: 38280712] [DOI: 10.1016/j.pnpbp.2024.110952] [Received: 08/25/2023] [Revised: 12/20/2023] [Accepted: 01/23/2024]
Abstract
Thoughts and moods constituting our mental life incessantly change. When the steady flow of these dynamics diverges in clinical directions, the possible pathways involved are captured through discrete diagnostic labels. Yet a single vulnerable neurocognitive system may be causally involved in psychopathological deviations transdiagnostically. We argue that language viewed as integrating cortical functions is the best current candidate, whose forms of breakdown along its different dimensions are then manifest as symptoms - from prosodic abnormalities and rumination in depression to distortions of speech perception in verbal hallucinations, distortions of meaning and content in delusions, or disorganized speech in formal thought disorder. Spontaneous connected speech provides continuous objective readouts generating a highly accessible bio-behavioral marker with the potential to revolutionize neuropsychological measurement. This argument turns language into a transdiagnostic 'L-factor' providing an analytical and mechanistic substrate for previously proposed latent general factors of psychopathology ('p-factor') and cognitive functioning ('c-factor'). Together with immense practical opportunities afforded by rapidly advancing natural language processing (NLP) technologies and abundantly available data, this suggests a new era of translational clinical psychiatry, in which both psychopathology and language may be rethought together.
Affiliation(s)
- Wolfram Hinzen
- Department of Translation & Language Sciences, Universitat Pompeu Fabra, Barcelona, Spain; Institut Català de Recerca i Estudis Avançats (ICREA), Barcelona, Spain.
- Lena Palaniyappan
- Douglas Mental Health University Institute, Department of Psychiatry, McGill University, Montreal H4H 1R3, Quebec, Canada; Robarts Research Institute & Lawson Health Research Institute, London, ON, Canada
3
Ren Y, Brown TI. Beyond the ears: A review exploring the interconnected brain behind the hierarchical memory of music. Psychon Bull Rev 2024; 31:507-530. [PMID: 37723336] [DOI: 10.3758/s13423-023-02376-1] [Accepted: 08/22/2023]
Abstract
Music is a ubiquitous element of daily life. Understanding how music memory is represented and expressed in the brain is key to understanding how music can influence human daily cognitive tasks. Current music-memory literature is built on data from very heterogeneous tasks for measuring memory, and the neural correlates appear to differ depending on different forms of memory function targeted. Such heterogeneity leaves many exceptions and conflicts in the data underexplained (e.g., hippocampal involvement in music memory is debated). This review provides an overview of existing neuroimaging results from music-memory related studies and concludes that although music is a special class of event in our lives, the memory systems behind it do in fact share neural mechanisms with memories from other modalities. We suggest that dividing music memory into different levels of a hierarchy (structural level and semantic level) helps explain overlap and divergence in the neural networks involved. This is grounded in the fact that memorizing a piece of music recruits brain clusters that separately support functions including, but not limited to, syntax storage and retrieval, temporal processing, prediction versus reality comparison, stimulus feature integration, personal memory associations, and emotion perception. The cross-talk between frontal-parietal music structural processing centers and the subcortical emotion and context encoding areas explains why music is not only so easily memorable but can also serve as strong contextual information for encoding and retrieving nonmusic information in our lives.
Affiliation(s)
- Yiren Ren
- Georgia Institute of Technology, College of Science, School of Psychology, Atlanta, GA, USA.
- Thackery I Brown
- Georgia Institute of Technology, College of Science, School of Psychology, Atlanta, GA, USA
4
Saccheri P, Travan L, Crivellato E. The Cerebral Cortex and the Songs of Homer: When Neuroscience Meets History and Literature. Neuroscientist 2024; 30:17-22. [PMID: 35833466] [DOI: 10.1177/10738584221102862]
Abstract
In this article we reconsider Homer's poetry in the light of modern achievements in neuroscience. This perspective offers some clues for examining specific patterns of brain functioning. Homer's epics, for instance, painted a synthetic picture of the human body, emphasizing some parts and neglecting others. This led to the formation of a body schema reminiscent of a homunculus, which we call the "Homeric homunculus." Both poems were largely the product of centuries of oral tradition, in which the prodigious memory of courtly rhapsodists was essential to the performance of the epics. The underlying cognitive functions required a close interplay of memory and language skills, supported by the musical and rhythmic cadence of Homeric verse.
Affiliation(s)
- Paola Saccheri
- Section of Anatomy, Neuroanatomy and History of Medicine, Department of Medicine, University of Udine, Udine, Italy
- Luciana Travan
- Section of Anatomy, Neuroanatomy and History of Medicine, Department of Medicine, University of Udine, Udine, Italy
- Enrico Crivellato
- Section of Anatomy, Neuroanatomy and History of Medicine, Department of Medicine, University of Udine, Udine, Italy
5
Shan T, Cappelloni MS, Maddox RK. Subcortical responses to music and speech are alike while cortical responses diverge. Sci Rep 2024; 14:789. [PMID: 38191488] [PMCID: PMC10774448] [DOI: 10.1038/s41598-023-50438-0] [Received: 05/11/2023] [Accepted: 12/20/2023]
Abstract
Music and speech are encountered daily and are unique to human beings. Both are transformed by the auditory pathway from an initial acoustical encoding to higher level cognition. Studies of cortex have revealed distinct brain responses to music and speech, but differences may emerge in the cortex or may be inherited from different subcortical encoding. In the first part of this study, we derived the human auditory brainstem response (ABR), a measure of subcortical encoding, to recorded music and speech using two analysis methods. The first method, described previously and acoustically based, yielded very different ABRs between the two sound classes. The second method, however, developed here and based on a physiological model of the auditory periphery, gave highly correlated responses to music and speech. We determined the superiority of the second method through several metrics, suggesting there is no appreciable impact of stimulus class (i.e., music vs speech) on the way stimulus acoustics are encoded subcortically. In this study's second part, we considered the cortex. Our new analysis method resulted in cortical music and speech responses becoming more similar but with remaining differences. The subcortical and cortical results taken together suggest that there is evidence for stimulus-class dependent processing of music and speech at the cortical but not subcortical level.
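Both derivation methods described in this abstract can be framed as estimating an impulse response (the ABR) that links a stimulus regressor to the EEG; what differs between them is the regressor, raw acoustics versus the simulated output of an auditory-periphery model. A generic regularized-deconvolution sketch of that shared step (an illustration of the idea under that framing, not the paper's code; the function name and regularization are assumptions):

```python
import numpy as np

def derive_response(regressor, eeg, reg=1e-6):
    """Estimate the impulse response linking a stimulus regressor to EEG
    via regularized frequency-domain (circular) deconvolution.
    Swapping the regressor (acoustic waveform vs. modeled auditory-nerve
    output) changes the derived response, which is the comparison the
    study makes."""
    R = np.fft.fft(regressor)
    E = np.fft.fft(eeg)
    W = E * np.conj(R) / (np.abs(R) ** 2 + reg)
    return np.real(np.fft.ifft(W))
```

With a broadband regressor, this recovers the underlying kernel almost exactly; the small `reg` term only guards against near-zero spectral bins.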
Affiliation(s)
- Tong Shan
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Madeline S Cappelloni
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Ross K Maddox
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, USA
- Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA
- Center for Visual Science, University of Rochester, Rochester, NY, USA
- Department of Neuroscience, University of Rochester, Rochester, NY, USA
6
Al Roumi F, Planton S, Wang L, Dehaene S. Brain-imaging evidence for compression of binary sound sequences in human memory. eLife 2023; 12:e84376. [PMID: 37910588] [PMCID: PMC10619979] [DOI: 10.7554/elife.84376] [Received: 10/21/2022] [Accepted: 10/14/2023]
Abstract
According to the language-of-thought hypothesis, regular sequences are compressed in human memory using recursive loops akin to a mental program that predicts future items. We tested this theory by probing memory for 16-item sequences made of two sounds. We recorded brain activity with functional MRI and magneto-encephalography (MEG) while participants listened to a hierarchy of sequences of variable complexity, whose minimal description required transition probabilities, chunking, or nested structures. Occasional deviant sounds probed the participants' knowledge of the sequence. We predicted that task difficulty and brain activity would be proportional to the complexity derived from the minimal description length in our formal language. Furthermore, activity should increase with complexity for learned sequences, and decrease with complexity for deviants. These predictions were upheld in both fMRI and MEG, indicating that sequence predictions are highly dependent on sequence structure and become weaker and delayed as complexity increases. The proposed language recruited bilateral superior temporal, precentral, anterior intraparietal, and cerebellar cortices. These regions overlapped extensively with a localizer for mathematical calculation, and much less with spoken or written language processing. We propose that these areas collectively encode regular sequences as repetitions with variations and their recursive composition into nested structures.
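The minimal-description-length logic tested in this study (regular sequences admit shorter mental programs) can be illustrated with an off-the-shelf compressor as a crude proxy for description length. This is only an analogy: the paper derives complexity from a bespoke formal language over 16-item sequences, not from zlib, and the longer sequences below are needed only because generic compressors carry fixed overhead:

```python
import zlib

def description_length(sequence):
    """Crude proxy for minimal description length: the number of bytes
    a generic compressor (zlib) needs to encode a binary sequence of
    0/1 items. Regular sequences compress to fewer bytes."""
    data = bytes(sequence)  # each item becomes one byte
    return len(zlib.compress(data, level=9))
```

For example, a strictly alternating sequence yields a much shorter encoding than an irregular one of the same length, mirroring the complexity gradient the study built into its stimuli.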
Affiliation(s)
- Fosca Al Roumi
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
- Samuel Planton
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
- Liping Wang
- Institute of Neuroscience, Key Laboratory of Primate Neurobiology, CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Stanislas Dehaene
- Cognitive Neuroimaging Unit, Université Paris-Saclay, INSERM, CEA, CNRS, NeuroSpin center, Gif/Yvette, France
- Collège de France, Université Paris Sciences Lettres (PSL), Paris, France
7
Belden A, Quinci MA, Geddes M, Donovan NJ, Hanser SB, Loui P. Functional Organization of Auditory and Reward Systems in Aging. J Cogn Neurosci 2023; 35:1570-1592. [PMID: 37432735] [PMCID: PMC10513766] [DOI: 10.1162/jocn_a_02028]
Abstract
The intrinsic organization of functional brain networks is known to change with age, and is affected by perceptual input and task conditions. Here, we compare functional activity and connectivity during music listening and rest between younger (n = 24) and older (n = 24) adults, using whole-brain regression, seed-based connectivity, and ROI-ROI connectivity analyses. As expected, activity and connectivity of auditory and reward networks scaled with liking during music listening in both groups. Younger adults show higher within-network connectivity of auditory and reward regions as compared with older adults, both at rest and during music listening, but this age-related difference at rest was reduced during music listening, especially in individuals who self-report high musical reward. Furthermore, younger adults showed higher functional connectivity between auditory network and medial prefrontal cortex that was specific to music listening, whereas older adults showed a more globally diffuse pattern of connectivity, including higher connectivity between auditory regions and bilateral lingual and inferior frontal gyri. Finally, connectivity between auditory and reward regions was higher when listening to music selected by the participant. These results highlight the roles of aging and reward sensitivity on auditory and reward networks. Results may inform the design of music-based interventions for older adults and improve our understanding of functional network dynamics of the brain at rest and during a cognitively engaging task.
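Of the three analyses named in this abstract, seed-based connectivity is the simplest to make concrete: the seed region's time course is correlated with every other voxel's (or region's) time course. A minimal sketch (illustrative only; the authors' pipeline also includes whole-brain regression and ROI-ROI analyses, and the function name is an assumption):

```python
import numpy as np

def seed_connectivity(seed_ts, voxel_ts):
    """Pearson r between a seed time course (n_timepoints,) and each
    row of voxel_ts (n_voxels, n_timepoints). Z-scores both sides and
    takes the mean product, which equals the correlation coefficient."""
    seed = (seed_ts - seed_ts.mean()) / seed_ts.std()
    vox = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
    vox /= vox.std(axis=1, keepdims=True)
    return vox @ seed / seed.size
```

Age-group comparisons like the one reported here then reduce to contrasting these correlation maps between younger and older listeners, at rest versus during music listening.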
Affiliation(s)
- Nancy J Donovan
- Brigham and Women's Hospital and Harvard Medical School, Boston, MA
8
McCarty MJ, Murphy E, Scherschligt X, Woolnough O, Morse CW, Snyder K, Mahon BZ, Tandon N. Intraoperative cortical localization of music and language reveals signatures of structural complexity in posterior temporal cortex. iScience 2023; 26:107223. [PMID: 37485361] [PMCID: PMC10362292] [DOI: 10.1016/j.isci.2023.107223] [Received: 11/08/2022] [Revised: 06/01/2023] [Accepted: 06/22/2023]
Abstract
Language and music involve the productive combination of basic units into structures. It remains unclear whether brain regions sensitive to linguistic and musical structure are co-localized. We report an intraoperative awake craniotomy in which a left-hemispheric language-dominant professional musician underwent cortical stimulation mapping (CSM) and electrocorticography of music and language perception and production during repetition tasks. Musical sequences were melodic or amelodic, and differed in algorithmic compressibility (Lempel-Ziv complexity). Auditory recordings of sentences differed in syntactic complexity (single vs. multiple phrasal embeddings). CSM of posterior superior temporal gyrus (pSTG) disrupted music perception and production, along with speech production. pSTG and posterior middle temporal gyrus (pMTG) activated for language and music (broadband gamma; 70-150 Hz). pMTG activity was modulated by musical complexity, while pSTG activity was modulated by syntactic complexity. This points to shared resources for music and language comprehension, but distinct neural signatures for the processing of domain-specific structural features.
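Lempel-Ziv complexity, used in this study to grade the compressibility of the musical sequences, counts the number of new phrases encountered in a left-to-right parse: regular sequences parse into few phrases, random ones into many. A minimal sketch of one common LZ76-style variant (illustrative; the study's exact implementation may differ):

```python
def lempel_ziv_complexity(sequence):
    """Number of phrases in a left-to-right Lempel-Ziv parse.

    Each phrase is extended while it still occurs earlier in the string
    (overlap allowed); when extension fails, the phrase is closed and a
    new one starts. Lower counts mean a more compressible sequence.
    """
    s = "".join(str(x) for x in sequence)
    n = len(s)
    phrases, i = 0, 0
    while i < n:
        length = 1
        # grow the candidate phrase while it appears in the prefix
        while i + length <= n and s[i:i + length] in s[:i + length - 1]:
            length += 1
        phrases += 1
        i += length
    return phrases
```

A constant sequence parses into 2 phrases and a strictly alternating one into 3, whereas an irregular sequence of the same length yields many more, which is the complexity axis along which pMTG activity was reported to vary.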
Affiliation(s)
- Meredith J. McCarty
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Xavier Scherschligt
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Oscar Woolnough
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Cale W. Morse
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Kathryn Snyder
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Bradford Z. Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Nitin Tandon
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Memorial Hermann Hospital, Texas Medical Center, Houston, TX 77030, USA
9
Unger N, Haeck M, Eickhoff SB, Camilleri JA, Dickscheid T, Mohlberg H, Bludau S, Caspers S, Amunts K. Cytoarchitectonic mapping of the human frontal operculum-New correlates for a variety of brain functions. Front Hum Neurosci 2023; 17:1087026. [PMID: 37448625] [PMCID: PMC10336231] [DOI: 10.3389/fnhum.2023.1087026] [Received: 11/01/2022] [Accepted: 04/18/2023]
Abstract
The human frontal operculum (FOp) is a brain region that covers parts of the ventral frontal cortex next to the insula. Functional imaging studies showed activations in this region in tasks related to language, somatosensory, and cognitive functions. While the precise cytoarchitectonic areas that correlate to these processes have not yet been revealed, earlier receptorarchitectonic analysis resulted in a detailed parcellation of the FOp. We complemented this analysis with a cytoarchitectonic study of a sample of ten postmortem brains and mapped the posterior FOp in serial, cell-body stained histological sections using image analysis and multivariate statistics. Three new areas were identified: Op5 represents the most posterior area, followed by Op6 and the most anterior region Op7. Areas Op5-Op7 approach the insula, up to the circular sulcus. Area 44 of Broca's region, the most ventral part of premotor area 6, and parts of the parietal operculum are dorso-laterally adjacent to Op5-Op7. The areas did not show any interhemispheric or sex differences. Three-dimensional probability maps and a maximum probability map were generated in stereotaxic space, and then used, in a first proof-of-concept-study, for functional decoding and analysis of structural and functional connectivity. Functional decoding revealed different profiles of the cytoarchitectonically identified areas Op5-Op7. While left Op6 was active in music cognition, right Op5 was involved in chewing/swallowing and sexual processing. Both areas showed activation during the exercise of isometric force in muscles. An involvement in the coordination of flexion/extension could be shown for the right Op6. Meta-analytic connectivity modeling revealed various functional connections of the FOp areas within motor and somatosensory networks, with the most evident connection to the music/language network for left Op6. The new cytoarchitectonic maps are part of Julich-Brain, and publicly available to serve as a basis for future analyses of structural-functional relationships in this region.
Affiliation(s)
- Nina Unger
- Cécile and Oskar Vogt Institute for Brain Research, Medical Faculty and University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Simon B. Eickhoff
- Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- Institute for Systems Neuroscience, Medical Faculty and University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Julia A. Camilleri
- Institute of Neuroscience and Medicine (INM-7), Research Centre Jülich, Jülich, Germany
- Institute for Systems Neuroscience, Medical Faculty and University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Timo Dickscheid
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Institute of Computer Science, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Hartmut Mohlberg
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Sebastian Bludau
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Svenja Caspers
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
- Institute for Anatomy I, Medical Faculty and University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Katrin Amunts
- Cécile and Oskar Vogt Institute for Brain Research, Medical Faculty and University Hospital Düsseldorf, Heinrich Heine University Düsseldorf, Düsseldorf, Germany
- Institute of Neuroscience and Medicine (INM-1), Research Centre Jülich, Jülich, Germany
10
Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. [PMID: 37005063] [PMCID: PMC10505454] [DOI: 10.1093/cercor/bhad087] [Received: 04/12/2022] [Revised: 01/02/2023] [Accepted: 01/03/2023]
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Affiliation(s)
- Xuanyi Chen
- Department of Cognitive Sciences, Rice University, TX 77005, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rachel Ryskin
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Samuel Norman-Haignere
- Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
- Olessia Jouravlev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rosemary Varley
- Psychology & Language Sciences, UCL, London, WC1N 1PF, United Kingdom
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
11
Belden A, Quinci MA, Geddes M, Donovan NJ, Hanser SB, Loui P. Functional Organization of Auditory and Reward Systems in Aging. bioRxiv (preprint) 2023:2023.01.01.522417. [PMID: 36711696] [PMCID: PMC9881869] [DOI: 10.1101/2023.01.01.522417]
Abstract
The intrinsic organization of functional brain networks is known to change with age, and is affected by perceptual input and task conditions. Here, we compare functional activity and connectivity during music listening and rest between younger (N=24) and older (N=24) adults, using whole-brain regression, seed-based connectivity, and ROI-ROI connectivity analyses. As expected, activity and connectivity of auditory and reward networks scaled with liking during music listening in both groups. Younger adults show higher within-network connectivity of auditory and reward regions as compared to older adults, both at rest and during music listening, but this age-related difference at rest was reduced during music listening, especially in individuals who self-report high musical reward. Furthermore, younger adults showed higher functional connectivity between auditory network and medial prefrontal cortex (mPFC) that was specific to music listening, whereas older adults showed a more globally diffuse pattern of connectivity, including higher connectivity between auditory regions and bilateral lingual and inferior frontal gyri. Finally, connectivity between auditory and reward regions was higher when listening to music selected by the participant. These results highlight the roles of aging and reward sensitivity on auditory and reward networks. Results may inform the design of music-based interventions for older adults, and improve our understanding of functional network dynamics of the brain at rest and during a cognitively engaging task.
|
12
|
Gurariy G, Randall R, Greenberg AS. Neuroimaging evidence for the direct role of auditory scene analysis in object perception. Cereb Cortex 2023; 33:6257-6272. [PMID: 36562994 PMCID: PMC10183742 DOI: 10.1093/cercor/bhac501] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 11/29/2022] [Accepted: 11/30/2022] [Indexed: 12/24/2022] Open
Abstract
Auditory Scene Analysis (ASA) refers to the grouping of acoustic signals into auditory objects. Previously, we have shown that perceived musicality of auditory sequences varies with high-level organizational features. Here, we explore the neural mechanisms mediating ASA and auditory object perception. Participants performed musicality judgments on randomly generated pure-tone sequences and manipulated versions of each sequence containing low-level changes (amplitude; timbre). Low-level manipulations affected auditory object perception as evidenced by changes in musicality ratings. fMRI was used to measure neural activation to sequences rated most and least musical, and the altered versions of each sequence. Next, we generated two partially overlapping networks: (i) a music processing network (music localizer) and (ii) an ASA network (base sequences vs. ASA manipulated sequences). Using Representational Similarity Analysis, we correlated the functional profiles of each ROI to a model generated from behavioral musicality ratings as well as models corresponding to low-level feature processing and music perception. Within overlapping regions, areas near primary auditory cortex correlated with low-level ASA models, whereas right IPS was correlated with musicality ratings. Shared neural mechanisms that correlate with behavior and underlie both ASA and music perception suggest that low-level features of auditory stimuli play a role in auditory object perception.
Affiliation(s)
- Gennadiy Gurariy
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
- Richard Randall
- School of Music and Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, United States
- Adam S Greenberg
- Department of Biomedical Engineering, Medical College of Wisconsin and Marquette University, 8701 W Watertown Plank Rd, Milwaukee, WI 53233, United States
|
13
|
Pino MC, Giancola M, D'Amico S. The Association between Music and Language in Children: A State-of-the-Art Review. CHILDREN (BASEL, SWITZERLAND) 2023; 10:children10050801. [PMID: 37238349 DOI: 10.3390/children10050801] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2023] [Revised: 04/26/2023] [Accepted: 04/27/2023] [Indexed: 05/28/2023]
Abstract
Music and language are two complex systems that specifically characterize the human communication toolkit. There has been a heated debate in the literature on whether music was an evolutionary precursor to language or a byproduct of cognitive faculties that developed to support language. The present review of the existing literature on the relationship between music and language highlights that music plays a critical role in language development in early life. Our findings revealed that musical properties, such as rhythm and melody, can affect language acquisition in terms of semantic processing and grammar, including syntactic aspects and phonological awareness. Overall, the results of the current review shed further light on the complex mechanisms underlying the music-language link, highlighting that music is central to understanding language development from the early stages of life.
Affiliation(s)
- Maria Chiara Pino
- Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, 67100 L'Aquila, Italy
- Marco Giancola
- Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, 67100 L'Aquila, Italy
- Simonetta D'Amico
- Department of Biotechnological and Applied Clinical Sciences, University of L'Aquila, 67100 L'Aquila, Italy
|
14
|
Jiang L, Zhang R, Tao L, Zhang Y, Zhou Y, Cai Q. Neural mechanisms of musical structure and tonality, and the effect of musicianship. Front Psychol 2023; 14:1092051. [PMID: 36844277 PMCID: PMC9948014 DOI: 10.3389/fpsyg.2023.1092051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/07/2022] [Accepted: 01/16/2023] [Indexed: 02/11/2023] Open
Abstract
Introduction: The neural basis for the processing of musical syntax has previously been examined almost exclusively in classical tonal music, which is characterized by a strictly organized hierarchical structure. Musical syntax may differ across music genres owing to varieties of tonality.
Methods: The present study investigated the neural mechanisms for processing musical syntax across genres varying in tonality - classical, impressionist, and atonal music - and, in addition, examined how musicianship modulates such processing.
Results: First, the dorsal stream, including the bilateral inferior frontal gyrus and superior temporal gyrus, plays a key role in the perception of tonality. Second, right frontotemporal regions were crucial in allowing musicians to outperform non-musicians in musical syntactic processing; musicians also benefit from a cortical-subcortical network including pallidum and cerebellum, suggesting more auditory-motor interaction in musicians than in non-musicians. Third, left pars triangularis carries out online computations independently of tonality and musicianship, whereas right pars triangularis is sensitive to tonality and partly dependent on musicianship. Finally, unlike tonal music, the processing of atonal music could not be differentiated from that of scrambled notes, both behaviorally and neurally, even among musicians.
Discussion: The present study highlights the importance of studying varying music genres and experience levels and provides a better understanding of musical syntax and tonality processing and how such processing is modulated by music experience.
Affiliation(s)
- Lei Jiang
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; School of Music, East China Normal University, Shanghai, China
- Ruiqing Zhang
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Lily Tao
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yuxin Zhang
- Shanghai High School International Division, Shanghai, China
- Yongdi Zhou (corresponding author)
- School of Psychology, Shenzhen University, Shenzhen, China; Krieger Mind/Brain Institute, Johns Hopkins University, Baltimore, MD, United States
- Qing Cai (corresponding author)
- Key Laboratory of Brain Functional Genomics (MOE & STCSM), Affiliated Mental Health Center, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China; Shanghai Changning Mental Health Center, Shanghai, China; NYU-ECNU Institute of Brain and Cognitive Science, New York University Shanghai, Shanghai, China
|
15
|
Nayak S, Coleman PL, Ladányi E, Nitin R, Gustavson DE, Fisher SE, Magne CL, Gordon RL. The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:615-664. [PMID: 36742012 PMCID: PMC9893227 DOI: 10.1162/nol_a_00079] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 08/08/2022] [Indexed: 04/18/2023]
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
Affiliation(s)
- Srishti Nayak
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Enikő Ladányi
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Linguistics, Potsdam University, Potsdam, Germany
- Rachana Nitin
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Daniel E. Gustavson
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Simon E. Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cyrille L. Magne
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- PhD Program in Literacy Studies, Middle Tennessee State University, Murfreesboro, TN, USA
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
|
16
|
Scharinger M, Knoop CA, Wagner V, Menninghaus W. Neural processing of poems and songs is based on melodic properties. Neuroimage 2022; 257:119310. [PMID: 35569784 DOI: 10.1016/j.neuroimage.2022.119310] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2021] [Revised: 04/26/2022] [Accepted: 05/11/2022] [Indexed: 11/30/2022] Open
Abstract
The neural processing of speech and music is still a matter of debate. A long tradition that assumes shared processing capacities for the two domains contrasts with views that assume domain-specific processing. We here contribute to this topic by investigating, in a functional magnetic resonance imaging (fMRI) study, ecologically valid stimuli that are identical in wording and differ only in that one group is typically spoken (or silently read), whereas the other is sung: poems and their respective musical settings. We focus on the melodic properties of spoken poems and their sung musical counterparts by looking at proportions of significant autocorrelations (PSA) based on pitch values extracted from their recordings. Following earlier studies, we assumed a bias of poem processing towards the left hemisphere and a bias of song processing towards the right hemisphere. Furthermore, PSA values of poems and songs were expected to explain variance in left- vs. right-temporal brain areas, while continuous liking ratings obtained in the scanner should modulate activity in the reward network. Overall, poem processing compared to song processing relied on left temporal regions, including the superior temporal gyrus, whereas song processing compared to poem processing recruited more right temporal areas, including Heschl's gyrus and the superior temporal gyrus. PSA values co-varied with activation in bilateral temporal regions for poems, and in right-dominant fronto-temporal regions for songs. Continuous liking ratings were correlated with activity in the default mode network for both poems and songs. The pattern of results suggests that the neural processing of poems and their musical settings is based on their melodic properties, supported by bilateral temporal auditory areas and an additional right fronto-temporal network known to be implicated in the processing of melodies in songs. These findings take a middle ground in providing evidence for specific processing circuits for speech and music in the left and right hemisphere, but simultaneously for shared processing of melodic aspects of both poems and their musical settings in the right temporal cortex. Thus, we demonstrate the neurobiological plausibility of assuming the importance of melodic properties in spoken and sung aesthetic language alike, along with the involvement of the default mode network in the aesthetic appreciation of these properties.
Affiliation(s)
- Mathias Scharinger
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Research Group Phonetics, Institute of German Linguistics, Philipps-University Marburg, Pilgrimstein 16, Marburg 35032, Germany; Center for Mind, Brain and Behavior, Universities of Marburg and Gießen, Germany
- Christine A Knoop
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Department of Music, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Valentin Wagner
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany; Experimental Psychology Unit, Helmut Schmidt University / University of the Federal Armed Forces Hamburg, Germany
- Winfried Menninghaus
- Department of Language and Literature, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
|
17
|
Asano R, Boeckx C, Fujita K. Moving beyond domain-specific vs. domain-general options in cognitive neuroscience. Cortex 2022; 154:259-268. [DOI: 10.1016/j.cortex.2022.05.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 04/07/2022] [Accepted: 05/11/2022] [Indexed: 11/26/2022]
|
18
|
Chiappetta B, Patel AD, Thompson CK. Musical and linguistic syntactic processing in agrammatic aphasia: An ERP study. JOURNAL OF NEUROLINGUISTICS 2022; 62:101043. [PMID: 35002061 PMCID: PMC8740885 DOI: 10.1016/j.jneuroling.2021.101043] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Language and music rely on complex sequences organized according to syntactic principles that are implicitly understood by enculturated listeners. Across both domains, syntactic processing involves predicting and integrating incoming elements into higher-order structures. According to the Shared Syntactic Integration Resource Hypothesis (SSIRH; Patel, 2003), musical and linguistic syntactic processing rely on shared resources for integrating incoming elements (e.g., chords, words) into unfolding sequences. One prediction of the SSIRH is that people with agrammatic aphasia (whose deficits are due to syntactic integration problems) should present with deficits in processing musical syntax. We report the first neural study to test this prediction: event-related potentials (ERPs) were measured in response to musical and linguistic syntactic violations in a group of people with agrammatic aphasia (n=7) compared to a group of healthy controls (n=14) using an acceptability judgement task. The groups were matched with respect to age, education, and extent of musical training. Violations were based on morpho-syntactic relations in sentences and harmonic relations in chord sequences. Both groups presented with a significant P600 response to syntactic violations across both domains. The aphasic participants presented with a reduced-amplitude posterior P600 compared to the healthy adults in response to linguistic, but not musical, violations. Participants with aphasia did, however, present with larger frontal positivities in response to violations in both domains. Intriguingly, extent of musical training was associated with larger posterior P600 responses to syntactic violations of language and music in both groups. Overall, these findings are not consistent with the predictions of the SSIRH and instead suggest that linguistic, but not musical, syntactic processing may be selectively impaired in stroke-induced agrammatic aphasia. However, the findings also suggest a relationship between musical training and linguistic syntactic processing, which may have clinical implications for people with aphasia and motivates further research on the relationship between these two domains.
Affiliation(s)
- Brianne Chiappetta
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Aniruddh D. Patel
- Department of Psychology, Tufts University, Medford, MA, USA
- Program in Brain, Mind, and Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto, ON, Canada
- Cynthia K. Thompson
- Aphasia and Neurolinguistics Research Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Northwestern University, Chicago, IL, USA
- Department of Neurology, Northwestern University, Chicago, IL, USA
|
19
|
Sihvonen AJ, Pitkäniemi A, Leo V, Soinila S, Särkämö T. Resting-state language network neuroplasticity in post-stroke music listening: A randomized controlled trial. Eur J Neurosci 2021; 54:7886-7898. [PMID: 34763370 DOI: 10.1111/ejn.15524] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/05/2021] [Revised: 10/13/2021] [Accepted: 11/08/2021] [Indexed: 01/31/2023]
Abstract
Recent evidence suggests that post-stroke vocal music listening can aid language recovery, but the network-level functional neuroplasticity mechanisms of this effect are unknown. Here, we sought to determine whether improved language recovery observed after post-stroke listening to vocal music is driven by changes in longitudinal resting-state functional connectivity within the language network. Using data from a single-blind randomized controlled trial on stroke patients (N = 38), we compared the effects of daily listening to self-selected vocal music, instrumental music and audio books on changes of the resting-state functional connectivity within the language network and their correlation to improved language skills and verbal memory during the first 3 months post-stroke. From acute to 3-month stage, the vocal music and instrumental music groups increased functional connectivity between a cluster comprising the left inferior parietal areas and the language network more than the audio book group. However, the functional connectivity increase correlated with improved verbal memory only in the vocal music group cluster. This study shows that listening to vocal music post-stroke promotes recovery of verbal memory by inducing changes in longitudinal functional connectivity in the language network. Our results are consistent with the variable neurodisplacement theory underpinning aphasia recovery.
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre for Clinical Research, The University of Queensland, Brisbane, Queensland, Australia
- Anni Pitkäniemi
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Seppo Soinila
- Neurocenter, Turku University Hospital and Division of Clinical Neurosciences, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
|
20
|
Abstract
The present study investigates effects of conventionally metered and rhymed poetry on eye movements in silent reading. Readers saw MRRL poems (i.e., metrically regular, rhymed language) in two layouts. In the poem layout, verse endings coincided with line breaks; in the prose layout, verse endings could be mid-line. We also added metrical and rhyme anomalies. We hypothesized that silently reading MRRL results in building up auditive expectations that are based on a rhythmic "audible gestalt", and propose that rhythmicity is generated through subvocalization. Our results revealed that readers were sensitive to rhythmic-gestalt anomalies but showed differential effects in the poem and prose layouts. Metrical anomalies in particular resulted in robust reading disruptions across a variety of eye-movement measures in the poem layout and caused re-reading of the local context. Rhyme anomalies elicited stronger effects in the prose layout and resulted in systematic re-reading of pre-rhymes. The presence or absence of rhythmic-gestalt anomalies, as well as the layout manipulation, also affected reading in general. Effects of syllable number indicated a high degree of subvocalization. The overall pattern of results suggests that eye movements reflect, and are closely aligned with, the rhythmic subvocalization of MRRL. This study introduces a two-stage approach to the analysis of long MRRL stimuli and contributes to the discussion of how the processing of rhythm in music and speech may overlap.
Affiliation(s)
- Judith Beck
- Cognitive Science, University of Freiburg, Germany
|
21
|
White PA. The extended present: an informational context for perception. Acta Psychol (Amst) 2021; 220:103403. [PMID: 34454251 DOI: 10.1016/j.actpsy.2021.103403] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2021] [Revised: 08/04/2021] [Accepted: 08/19/2021] [Indexed: 01/29/2023] Open
Abstract
Several previous authors have proposed a kind of specious or subjective present moment that covers a few seconds of recent information. This article proposes a new hypothesis about the subjective present, renamed the extended present, defined not in terms of time covered but as a thematically connected information structure held in working memory and in transiently accessible form in long-term memory. The three key features of the extended present are that information in it is thematically connected, both internally and to current attended perceptual input, it is organised in a hierarchical structure, and all information in it is marked with temporal information, specifically ordinal and duration information. Temporal boundaries to the information structure are determined by hierarchical structure processing and by limits on processing and storage capacity. Supporting evidence for the importance of hierarchical structure analysis is found in the domains of music perception, speech and language processing, perception and production of goal-directed action, and exact arithmetical calculation. Temporal information marking is also discussed and a possible mechanism for representing ordinal and duration information on the time scale of the extended present is proposed. It is hypothesised that the extended present functions primarily as an informational context for making sense of current perceptual input, and as an enabler for perception and generation of complex structures and operations in language, action, music, exact calculation, and other domains.
|
22
|
Asano R, Boeckx C, Seifert U. Hierarchical control as a shared neurocognitive mechanism for language and music. Cognition 2021; 216:104847. [PMID: 34311153 DOI: 10.1016/j.cognition.2021.104847] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2020] [Revised: 05/14/2021] [Accepted: 07/11/2021] [Indexed: 12/16/2022]
Abstract
Although comparative research has made substantial progress in clarifying the relationship between language and music as neurocognitive systems from both a theoretical and empirical perspective, there is still no consensus about which mechanisms, if any, are shared and how they bring about different neurocognitive systems. In this paper, we tackle these two questions by focusing on hierarchical control as a neurocognitive mechanism underlying syntax in language and music. We put forward the Coordinated Hierarchical Control (CHC) hypothesis: linguistic and musical syntax rely on hierarchical control, but engage this shared mechanism differently depending on the current control demand. While linguistic syntax preferentially engages the abstract rule-based control circuit, musical syntax instead employs the coordination of the abstract rule-based and the more concrete motor-based control circuits. We provide evidence for our hypothesis by reviewing neuroimaging as well as neuropsychological studies on linguistic and musical syntax. The CHC hypothesis makes a set of novel testable predictions to guide future work on the relationship between language and music.
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
- Cedric Boeckx
- Section of General Linguistics, University of Barcelona, Spain; University of Barcelona Institute for Complex Systems (UBICS), Spain; Catalan Institute for Advanced Studies and Research (ICREA), Spain
- Uwe Seifert
- Systematic Musicology, Institute of Musicology, University of Cologne, Germany
|
23
|
Sihvonen AJ, Ripollés P, Leo V, Saunavaara J, Parkkola R, Rodríguez-Fornells A, Soinila S, Särkämö T. Vocal music listening enhances post-stroke language network reorganization. eNeuro 2021; 8:ENEURO.0158-21.2021. [PMID: 34140351 PMCID: PMC8266215 DOI: 10.1523/eneuro.0158-21.2021] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2021] [Revised: 05/24/2021] [Accepted: 06/06/2021] [Indexed: 11/25/2022] Open
Abstract
Listening to vocal music has recently been shown to improve language recovery in stroke survivors. The neuroplasticity mechanisms supporting this effect are, however, still unknown. Using data from a three-arm single-blind randomized controlled trial including acute stroke patients (N=38) and a 3-month follow-up, we set out to compare the neuroplasticity effects of daily listening to self-selected vocal music, instrumental music, and audiobooks on both brain activity and structural connectivity of the language network. Using deterministic tractography, we show that the 3-month intervention induced an enhancement of the microstructural properties of the left frontal aslant tract (FAT) for the vocal music group as compared to the audiobook group. Importantly, this increase in the strength of the structural connectivity of the left FAT correlated with improved language skills. Analyses of stimulus-specific activation changes showed that the vocal music group exhibited increased activations in the frontal termination points of the left FAT during vocal music listening as compared to the audiobook group from the acute to the 3-month post-stroke stage. The increased activity correlated with the structural neuroplasticity changes in the left FAT. These results suggest that the beneficial effects of vocal music listening on post-stroke language recovery are underpinned by structural neuroplasticity changes within the language network and extend our understanding of music-based interventions in stroke rehabilitation.
Significance statement: Post-stroke language deficits have a devastating effect on patients and their families. Current treatments yield highly variable outcomes and the evidence for their long-term effects is limited. Patients often receive insufficient treatment, predominantly given outside the optimal time window for brain plasticity. Post-stroke vocal music listening improves language outcome, which is underpinned by neuroplasticity changes within the language network. Vocal music listening provides a complementary rehabilitation strategy that could be safely implemented in the early stages of stroke rehabilitation and seems to specifically target language symptoms and the recovering language network.
Collapse
Affiliation(s)
- Aleksi J Sihvonen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Centre for Clinical Research, The University of Queensland, Australia
- Pablo Ripollés
- Department of Psychology, New York University, USA
- Music and Audio Research Laboratory, New York University, USA
- Center for Language, Music and Emotion, New York University, USA
- Vera Leo
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
- Jani Saunavaara
- Department of Medical Physics, Turku University Hospital, Turku, Finland
- Riitta Parkkola
- Department of Radiology, Turku University Hospital and University of Turku, Finland
- Antoni Rodríguez-Fornells
- Department of Cognition, Development and Education Psychology, University of Barcelona, Spain
- Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Division of Clinical Neurosciences, Department of Neurology, Turku University Hospital and University of Turku, Finland
- Seppo Soinila
- Division of Clinical Neurosciences, Department of Neurology, Turku University Hospital and University of Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Finland
24
Beccacece L, Abondio P, Cilli E, Restani D, Luiselli D. Human Genomics and the Biocultural Origin of Music. Int J Mol Sci 2021; 22:5397. [PMID: 34065521 PMCID: PMC8160972 DOI: 10.3390/ijms22105397] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Revised: 05/03/2021] [Accepted: 05/18/2021] [Indexed: 12/11/2022] Open
Abstract
Music is an exclusive feature of humankind. It can be considered a form of universal communication, only partly comparable to the vocalizations of songbirds. Many lines of research in this field address the origins of music, as well as the genetic bases of musicality. On one hand, several hypotheses have been proposed about the evolution of music and its role, but the debate continues, and comparative studies suggest a gradual evolution in primates of some of the abilities underlying musicality. On the other hand, genome-wide studies highlight several genes associated with musical aptitude, confirming a genetic basis for the different musical skills that humans display. Moreover, some genes associated with musicality are also involved in singing and song learning in songbirds, suggesting a likely evolutionary convergence between humans and songbirds. This comprehensive review presents the concept of music as a sociocultural manifestation within the current debate about its biocultural origin and evolutionary function, in the context of the most recent discoveries related to the cross-species genetics of musical production and perception.
Affiliation(s)
- Livia Beccacece
- Laboratory of Molecular Anthropology, Department of Biological, Geological and Environmental Sciences, University of Bologna, 40126 Bologna, Italy
- Paolo Abondio
- Laboratory of Molecular Anthropology, Department of Biological, Geological and Environmental Sciences, University of Bologna, 40126 Bologna, Italy
- Elisabetta Cilli
- Department of Cultural Heritage, University of Bologna—Ravenna Campus, 48121 Ravenna, Italy
- Donatella Restani
- Department of Cultural Heritage, University of Bologna—Ravenna Campus, 48121 Ravenna, Italy
- Donata Luiselli
- Department of Cultural Heritage, University of Bologna—Ravenna Campus, 48121 Ravenna, Italy
25
Jekiel M, Malarski K. Musical Hearing and Musical Experience in Second Language English Vowel Acquisition. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:1666-1682. [PMID: 33831309 DOI: 10.1044/2021_jslhr-19-00253] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose: Previous studies suggested that music perception can help with producing certain accentual features in the first and second language (L2), such as intonational contours. What was missing in many of these studies was the identification of the exact relationship between specific music perception skills and the production of different accentual features in a foreign language. Our aim was to verify whether empirically tested musical hearing skills are related to the acquisition of English vowels by learners of English as an L2 before and after a formal accent training course. Method: Fifty adult Polish speakers of L2 English were tested before and after two semesters of accent training in order to observe the effect of musical hearing on the acquisition of English vowels. Their L2 English vowel formant contours produced in consonant-vowel-consonant context were compared with the target General British vowels produced by their pronunciation teachers. We juxtaposed these results with their musical hearing test scores and self-reported musical experience to observe a possible relationship between successful L2 vowel acquisition and musical aptitude. Results: Preexisting rhythmic memory was a significant predictor before training, while musical experience was a significant factor in the production of more native-like L2 vowels after training. We also observed that not all vowels were equally acquired or equally affected by musical hearing and musical experience. The strongest estimate we observed was closeness to the model before training, suggesting that learners who had already managed to acquire some features of a native-like accent were also more successful after training. Conclusions: Our results are revealing in two respects. First, the learners' prior proficiency in L2 pronunciation is the most robust predictor of acquiring a native-like accent.
Second, there is a potential relationship between rhythmic memory and L2 vowel acquisition before training, as well as with years of musical experience after training, suggesting that specific musical skills and music practice can be an asset in learning a foreign-language accent.
Affiliation(s)
- Mateusz Jekiel
- Faculty of English, Adam Mickiewicz University, Poznań, Poland
- Kamil Malarski
- Faculty of English, Adam Mickiewicz University, Poznań, Poland
26
Podlipniak P. The Role of Canalization and Plasticity in the Evolution of Musical Creativity. Front Neurosci 2021; 15:607887. [PMID: 33796005 PMCID: PMC8007929 DOI: 10.3389/fnins.2021.607887] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2020] [Accepted: 02/24/2021] [Indexed: 11/29/2022] Open
Abstract
Creativity is defined as the ability to generate something new and valuable. From a biological point of view, this can be seen as an adaptation in response to environmental challenges. Although music is a highly diverse phenomenon, all people possess a set of abilities that are claimed to be the products of biological evolution, which allow us to produce and listen to music according to both universal and culture-specific rules. On the one hand, musical creativity is restricted by tacit rules that reflect the developmental interplay between genetic, epigenetic, and cultural information. On the other hand, musical innovations seem to be desirable elements present in every musical culture, which suggests some biological importance. If our musical activity is driven by biological needs, then it is important to understand the function of musical creativity in satisfying those needs, and also how human beings have become so creative in the domain of music. The aim of this paper is to propose that musical creativity has become an indispensable part of the gene-culture coevolution of our musicality. It is suggested that two main forces, canalization and plasticity, have been crucial in this process. Canalization is an evolutionary process in which phenotypes take relatively constant forms regardless of environmental and genetic perturbations. Plasticity is defined as the ability of a phenotype to generate an adaptive response to environmental challenges. It is proposed that human musicality is composed of evolutionary innovations generated by the gradual canalization of developmental pathways leading to musical behavior. Within this process, the unstable cultural environment serves as the selective pressure for musical creativity.
It is hypothesized that the connections between cortical and subcortical areas, which constitute the cortico-subcortical circuits involved in music processing, are the products of canalization, whereas plasticity is achieved by means of neurological variability. This variability is present both in the enlargement of individual structures in response to practice (e.g., the planum temporale) and in the involvement of structures that are not music-specific (e.g., the default mode network) in music processing.
Affiliation(s)
- Piotr Podlipniak
- Department of Musicology, Adam Mickiewicz University in Poznań, Poznań, Poland
27
Spatio-temporal dynamics of interictal activity in musicogenic epilepsy: Two case reports and a systematic review of the literature. Clin Neurophysiol 2020; 131:2393-2401. [PMID: 32828042 DOI: 10.1016/j.clinph.2020.06.028] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Revised: 06/18/2020] [Accepted: 06/21/2020] [Indexed: 11/22/2022]
Abstract
OBJECTIVE To explore the neurophysiological features of musicogenic epilepsy (ME), discussing experimental findings in the framework of a systematic review on ME. METHODS Two patients with ME underwent high-density electroencephalography (hd-EEG) while listening to ictogenic songs. In one case, musicogenic seizures were elicited. Independent component analysis (ICA) was applied to the hd-EEG, and components hosting interictal and ictal elements were identified and localized. Finally, the temporal dynamics of spike density were studied relative to seizures. All findings were compared against the results of a systematic review on ME, which collected 131 cases. RESULTS Interictal spikes appeared in isolation in specific fronto-temporal independent components, whose cortical generators were located in the anterior temporal and inferior frontal lobe. In the patient who had a seizure, the ictal discharge resided in the same component, with interictal spike density decreasing before seizure onset. CONCLUSION Our study shows how ICA can isolate the neurophysiological features of ictal and interictal discharges in ME, highlighting a fronto-temporal localization and a suppression of spike density preceding seizure onset. SIGNIFICANCE While the localization of ME activity could indicate which aspect of the musical stimulus triggers musicogenic seizures in each patient, the study of ME dynamics could contribute to the development and validation of seizure-prediction models.
28
Bouhali F, Mongelli V, Thiebaut de Schotten M, Cohen L. Reading music and words: The anatomical connectivity of musicians' visual cortex. Neuroimage 2020; 212:116666. [PMID: 32087374 DOI: 10.1016/j.neuroimage.2020.116666] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2019] [Revised: 02/10/2020] [Accepted: 02/17/2020] [Indexed: 10/25/2022] Open
Abstract
Musical score reading and word reading have much in common, from their historical origins to their cognitive foundations and neural correlates. In the ventral occipitotemporal cortex (VOT), the specialization of the so-called Visual Word Form Area for word reading has been linked to its privileged structural connectivity to distant language regions. Here we investigated how anatomical connectivity relates to the segregation of VOT regions specialized for musical notation or words. In a cohort of professional musicians and non-musicians, we used probabilistic tractography combined with task-related functional MRI to identify the connections of individually defined word- and music-selective left VOT regions. Despite their close proximity, these regions differed significantly in their structural connectivity, irrespective of musical expertise. The music-selective region was significantly more connected to posterior lateral temporal regions than the word-selective region, which, conversely, was significantly more connected to the anterior ventral temporal cortex. Furthermore, musical expertise had a double impact on the connectivity of the music region. First, music tracts were significantly larger in musicians than in non-musicians, associated with marginally higher connectivity to perisylvian music-related areas. Second, the spatial similarity between music and word tracts was significantly increased in musicians, consistent with the increased overlap of language and music functional activations in musicians as compared to non-musicians. These results support the view that, for music as for words, very specific anatomical connections influence the specialization of distinct VOT areas, and that, reciprocally, those connections are selectively enhanced by expertise in word or music reading.
Affiliation(s)
- Florence Bouhali
- Sorbonne Université, Inserm U 1127, CNRS UMR 7225, Institut du Cerveau et de la Moelle épinière, ICM, Hôpital de la Pitié-Salpêtrière, 75013, Paris, France; Department of Psychiatry & Weill Institute for Neurosciences, University of California, San Francisco, CA, 94143, USA.
- Valeria Mongelli
- Neurobiology of Language Department, Max Planck Institute for Psycholinguistics, Nijmegen, Netherlands; Department of Psychology, University of Amsterdam, Amsterdam, Netherlands; Amsterdam Brain and Cognition (ABC), University of Amsterdam, Amsterdam, Netherlands
- Michel Thiebaut de Schotten
- Brain Connectivity and Behaviour Laboratory, Sorbonne Universities, Paris, France; Groupe d'Imagerie Neurofonctionnelle, Institut des Maladies Neurodégénératives-UMR 5293, CNRS, CEA University of Bordeaux, Bordeaux, France
- Laurent Cohen
- Sorbonne Université, Inserm U 1127, CNRS UMR 7225, Institut du Cerveau et de la Moelle épinière, ICM, Hôpital de la Pitié-Salpêtrière, 75013, Paris, France; Assistance Publique - Hôpitaux de Paris, Hôpital de la Pitié Salpêtrière, Fédération de Neurologie, F-75013, Paris, France
29
Friederici AD. Hierarchy processing in human neurobiology: how specific is it? Philos Trans R Soc Lond B Biol Sci 2020; 375:20180391. [PMID: 31735144 PMCID: PMC6895560 DOI: 10.1098/rstb.2018.0391] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/15/2019] [Indexed: 12/18/2022] Open
Abstract
Although human and non-human animals share a number of perceptual and cognitive abilities, they differ in their ability to process hierarchically structured sequences. This becomes most evident in the human capacity to process natural language characterized by structural hierarchies. This capacity is neuroanatomically grounded in the posterior part of left Broca's area (Brodmann area (BA) 44), located in the inferior frontal gyrus, and its dorsal white matter fibre connection to the temporal cortex. Within this neural network, BA 44 itself subserves hierarchy building and the strength of its connection to the temporal cortex correlates with the processing of syntactically complex sentences. Whether these brain structures are also relevant for other human cognitive abilities is a current debate. Here, this question will be evaluated with respect to those human cognitive abilities that are assumed to require hierarchy building, such as music, mathematics and Theory of Mind. Rather than supporting a domain-general view, the data indicate domain-selective neural networks as the neurobiological basis for processing hierarchy in different cognitive domains. Recent cross-species white matter comparisons suggest that particular connections within the networks may make the crucial difference in the brain structure of human and non-human primates, thereby enabling cognitive functions specific to humans. This article is part of the theme issue 'What can animal communication teach us about human language?'
Affiliation(s)
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstr. 1a, 04103 Leipzig, Germany
30
Shared neural resources of rhythm and syntax: An ALE meta-analysis. Neuropsychologia 2019; 137:107284. [PMID: 31783081 DOI: 10.1016/j.neuropsychologia.2019.107284] [Citation(s) in RCA: 31] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2019] [Accepted: 11/25/2019] [Indexed: 11/20/2022]
Abstract
A growing body of evidence has highlighted behavioral connections between musical rhythm and linguistic syntax, suggesting that these abilities may be mediated by common neural resources. Here, we performed a quantitative meta-analysis of neuroimaging studies using activation likelihood estimation (ALE) to localize the shared neural structures engaged in a representative set of musical rhythm (rhythm, beat, and meter) and linguistic syntax (merge, movement, and reanalysis) operations. Rhythm engaged a bilateral sensorimotor network consisting of the inferior frontal gyri, supplementary motor area, superior temporal gyri/temporoparietal junction, insula, inferior parietal lobule, and putamen. By contrast, syntax mostly recruited a left sensorimotor network including the inferior frontal gyrus, posterior superior temporal gyrus, premotor cortex, and supplementary motor area. Intersections between the rhythm and syntax maps yielded overlapping regions in the left inferior frontal gyrus, left supplementary motor area, and bilateral insula, neural substrates involved in temporal hierarchy processing and predictive coding. Together, this is the first neuroimaging meta-analysis providing a detailed anatomical picture of the overlap of sensorimotor regions recruited for musical rhythm and linguistic syntax.
31
Pflug A, Gompf F, Muthuraman M, Groppa S, Kell CA. Differential contributions of the two human cerebral hemispheres to action timing. eLife 2019; 8:e48404. [PMID: 31697640 PMCID: PMC6837842 DOI: 10.7554/elife.48404] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Accepted: 10/08/2019] [Indexed: 01/22/2023] Open
Abstract
Rhythmic actions benefit from synchronization with external events. Auditory-paced finger tapping studies indicate that the two cerebral hemispheres preferentially control different rhythms. It is unclear whether left-lateralized processing of faster rhythms and right-lateralized processing of slower rhythms rest on hemispheric timing differences arising in the motor or sensory system, or whether the asymmetry results from lateralized sensorimotor interactions. We measured fMRI and MEG during symmetric finger tapping, in which fast tapping was defined as auditory-motor synchronization at 2.5 Hz and slow tapping corresponded to tapping to every fourth auditory beat (0.625 Hz). We demonstrate that the left auditory cortex preferentially represents the relatively fast rhythm in an amplitude modulation of low beta oscillations, while the right auditory cortex additionally represents the internally generated slower rhythm. We show that coupling of auditory-motor beta oscillations supports building a metric structure. Our findings reveal a strong contribution of sensory cortices to hemispheric specialization in action control.
Affiliation(s)
- Anja Pflug
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
- Florian Gompf
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
- Muthuraman Muthuraman
- Movement Disorders and Neurostimulation, Biomedical Statistics and Multimodal Signal Processing Unit, Department of Neurology, Johannes Gutenberg University, Mainz, Germany
- Sergiu Groppa
- Movement Disorders and Neurostimulation, Biomedical Statistics and Multimodal Signal Processing Unit, Department of Neurology, Johannes Gutenberg University, Mainz, Germany
- Christian Alexander Kell
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
32
Polyanskaya L, Samuel AG, Ordin M. Speech Rhythm Convergence as a Social Coalition Signal. EVOLUTIONARY PSYCHOLOGY 2019; 17:1474704919879335. [PMID: 31564124 PMCID: PMC10480829 DOI: 10.1177/1474704919879335] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2019] [Accepted: 09/06/2019] [Indexed: 10/25/2022] Open
Abstract
Patterns of nonverbal and verbal behavior of interlocutors become more similar as communication progresses. Rhythm entrainment promotes prosocial behavior and signals social bonding and cooperation. Yet, it is unknown whether the convergence of rhythm in human speech is perceived and used to make pragmatic inferences about the cooperative disposition of the interactors. We conducted two experiments to answer this question. For analytical purposes, we separate pulse (recurring acoustic events) and meter (hierarchical structuring of pulses based on their relative salience). We asked listeners to judge the hostile or collaborative attitude of interacting agents who exhibit different or similar pulse (Experiment 1) or meter (Experiment 2). The results suggest that rhythm convergence can be a marker of social cooperation at the level of pulse, but not at the level of meter. The mapping of rhythmic convergence onto social affiliation or opposition is important at the early stages of language acquisition. The evolutionary origin of this faculty is possibly the need to transmit and perceive coalition information in social groups of human ancestors. We suggest that this faculty could have promoted the emergence of the speech faculty in humans.
Affiliation(s)
- Leona Polyanskaya
- BCBL—Basque Centre on Cognition, Brain and Language, Donostia, Spain
- Arthur G. Samuel
- BCBL—Basque Centre on Cognition, Brain and Language, Donostia, Spain
- IKERBASQUE—Basque Foundation for Science, Bilbao, Spain
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
- Mikhail Ordin
- BCBL—Basque Centre on Cognition, Brain and Language, Donostia, Spain
- IKERBASQUE—Basque Foundation for Science, Bilbao, Spain
33
Politimou N, Dalla Bella S, Farrugia N, Franco F. Born to Speak and Sing: Musical Predictors of Language Development in Pre-schoolers. Front Psychol 2019; 10:948. [PMID: 31231260 PMCID: PMC6558368 DOI: 10.3389/fpsyg.2019.00948] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/27/2019] [Accepted: 04/09/2019] [Indexed: 11/13/2022] Open
Abstract
The relationship between musical and linguistic skills has received particular attention in infants and school-aged children; however, very little is known about pre-schoolers, leaving a gap in our understanding of how these skills develop concurrently in early childhood. Moreover, attention has focused on the effects of formal musical training while neglecting the influence of informal musical activities at home. To address these gaps, in Study 1, 3- and 4-year-old children (n = 40) performed novel musical tasks (perception and production) adapted for young children, in order to examine the link between musical skills and the development of key language capacities, namely grammar and phonological awareness. In Study 2, we investigated the influence of informal musical experience at home on the musical and linguistic skills of young pre-schoolers, using the same evaluation tools. We found systematic associations between distinct musical and linguistic skills. Rhythm perception and production were the best predictors of phonological awareness, while melody perception was the best predictor of grammar acquisition, a novel association not previously observed in developmental research. These associations could not be explained by variability in general cognitive functioning, such as verbal memory and non-verbal abilities. Thus, selective music-related auditory and motor skills are likely to underpin different aspects of language development and can be dissociated in pre-schoolers. We also found that informal musical experience at home contributes to the development of grammar, and that the effect of musical skills on both phonological awareness and grammar is mediated by home musical experience. These findings pave the way for the development of dedicated musical activities for pre-schoolers to support specific areas of language development.
Affiliation(s)
- Nina Politimou
- Department of Psychology, Middlesex University, London, United Kingdom
- Simone Dalla Bella
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada
- Department of Psychology, University of Montreal, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada
- Department of Cognitive Psychology, University of Economics and Human Sciences in Warsaw, Warsaw, Poland
- Nicolas Farrugia
- Lab-STICC, Department of Electronics, IMT Atlantique, Brest, France
- Fabia Franco
- Department of Psychology, Middlesex University, London, United Kingdom
34
Tanaka S, Kirino E. Increased Functional Connectivity of the Angular Gyrus During Imagined Music Performance. Front Hum Neurosci 2019; 13:92. [PMID: 30936827 PMCID: PMC6431621 DOI: 10.3389/fnhum.2019.00092] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/16/2018] [Accepted: 02/27/2019] [Indexed: 11/26/2022] Open
Abstract
The angular gyrus (AG) is a hub of several networks involved in various functions, including attention, self-processing, semantic information processing, emotion regulation, and mentalizing. Since these functions are required in music performance, the AG is likely to play a role in it. Considering that these functions emerge as network properties, this study analyzed the functional connectivity of the AG during an imagined music performance task and in the resting condition. Our hypothesis was that the functional connectivity of the AG is modulated by imagined music performance. In the resting condition, the AG had connections with the medial prefrontal cortex (mPFC), posterior cingulate cortex (PCC), and precuneus, as well as with the superior and inferior frontal gyri and the temporal cortex. Compared with the resting condition, imagined music performance increased the functional connectivity of the AG with the superior frontal gyrus (SFG), mPFC, precuneus, PCC, hippocampal/parahippocampal gyrus (H/PHG), and amygdala. The anterior cingulate cortex (ACC) and superior temporal gyrus (STG) were newly engaged or added to the AG network during the task. In contrast, the supplementary motor area (SMA), sensorimotor areas, and occipital regions, which were anti-correlated with the AG in the resting condition, were disengaged during the task. These results lead to the conclusion that the functional connectivity of the AG is modulated by imagined music performance, suggesting that the AG plays a role in imagined music performance.
Affiliation(s)
- Shoji Tanaka
- Department of Information and Communication Sciences, Sophia University, Tokyo, Japan
- Eiji Kirino
- Department of Psychiatry, School of Medicine, Juntendo University, Tokyo, Japan; Juntendo Shizuoka Hospital, Shizuoka, Japan
35
Fiveash A, McArthur G, Thompson WF. Syntactic and non-syntactic sources of interference by music on language processing. Sci Rep 2018; 8:17918. [PMID: 30559400 PMCID: PMC6297162 DOI: 10.1038/s41598-018-36076-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2018] [Accepted: 11/08/2018] [Indexed: 11/09/2022] Open
Abstract
Music and language are complex hierarchical systems in which individual elements are systematically combined to form larger, syntactic structures. Suggestions that music and language share syntactic processing resources have relied on evidence that syntactic violations in music interfere with syntactic processing in language. However, syntactic violations may affect auditory processing in non-syntactic ways, accounting for reported interference effects. To investigate the factors contributing to interference effects, we assessed recall of visually presented sentences and word-lists when accompanied by background auditory stimuli differing in syntactic structure and auditory distraction: melodies without violations, scrambled melodies, melodies that alternate in timbre, and environmental sounds. In Experiment 1, one-timbre melodies interfered with sentence recall, and increasing both syntactic complexity and distraction by scrambling melodies increased this interference. In contrast, three-timbre melodies reduced interference on sentence recall, presumably because alternating instruments interrupted auditory streaming, reducing pressure on long-distance syntactic structure building. Experiment 2 confirmed that participants were better at discriminating syntactically coherent one-timbre melodies than three-timbre melodies. Together, these results illustrate that syntactic processing and auditory streaming interact to influence sentence recall, providing implications for theories of shared syntactic processing and auditory distraction.
Affiliation(s)
- Anna Fiveash
- Department of Psychology, Macquarie University, Sydney, Australia.
- Lyon Neuroscience Research Centre, Auditory Cognition and Psychoacoustics Team and Dynamique Du Langage Laboratory, INSERM, U1028, CNRS, UMR5292, Lyon, France.
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia.
- Genevieve McArthur
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
- Department of Cognitive Science, Macquarie University, Sydney, Australia
- William Forde Thompson
- Department of Psychology, Macquarie University, Sydney, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Macquarie University, Sydney, Australia
36
Chiang JN, Rosenberg MH, Bufford CA, Stephens D, Lysy A, Monti MM. The language of music: Common neural codes for structured sequences in music and natural language. BRAIN AND LANGUAGE 2018; 185:30-37. [PMID: 30086421 DOI: 10.1016/j.bandl.2018.07.003] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Revised: 07/04/2018] [Accepted: 07/15/2018] [Indexed: 06/08/2023]
Abstract
The ability to process structured sequences is a central feature of natural language but also characterizes many other domains of human cognition. In this fMRI study, we measured brain metabolic response in musicians as they generated structured and non-structured sequences in language and music. We employed a univariate and multivariate cross-classification approach to provide evidence that a common neural code underlies the production of structured sequences across the two domains. Crucially, the common substrate includes Broca's area, a region well known for processing structured sequences in language. These findings have several implications. First, they directly support the hypothesis that language and music share syntactic integration mechanisms. Second, they show that Broca's area is capable of operating supramodally across these two domains. Finally, these results dismiss the recent hypothesis that domain-general processes of neighboring neural substrates explain the previously observed "overlap" between neuroimaging activations across the two domains.
Affiliation(s)
- Jeffrey N Chiang
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, USA
- Matthew H Rosenberg
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, USA
- Carolyn A Bufford
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, USA
- Daniel Stephens
- Department of Music, UCLA Herb Alpert School of Music, University of California Los Angeles, Los Angeles, CA, USA
- Antonio Lysy
- Department of Music, UCLA Herb Alpert School of Music, University of California Los Angeles, Los Angeles, CA, USA
- Martin M Monti
- Department of Psychology, University of California Los Angeles, Los Angeles, CA, USA
37
Heaton P, Tsang WF, Jakubowski K, Mullensiefen D, Allen R. Discriminating autism and language impairment and specific language impairment through acuity of musical imagery. RESEARCH IN DEVELOPMENTAL DISABILITIES 2018; 80:52-63. [PMID: 29913330 DOI: 10.1016/j.ridd.2018.06.001] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2018] [Revised: 06/05/2018] [Accepted: 06/07/2018] [Indexed: 06/08/2023]
Abstract
Deficits in auditory short-term memory have been widely reported in children with Specific Language Impairment (SLI), and recent evidence suggests that children with Autism Spectrum Disorder and co-morbid language impairment (ALI) experience similar difficulties. Music, like language, relies on auditory memory, and the aim of the study was to extend work investigating the impact of auditory short-term memory impairments to musical perception in children with neurodevelopmental disorders. Groups of children with SLI and ALI were matched on chronological age (CA), receptive vocabulary, non-verbal intelligence, and digit span, and compared with CA-matched typically developing (TD) controls on tests of pitch and temporal acuity within a voluntary musical imagery paradigm. The SLI participants performed at significantly lower levels than the ALI and TD groups on both conditions of the task, and their musical imagery and digit span scores were positively correlated. In contrast, ALI participants performed as well as TD controls on the tempo condition and better than TD controls on the pitch condition of the task. Whilst auditory short-term memory and receptive vocabulary impairments were similar across ALI and SLI groups, these were not associated with a deficit in voluntary musical imagery performance in the ALI group.
Affiliation(s)
- Pamela Heaton
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
- Wai Fung Tsang
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
- Kelly Jakubowski
- Music, University of Durham, Palace Green, Durham, DH1 3RL, United Kingdom
- Daniel Mullensiefen
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
- Rory Allen
- Psychology, Goldsmiths, University of London, New Cross, London, SE14 6NW, United Kingdom
38
Syntactic processing in music and language: Effects of interrupting auditory streams with alternating timbres. Int J Psychophysiol 2018; 129:31-40. [DOI: 10.1016/j.ijpsycho.2018.05.003] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2017] [Revised: 05/05/2018] [Accepted: 05/07/2018] [Indexed: 02/08/2023]
39
Sun Y, Lu X, Ho HT, Johnson BW, Sammler D, Thompson WF. Syntactic processing in music and language: Parallel abnormalities observed in congenital amusia. NEUROIMAGE-CLINICAL 2018; 19:640-651. [PMID: 30013922 PMCID: PMC6022360 DOI: 10.1016/j.nicl.2018.05.032] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/21/2017] [Revised: 05/22/2018] [Accepted: 05/23/2018] [Indexed: 11/23/2022]
Abstract
Evidence is accumulating that similar cognitive resources are engaged to process syntactic structure in music and language. Congenital amusia – a neurodevelopmental disorder that primarily affects music perception, including musical syntax – provides a special opportunity to understand the nature of this overlap. Using electroencephalography (EEG), we investigated whether individuals with congenital amusia have parallel deficits in processing language syntax in comparison to control participants. Twelve amusic participants (eight females) and 12 control participants (eight females) were presented with melodies in one session and spoken sentences in another session, both of which included syntactically congruent and incongruent stimuli. They were asked to complete a music-related and a language-related task that were irrelevant to the syntactic incongruities. Our results show that amusic participants exhibit impairments in the early stages of both music- and language-syntactic processing. Specifically, we found that two event-related potential (ERP) components – namely the Early Right Anterior Negativity (ERAN) and the Left Anterior Negativity (LAN), associated with music- and language-syntactic processing respectively – were absent in the amusia group. However, at later processing stages, amusics showed brain responses similar to those of controls to syntactic incongruities in both music and language. This was reflected in a normal N5 in response to melodies and a normal P600 to spoken sentences. Notably, amusics' parallel music- and language-syntactic impairments were not accompanied by deficits in semantic processing (indexed by a normal N400 in response to semantic incongruities). Together, our findings provide further evidence for shared music and language syntactic processing, particularly at early stages of processing.
- Amusics displayed abnormal brain responses to music-syntactic irregularities.
- They also exhibited abnormal brain responses to language-syntactic irregularities.
- These impairments affect an early stage of syntactic processing, not a later stage.
- Music and language involve similar cognitive mechanisms for processing syntax.
Affiliation(s)
- Yanan Sun
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Xuejing Lu
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia; CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Hao Tam Ho
- Department of Translational Research on New Technologies in Medicine and Surgery, University of Pisa, Pisa 56126, Italy; School of Psychology, University of Sydney, New South Wales 2006, Australia
- Blake W Johnson
- Department of Cognitive Science, Macquarie University, New South Wales 2109, Australia; ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia
- Daniela Sammler
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- William Forde Thompson
- ARC Centre of Excellence in Cognition and its Disorders, New South Wales 2109, Australia; Department of Psychology, Macquarie University, New South Wales 2109, Australia
40
Abstract
Over tens of thousands of years of human genetic and cultural evolution, many types and varieties of music and language have emerged; however, the fundamental components of each of these modes of communication seem to be common to all human cultures and social groups. In this brief review, rather than focusing on the development of different musical techniques and practices over time, the main issues addressed here concern: (i) when, and speculations as to why, modern Homo sapiens evolved musical behaviors, (ii) the evolutionary relationship between music and language, and (iii) why humans, perhaps unique among all living species, universally continue to possess two complementary but distinct communication streams. Did music exist before language, or vice versa, or was there a common precursor that in some way separated into two distinct yet still overlapping systems when cognitively modern H. sapiens evolved? A number of theories put forward to explain the origin and persistent universality of music are considered, but emphasis is given, supported by recent neuroimaging, physiological, and psychological findings, to the role that music can play in promoting trust, altruistic behavior, social bonding, and cooperation within groups of culturally compatible but not necessarily genetically related humans. It is argued that, early in our history, the unique socializing and harmonizing power of music acted as an essential counterweight to the new and evolving sense of self, to an emerging sense of individuality and mortality that was linked to the development of an advanced cognitive capacity and articulate language capability.
Affiliation(s)
- Alan R Harvey
- School of Human Sciences, The University of Western Australia, Perron Institute for Neurological and Translational Science, Perth, WA, Australia
41
The right inferior frontal gyrus processes nested non-local dependencies in music. Sci Rep 2018; 8:3822. [PMID: 29491454 PMCID: PMC5830458 DOI: 10.1038/s41598-018-22144-9] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2017] [Accepted: 02/16/2018] [Indexed: 12/01/2022] Open
Abstract
Complex auditory sequences known as music have often been described as hierarchically structured. This permits the existence of non-local dependencies, which relate elements of a sequence beyond their temporal sequential order. Previous studies in music have reported differential activity in the inferior frontal gyrus (IFG) when comparing regular and irregular chord transitions based on theories in Western tonal harmony. However, it is unclear if the observed activity reflects the interpretation of hierarchical structure, as the effects are confounded by local irregularity. Using functional magnetic resonance imaging (fMRI), we found that violations to non-local dependencies in nested sequences of three-tone musical motifs in musicians elicited increased activity in the right IFG. This is in contrast to similar studies in language, which typically report the left IFG in processing grammatical syntax. Effects of increasing auditory working memory demands are moreover reflected in distributed activity in frontal and parietal regions. Our study therefore demonstrates the role of the right IFG in processing non-local dependencies in music, and suggests that hierarchical processing in different cognitive domains relies on similar mechanisms that are subserved by domain-selective neuronal subpopulations.
42
Tanaka S, Kirino E. Dynamic Reconfiguration of the Supplementary Motor Area Network during Imagined Music Performance. Front Hum Neurosci 2017; 11:606. [PMID: 29311870 PMCID: PMC5732967 DOI: 10.3389/fnhum.2017.00606] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2017] [Accepted: 11/28/2017] [Indexed: 11/18/2022] Open
Abstract
The supplementary motor area (SMA) has been shown to be the center for motor planning and is active during music listening and performance. However, limited data exist on the role of the SMA in music. Music performance requires complex information processing in auditory, visual, spatial, emotional, and motor domains, and this information is integrated for the performance. We hypothesized that the SMA is engaged in multimodal integration of information, distributed across several regions of the brain to prepare for ongoing music performance. To test this hypothesis, functional networks involving the SMA were extracted from functional magnetic resonance imaging (fMRI) data that were acquired from musicians during imagined music performance and during the resting state. Compared with the resting condition, imagined music performance increased connectivity of the SMA with widespread regions in the brain including the sensorimotor cortices, parietal cortex, posterior temporal cortex, occipital cortex, and inferior and dorsolateral prefrontal cortex. Increased connectivity of the SMA with the dorsolateral prefrontal cortex suggests that the SMA is under cognitive control, while increased connectivity with the inferior prefrontal cortex suggests the involvement of syntax processing. Increased connectivity with the parietal cortex, posterior temporal cortex, and occipital cortex is likely for the integration of spatial, emotional, and visual information. Finally, increased connectivity with the sensorimotor cortices was potentially involved with the translation of thought planning into motor programs. Therefore, the reconfiguration of the SMA network observed in this study is considered to reflect the multimodal integration required for imagined and actual music performance. We propose that the SMA network construct “the internal representation of music performance” by integrating multimodal information required for the performance.
Affiliation(s)
- Shoji Tanaka
- Department of Information and Communication Sciences, Sophia University, Tokyo, Japan
- Eiji Kirino
- Department of Psychiatry, School of Medicine, Juntendo University, Tokyo, Japan; Department of Psychiatry, Juntendo Shizuoka Hospital, Shizuoka, Japan
43
Akrami H, Moghimi S. Culture Modulates the Brain Response to Harmonic Violations: An EEG Study on Hierarchical Syntactic Structure in Music. Front Hum Neurosci 2017; 11:591. [PMID: 29270118 PMCID: PMC5723651 DOI: 10.3389/fnhum.2017.00591] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2017] [Accepted: 11/21/2017] [Indexed: 11/21/2022] Open
Abstract
We investigated the role of culture in processing hierarchical syntactic structures in music. We examined whether violations of non-local dependencies manifest in event-related potentials (ERPs) for Western and Iranian excerpts by recording EEG while participants passively listened to sequences of modified/original excerpts. We also investigated oscillatory and synchronization properties of brain responses during processing of hierarchical structures. For the Western excerpt, subjective ratings of conclusiveness were marginally significant and the difference in the ERP components fell short of significance. However, ERP and behavioral results showed that while listening to culturally familiar music, subjects comprehended whether or not the hierarchical syntactic structure was fulfilled. Irregularities in the hierarchical structures of the Iranian excerpt elicited an early negativity in the central regions bilaterally, followed by two later negativities in the 450–700 ms and 750–950 ms windows; the latter manifested throughout the scalp. Moreover, violations of hierarchical structure in the Iranian excerpt were associated with (i) an early decrease in long-range alpha phase synchronization, (ii) an early increase in oscillatory activity in the beta band over the central areas, and (iii) a late decrease in theta-band phase synchrony between left anterior and right posterior regions. Results suggest that rhythmic structures and melodic fragments representative of Iranian music created a familiar context in which recognition of complex non-local syntactic structures was feasible for Iranian listeners. Analysis of neural responses to the Iranian excerpt indicated neural mechanisms for processing hierarchical syntactic structures in music at different levels of cortical integration.
Affiliation(s)
- Haleh Akrami
- Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran
- Sahar Moghimi
- Electrical Engineering Department, Ferdowsi University of Mashhad, Mashhad, Iran; Rayan Center for Neuroscience and Behavior, Ferdowsi University of Mashhad, Mashhad, Iran
44
Sihvonen AJ, Ripollés P, Rodríguez-Fornells A, Soinila S, Särkämö T. Revisiting the Neural Basis of Acquired Amusia: Lesion Patterns and Structural Changes Underlying Amusia Recovery. Front Neurosci 2017; 11:426. [PMID: 28790885 PMCID: PMC5524924 DOI: 10.3389/fnins.2017.00426] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2017] [Accepted: 07/11/2017] [Indexed: 01/25/2023] Open
Abstract
Although acquired amusia is a common deficit following stroke, relatively little is still known about its precise neural basis, let alone its recovery. Recently, we performed a voxel-based lesion-symptom mapping (VLSM) and morphometry (VBM) study which revealed a right-lateralized lesion pattern, and longitudinal gray matter volume (GMV) and white matter volume (WMV) changes, that were specifically associated with acquired amusia after stroke. In the present study, using a larger sample of stroke patients (N = 90), we aimed to replicate and extend the previous structural findings as well as to determine the lesion patterns and volumetric changes associated with amusia recovery. Structural MRIs were acquired at the acute and 6-month post-stroke stages. Music perception was behaviorally assessed at the acute and 3-month post-stroke stages using the Scale and Rhythm subtests of the Montreal Battery of Evaluation of Amusia (MBEA). Using these scores, the patients were classified as non-amusic, recovered amusic, and non-recovered amusic. The results of the acute-stage VLSM analyses and the longitudinal VBM analyses converged to show that more severe and persistent (non-recovered) amusia was associated with an extensive pattern of lesions and GMV/WMV decrease in right temporal, frontal, parietal, striatal, and limbic areas. In contrast, less severe and transient (recovered) amusia was linked to lesions specifically in the left inferior frontal gyrus as well as to a GMV decrease in right parietal areas. Separate continuous analyses of MBEA Scale and Rhythm scores showed an extensively overlapping lesion pattern in right temporal, frontal, and subcortical structures as well as in the right insula. Interestingly, recovered pitch amusia was related to smaller GMV decreases in the temporoparietal junction, whereas recovered rhythm amusia was associated with smaller GMV decreases in the inferior temporal pole. Overall, the results provide a more comprehensive picture of the lesions and longitudinal structural changes associated with different recovery trajectories of acquired amusia.
Affiliation(s)
- Aleksi J Sihvonen
- Faculty of Medicine, University of Turku, Turku, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Pablo Ripollés
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, Barcelona, Spain; Poeppel Lab, Department of Psychology, New York University, New York, NY, United States
- Antoni Rodríguez-Fornells
- Cognition and Brain Plasticity Group, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Department of Cognition, Development and Education Psychology, University of Barcelona, Barcelona, Spain; Catalan Institution for Research and Advanced Studies, Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Seppo Soinila
- Division of Clinical Neurosciences, Turku University Hospital and Department of Neurology, University of Turku, Turku, Finland
- Teppo Särkämö
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
45
Yu M, Xu M, Li X, Chen Z, Song Y, Liu J. The shared neural basis of music and language. Neuroscience 2017; 357:208-219. [PMID: 28602921 DOI: 10.1016/j.neuroscience.2017.06.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2016] [Revised: 05/27/2017] [Accepted: 06/01/2017] [Indexed: 02/03/2023]
Abstract
Human musical ability is proposed to play a key phylogenetic role in the evolution of language, and the similarity of hierarchical structure in music and language has led to considerable speculation about their shared mechanisms. While behavioral and electrophysiological studies have revealed associations between musical and linguistic abilities, results from functional magnetic resonance imaging (fMRI) studies on their relations are contradictory, possibly because these studies usually treat music or language as single entities without breaking them down into their components. Here, we examined the relations between different components of music (i.e., melodic and rhythmic analysis) and language (i.e., semantic and phonological processing) using both behavioral tests and resting-state fMRI. Behaviorally, we found that individuals with music training experience were better at semantic processing, but not at phonological processing, than those without training. Further correlation analyses showed that semantic processing of language was related to melodic, but not rhythmic, analysis of music. Neurally, we found that performance in both semantic processing and melodic analysis was correlated with spontaneous brain activity in the bilateral precentral gyrus (PCG) and superior temporal plane at the regional level, and with the resting-state functional connectivity of the left PCG with the left supramarginal gyrus and left superior temporal gyrus at the network level. Together, our study revealed a shared spontaneous neural basis of music and language based on the behavioral link between melodic analysis and semantic processing, which possibly relied on a common mechanism of automatic auditory-motor integration.
Affiliation(s)
- Mengxia Yu
- School of Psychology, Beijing Normal University, Beijing 100875, China
- Miao Xu
- School of Psychology, Beijing Normal University, Beijing 100875, China
- Xueting Li
- Department of Psychology, Renmin University of China, Beijing 100872, China
- Zhencai Chen
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yiying Song
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Jia Liu
- School of Psychology, Beijing Normal University, Beijing 100875, China
46
Slevc LR, Faroqi-Shah Y, Saxena S, Okada BM. Preserved processing of musical structure in a person with agrammatic aphasia. Neurocase 2016; 22:505-511. [PMID: 27112951 DOI: 10.1080/13554794.2016.1177090] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
Evidence for shared processing of structure (or syntax) in language and in music conflicts with neuropsychological dissociations between the two. However, while harmonic structural processing can be impaired in patients with spared linguistic syntactic abilities (Peretz, I. (1993). Auditory atonalia for melodies. Cognitive Neuropsychology, 10, 21-56. doi:10.1080/02643299308253455), evidence for the opposite dissociation (preserved harmonic processing despite agrammatism) is largely lacking. Here, we report one such case: HV, a former musician with Broca's aphasia and agrammatic speech, was impaired in making linguistic, but not musical, acceptability judgments. Similarly, she showed no sensitivity to linguistic structure, but normal sensitivity to musical structure, in implicit priming tasks. To our knowledge, this is the first non-anecdotal report of a patient with agrammatic aphasia demonstrating preserved harmonic processing abilities, supporting claims that aspects of musical and linguistic structure rely on distinct neural mechanisms.
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, Maryland, USA
- Yasmeen Faroqi-Shah
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA
- Sadhvi Saxena
- Department of Psychology, University of Maryland, College Park, Maryland, USA
- Brooke M Okada
- Department of Psychology, University of Maryland, College Park, Maryland, USA
47
Tremblay P, Dick AS. Broca and Wernicke are dead, or moving past the classic model of language neurobiology. BRAIN AND LANGUAGE 2016; 162:60-71. [PMID: 27584714 DOI: 10.1016/j.bandl.2016.08.004] [Citation(s) in RCA: 224] [Impact Index Per Article: 28.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/07/2016] [Revised: 06/20/2016] [Accepted: 08/16/2016] [Indexed: 05/04/2023]
Abstract
With the advancement of cognitive neuroscience and neuropsychological research, the field of language neurobiology is at a crossroads with respect to its framing theories. The central thesis of this article is that the major historical framing model, the Classic "Wernicke-Lichtheim-Geschwind" model, and its associated terminology, is no longer adequate for contemporary investigations into the neurobiology of language. We argue that the Classic model (1) is based on an outdated brain anatomy; (2) does not adequately represent the distributed connectivity relevant for language; (3) offers a modular and "language centric" perspective; and (4) focuses on cortical structures, for the most part leaving out subcortical regions and relevant connections. To make our case, we discuss the issue of anatomical specificity with a focus on the contemporary usage of the terms "Broca's and Wernicke's area", including results of a survey that was conducted within the language neurobiology community. We demonstrate that there is no consistent anatomical definition of "Broca's and Wernicke's Areas", and propose to replace these terms with more precise anatomical definitions. We illustrate the distributed nature of the language connectome, which extends far beyond the single-pathway notion of arcuate fasciculus connectivity established in Geschwind's version of the Classic Model. By illustrating the definitional confusion surrounding "Broca's and Wernicke's areas", and by illustrating the difficulty of integrating the emerging literature on perisylvian white matter connectivity into this model, we hope to expose the limits of the model, argue for its obsolescence, and suggest a path forward in defining a replacement.
Affiliation(s)
- Pascale Tremblay
- Département de Réadaptation, Faculté de Médecine, Université Laval, Québec City, QC, Canada; Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec, Québec City, QC, Canada
48
Zioga I, Di Bernardi Luft C, Bhattacharya J. Musical training shapes neural responses to melodic and prosodic expectation. Brain Res 2016; 1650:267-282. [PMID: 27622645 PMCID: PMC5069926 DOI: 10.1016/j.brainres.2016.09.015] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Revised: 09/01/2016] [Accepted: 09/09/2016] [Indexed: 11/15/2022]
Abstract
Current research on music processing and syntax or semantics in language suggests that music and language share partially overlapping neural resources. Pitch also constitutes a common denominator, forming melody in music and prosody in language. Further, pitch perception is modulated by musical training. The present study investigated how music and language interact on the pitch dimension and whether musical training plays a role in this interaction. For this purpose, we used melodies ending on an expected or unexpected note (melodic expectancy being estimated by a computational model) paired with prosodic utterances which were either expected (statements with falling pitch) or relatively unexpected (questions with rising pitch). Participants' (22 musicians, 20 nonmusicians) ERPs and behavioural responses in a statement/question discrimination task were recorded. Participants were faster for simultaneous expectancy violations in the melodic and linguistic stimuli. Further, musicians performed better than nonmusicians, which may be related to their increased pitch tracking ability. At the neural level, prosodic violations elicited a front-central positive ERP around 150 ms after the onset of the last word/note, while musicians presented a reduced P600 in response to strong incongruities (questions on low-probability notes). Critically, musicians' P800 amplitudes were proportional to their level of musical training, suggesting that expertise might shape the pitch processing of language. The beneficial aspect of expertise could be attributed to its strengthening effect on general executive functions. These findings offer novel contributions to our understanding of shared higher-order mechanisms between music and language processing on the pitch dimension, and further demonstrate a potential modulation by musical expertise.
- Melodic expectancy influences the processing of prosodic expectancy.
- Musical expertise modulates pitch processing in music and language.
- Musicians have a more refined response to pitch.
- Musicians' neural responses are proportional to their level of musical expertise.
- Possible association between the P200 neural component and behavioural facilitation.
Affiliation(s)
- Ioanna Zioga
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom
- Caroline Di Bernardi Luft
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom; School of Biological and Chemical Sciences, Queen Mary, University of London, Mile End Rd, London E1 4NS, United Kingdom
- Joydeep Bhattacharya
- Department of Psychology, Goldsmiths, University of London, New Cross, London SE14 6NW, United Kingdom
49
Patel AD, Morgan E. Exploring Cognitive Relations Between Prediction in Language and Music. Cogn Sci 2016; 41 Suppl 2:303-320. [DOI: 10.1111/cogs.12411] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/28/2015] [Revised: 06/05/2016] [Accepted: 06/14/2016] [Indexed: 02/04/2023]
Affiliation(s)
- Aniruddh D. Patel
- Department of Psychology, Tufts University
- Azrieli Program in Brain, Mind, & Consciousness, Canadian Institute for Advanced Research (CIFAR), Toronto
50
Using music to study the evolution of cognitive mechanisms relevant to language. Psychon Bull Rev 2016; 24:177-180. [DOI: 10.3758/s13423-016-1088-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]