1
Sun M, Xing W, Yu W, Slevc LR, Li W. ERP evidence for cross-domain prosodic priming from music to speech. Brain Lang 2024; 254:105439. [PMID: 38945108] [DOI: 10.1016/j.bandl.2024.105439] [Citation(s) in RCA: 0] [Received: 03/15/2023] [Revised: 06/19/2024] [Accepted: 06/25/2024] [Indexed: 07/02/2024]
Abstract
Considerable work has investigated similarities between the processing of music and language, but it remains unclear whether typical, genuine music can influence speech processing via cross-domain priming. To investigate this, we measured ERPs to musical phrases and to syntactically ambiguous Chinese phrases that could be disambiguated by early or late prosodic boundaries. Musical primes also had either early or late prosodic boundaries, and we asked participants to judge whether the prime and target had the same structure. Within musical phrases, prosodic boundaries elicited reduced N1 and enhanced P2 components (relative to the no-boundary condition), and musical phrases with late boundaries exhibited a closure positive shift (CPS) component. More importantly, primed target phrases elicited a smaller CPS compared to non-primed phrases, regardless of the type of ambiguous phrase. These results suggest that prosodic priming can occur across domains, supporting the existence of common neural processes in music and language processing.
Affiliation(s)
- Mingjiang Sun
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Weijing Xing
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenjing Yu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, MD, USA.
- Weijun Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Huanghe Road 850, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China.
2
Bellmann OT, Asano R. Neural correlates of musical timbre: an ALE meta-analysis of neuroimaging data. Front Neurosci 2024; 18:1373232. [PMID: 38952924] [PMCID: PMC11215185] [DOI: 10.3389/fnins.2024.1373232] [Citation(s) in RCA: 0] [Received: 01/19/2024] [Accepted: 05/29/2024] [Indexed: 07/03/2024]
Abstract
Timbre is a central aspect of music: it allows listeners to identify musical sounds, conveys musical emotion, enables the recognition of actions, and is an important structuring property of music. The former functions are known to be implemented in a ventral auditory stream during musical timbre processing. The latter functions are commonly attributed to areas in a dorsal auditory processing stream in other musical domains, but the dorsal stream's involvement in musical timbre processing is so far unknown. To investigate whether musical timbre processing involves both dorsal and ventral auditory pathways, we carried out an activation likelihood estimation (ALE) meta-analysis of 18 experiments from 17 published neuroimaging studies on musical timbre perception. We identified consistent activations in Brodmann areas (BA) 41, 42, and 22 in the bilateral transverse temporal gyri, posterior superior temporal gyri, and planum temporale; in BA 40 of the bilateral inferior parietal lobe; in BA 13 in the bilateral posterior insula; and in BA 13 and 22 in the right anterior insula and superior temporal gyrus. The vast majority of the identified regions are associated with the dorsal and ventral auditory processing streams. We therefore propose to frame the processing of musical timbre in a dual-stream model. Moreover, the regions activated in processing timbre show similarities to the brain regions involved in processing several other fundamental aspects of music, indicating possible shared neural bases of musical timbre and other musical domains.
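The ALE method pools peak coordinates (foci) across experiments by modeling each focus as a Gaussian spatial probability of activation and combining the per-experiment modeled-activation maps as a union of probabilities. The following one-dimensional sketch illustrates only that core computation; the grid, smoothing width, and foci are invented for illustration, and real ALE analyses operate on 3D brain volumes with empirically derived kernels (e.g., via the NiMARE toolbox), plus a permutation-based significance test not shown here.

```python
import math

def modeled_activation(grid, foci, sigma):
    """One experiment's modeled activation (MA) map: at each grid point,
    the probability that at least one of the experiment's foci is active
    there, using Gaussian spatial uncertainty around each focus."""
    ma = []
    for x in grid:
        p_none = 1.0
        for f in foci:
            p = math.exp(-((x - f) ** 2) / (2 * sigma ** 2))
            p_none *= 1.0 - p
        ma.append(1.0 - p_none)
    return ma

def ale_map(grid, experiments, sigma=2.0):
    """ALE statistic: union of modeled activation probabilities across
    experiments, computed independently at each grid point."""
    maps = [modeled_activation(grid, foci, sigma) for foci in experiments]
    return [1.0 - math.prod(1.0 - m[i] for m in maps) for i in range(len(grid))]

# Three hypothetical experiments reporting peaks near coordinate 10,
# plus one stray peak near 30: convergence around 10 yields the highest
# ALE value, which is the "consistent activation" ALE is designed to find.
grid = list(range(41))
experiments = [[10.2], [10.8], [9.5, 30.4]]
scores = ale_map(grid, experiments)
peak = max(range(len(grid)), key=lambda i: scores[i])
print(peak)
```

The union formula (1 minus the product of non-activation probabilities) is what makes ALE reward spatial convergence across experiments rather than a single experiment's strong peak.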
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute for Musicology, University of Cologne, Cologne, Germany
3
Morita M, Nishikawa Y, Tokumasu Y. Human musical capacity and products should have been induced by the hominin-specific combination of several biosocial features: A three-phase scheme on socio-ecological, cognitive, and cultural evolution. Evol Anthropol 2024:e22031. [PMID: 38757853] [DOI: 10.1002/evan.22031] [Citation(s) in RCA: 0] [Received: 01/06/2024] [Revised: 04/14/2024] [Accepted: 04/26/2024] [Indexed: 05/18/2024]
Abstract
Various selection pressures have shaped human uniqueness, for instance, music. When and why did musical universality and diversity emerge? Our hypothesis is that "music" initially originated from manipulative calls with limited musical elements. Thereafter, vocalizations became more complex and flexible along with a greater degree of social learning. Finally, constructed musical instruments and the language faculty resulted in diverse and context-specific music. Music precursors correspond to vocal communication among nonhuman primates, songbirds, and cetaceans. To place this scenario in hominin history, a three-phase scheme for music evolution is presented herein. We emphasize (1) the evolution of sociality and life history in australopithecines, (2) the evolution of cognitive and learning abilities in early/middle Homo, and (3) cultural evolution, primarily in Homo sapiens. Human musical capacity and products should be due to the hominin-specific combination of several biosocial features, including bipedalism, stable pair bonding, alloparenting, expanded brain size, and sexual selection.
Affiliation(s)
- Masahito Morita
- Evolutionary Anthropology Lab, Department of Biological Sciences, The University of Tokyo, Tokyo, Japan
- Department of Health Sciences of Mind and Body, University of Human Arts and Sciences, Saitama, Japan
- Yuri Nishikawa
- Evolutionary Anthropology Lab, Department of Biological Sciences, The University of Tokyo, Tokyo, Japan
- Department of Molecular Life Science, Tokai University School of Medicine, Kanagawa, Japan
- Yudai Tokumasu
- Evolutionary Anthropology Lab, Department of Biological Sciences, The University of Tokyo, Tokyo, Japan
4
Zaatar MT, Alhakim K, Enayeh M, Tamer R. The transformative power of music: Insights into neuroplasticity, health, and disease. Brain Behav Immun Health 2024; 35:100716. [PMID: 38178844] [PMCID: PMC10765015] [DOI: 10.1016/j.bbih.2023.100716] [Citation(s) in RCA: 0] [Received: 10/07/2023] [Revised: 12/04/2023] [Accepted: 12/08/2023] [Indexed: 01/06/2024]
Abstract
Music is a universal language that can elicit profound emotional and cognitive responses. In this literature review, we explore the intricate relationship between music and the brain, from how it is decoded by the nervous system to its therapeutic potential in various disorders. Music engages a diverse network of brain regions and circuits, including sensory-motor processing, cognitive, memory, and emotional components. Music-induced brain network oscillations occur in specific frequency bands, and listening to one's preferred music can grant easier access to these brain functions. Moreover, music training can bring about structural and functional changes in the brain, and studies have shown its positive effects on social bonding, cognitive abilities, and language processing. We also discuss how music therapy can be used to retrain impaired brain circuits in different disorders. Understanding how music affects the brain can open up new avenues for music-based interventions in healthcare, education, and wellbeing.
Affiliation(s)
- Muriel T. Zaatar
- Department of Biological and Physical Sciences, American University in Dubai, Dubai, United Arab Emirates
5
Diveica V, Riedel MC, Salo T, Laird AR, Jackson RL, Binney RJ. Graded functional organization in the left inferior frontal gyrus: evidence from task-free and task-based functional connectivity. Cereb Cortex 2023; 33:11384-11399. [PMID: 37833772] [PMCID: PMC10690868] [DOI: 10.1093/cercor/bhad373] [Citation(s) in RCA: 0] [Received: 02/10/2023] [Revised: 08/17/2023] [Accepted: 09/18/2023] [Indexed: 10/15/2023]
Abstract
The left inferior frontal gyrus has been ascribed key roles in numerous cognitive domains, such as language and executive function. However, its functional organization is unclear. Possibilities include a singular domain-general function, or multiple functions that can be mapped onto distinct subregions. Furthermore, spatial transition in function may be either abrupt or graded. The present study explored the topographical organization of the left inferior frontal gyrus using a bimodal data-driven approach. We extracted functional connectivity gradients from (i) resting-state fMRI time-series and (ii) coactivation patterns derived meta-analytically from heterogenous sets of task data. We then sought to characterize the functional connectivity differences underpinning these gradients with seed-based resting-state functional connectivity, meta-analytic coactivation modeling and functional decoding analyses. Both analytic approaches converged on graded functional connectivity changes along 2 main organizational axes. An anterior-posterior gradient shifted from being preferentially associated with high-level control networks (anterior functional connectivity) to being more tightly coupled with perceptually driven networks (posterior). A second dorsal-ventral axis was characterized by higher connectivity with domain-general control networks on one hand (dorsal functional connectivity), and with the semantic network, on the other (ventral). These results provide novel insights into an overarching graded functional organization of the left inferior frontal gyrus that explains its role in multiple cognitive domains.
Affiliation(s)
- Veronica Diveica
- Department of Psychology & Cognitive Neuroscience Institute, Bangor University, Bangor, Wales LL57 2AS, United Kingdom
- Department of Neurology and Neurosurgery & Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Michael C Riedel
- Department of Physics, Florida International University, Miami, FL 33199, United States
- Taylor Salo
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, 19104, United States
- Angela R Laird
- Department of Physics, Florida International University, Miami, FL 33199, United States
- Rebecca L Jackson
- Department of Psychology & York Biomedical Research Institute, University of York, York, YO10 5DD, United Kingdom
- Richard J Binney
- Department of Psychology & Cognitive Neuroscience Institute, Bangor University, Bangor, Wales LL57 2AS, United Kingdom
6
McCarty MJ, Murphy E, Scherschligt X, Woolnough O, Morse CW, Snyder K, Mahon BZ, Tandon N. Intraoperative cortical localization of music and language reveals signatures of structural complexity in posterior temporal cortex. iScience 2023; 26:107223. [PMID: 37485361] [PMCID: PMC10362292] [DOI: 10.1016/j.isci.2023.107223] [Citation(s) in RCA: 0] [Received: 11/08/2022] [Revised: 06/01/2023] [Accepted: 06/22/2023] [Indexed: 07/25/2023]
Abstract
Language and music involve the productive combination of basic units into structures. It remains unclear whether brain regions sensitive to linguistic and musical structure are co-localized. We report an intraoperative awake craniotomy in which a left-hemispheric language-dominant professional musician underwent cortical stimulation mapping (CSM) and electrocorticography of music and language perception and production during repetition tasks. Musical sequences were melodic or amelodic, and differed in algorithmic compressibility (Lempel-Ziv complexity). Auditory recordings of sentences differed in syntactic complexity (single vs. multiple phrasal embeddings). CSM of posterior superior temporal gyrus (pSTG) disrupted music perception and production, along with speech production. pSTG and posterior middle temporal gyrus (pMTG) activated for language and music (broadband gamma; 70-150 Hz). pMTG activity was modulated by musical complexity, while pSTG activity was modulated by syntactic complexity. This points to shared resources for music and language comprehension, but distinct neural signatures for the processing of domain-specific structural features.
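The abstract above quantifies the structural complexity of musical sequences by their algorithmic compressibility (Lempel-Ziv complexity): repetitive sequences parse into few distinct phrases, while non-repeating ones parse into many. As a rough illustration of the idea, here is an LZ78-style incremental phrase count in Python; this is a generic sketch of the Lempel-Ziv family, not necessarily the exact variant the authors computed, and the pitch strings are invented examples.

```python
def lz_phrase_count(seq):
    """Count phrases in an incremental (LZ78-style) parsing of seq.

    Each phrase is the shortest prefix of the remaining input not yet
    seen as a phrase; fewer phrases means a more compressible, hence
    structurally simpler, sequence.
    """
    phrases = set()
    current = ""
    count = 0
    for symbol in seq:
        current += symbol
        if current not in phrases:
            phrases.add(current)
            count += 1
            current = ""
    if current:  # trailing incomplete phrase
        count += 1
    return count

# A repetitive (melodically predictable) pitch sequence parses into
# fewer phrases than a non-repeating sequence of the same length.
repetitive = "CDECDECDECDE"
varied = "CDEFGABDFACE"
print(lz_phrase_count(repetitive), lz_phrase_count(varied))
```

On these examples the repetitive sequence yields a lower phrase count than the varied one, which is the sense in which Lempel-Ziv complexity separates melodic from amelodic stimuli.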
Affiliation(s)
- Meredith J. McCarty
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Elliot Murphy
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Xavier Scherschligt
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Oscar Woolnough
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Cale W. Morse
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Kathryn Snyder
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Bradford Z. Mahon
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- Nitin Tandon
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, USA
- Memorial Hermann Hospital, Texas Medical Center, Houston, TX 77030, USA
7
Chen X, Affourtit J, Ryskin R, Regev TI, Norman-Haignere S, Jouravlev O, Malik-Moraleda S, Kean H, Varley R, Fedorenko E. The human language system, including its inferior frontal component in "Broca's area," does not support music perception. Cereb Cortex 2023; 33:7904-7929. [PMID: 37005063] [PMCID: PMC10505454] [DOI: 10.1093/cercor/bhad087] [Citation(s) in RCA: 10] [Received: 04/12/2022] [Revised: 01/02/2023] [Accepted: 01/03/2023] [Indexed: 04/04/2023]
Abstract
Language and music are two human-unique capacities whose relationship remains debated. Some have argued for overlap in processing mechanisms, especially for structure processing. Such claims often concern the inferior frontal component of the language system located within "Broca's area." However, others have failed to find overlap. Using a robust individual-subject fMRI approach, we examined the responses of language brain regions to music stimuli, and probed the musical abilities of individuals with severe aphasia. Across 4 experiments, we obtained a clear answer: music perception does not engage the language system, and judgments about music structure are possible even in the presence of severe damage to the language network. In particular, the language regions' responses to music are generally low, often below the fixation baseline, and never exceed responses elicited by nonmusic auditory conditions, like animal sounds. Furthermore, the language regions are not sensitive to music structure: they show low responses to both intact and structure-scrambled music, and to melodies with vs. without structural violations. Finally, in line with past patient investigations, individuals with aphasia, who cannot judge sentence grammaticality, perform well on melody well-formedness judgments. Thus, the mechanisms that process structure in language do not appear to process music, including music syntax.
Affiliation(s)
- Xuanyi Chen
- Department of Cognitive Sciences, Rice University, TX 77005, United States
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Josef Affourtit
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rachel Ryskin
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive & Information Sciences, University of California, Merced, Merced, CA 95343, United States
- Tamar I Regev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Samuel Norman-Haignere
- Department of Biostatistics & Computational Biology, University of Rochester Medical Center, Rochester, NY, United States
- Department of Neuroscience, University of Rochester Medical Center, Rochester, NY, United States
- Department of Biomedical Engineering, University of Rochester, Rochester, NY, United States
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, United States
- Olessia Jouravlev
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Department of Cognitive Science, Carleton University, Ottawa, ON, Canada
- Saima Malik-Moraleda
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
- Hope Kean
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- Rosemary Varley
- Psychology & Language Sciences, UCL, London, WC1N 1PF, United Kingdom
- Evelina Fedorenko
- Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, United States
- McGovern Institute for Brain Research, MIT, Cambridge, MA 02139, United States
- The Program in Speech and Hearing Bioscience and Technology, Harvard University, Cambridge, MA 02138, United States
8
Liu J, Hilton CB, Bergelson E, Mehr SA. Language experience predicts music processing in a half-million speakers of fifty-four languages. Curr Biol 2023; 33:1916-1925.e4. [PMID: 37105166] [PMCID: PMC10306420] [DOI: 10.1016/j.cub.2023.03.067] [Citation(s) in RCA: 0] [Received: 10/18/2021] [Revised: 02/08/2023] [Accepted: 03/23/2023] [Indexed: 04/29/2023]
Abstract
Tonal languages differ from other languages in their use of pitch (tones) to distinguish words. Lifelong experience speaking and hearing tonal languages has been argued to shape auditory processing in ways that generalize beyond the perception of linguistic pitch to the perception of pitch in other domains like music. We conducted a meta-analysis of prior studies testing this idea, finding moderate evidence supporting it. But prior studies were limited by mostly small sample sizes representing a small number of languages and countries, making it challenging to disentangle the effects of linguistic experience from variability in music training, cultural differences, and other potential confounds. To address these issues, we used web-based citizen science to assess music perception skill on a global scale in 34,034 native speakers of 19 tonal languages (e.g., Mandarin, Yoruba). We compared their performance to 459,066 native speakers of other languages, including 6 pitch-accented (e.g., Japanese) and 29 non-tonal languages (e.g., Hungarian). Whether or not participants had taken music lessons, native speakers of all 19 tonal languages had an improved ability to discriminate musical melodies on average, relative to speakers of non-tonal languages. But this improvement came with a trade-off: tonal language speakers were also worse at processing the musical beat. The results, which held across native speakers of many diverse languages and were robust to geographic and demographic variation, demonstrate that linguistic experience shapes music perception, with implications for relations between music, language, and culture in the human mind.
Affiliation(s)
- Jingxuan Liu
- Columbia Business School, Columbia University, 665 W 130th Street, New York, NY 10027, USA; Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA.
- Courtney B Hilton
- Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand.
- Elika Bergelson
- Department of Psychology & Neuroscience, Duke University, 417 Chapel Drive, Durham, NC 27708, USA
- Samuel A Mehr
- Yale Child Study Center, Yale University, 300 George Street #900, New Haven, CT 06511, USA; School of Psychology, University of Auckland, 23 Symonds Street, Auckland 1010, New Zealand.
9
Nitin R, Gustavson DE, Aaron AS, Boorom OA, Bush CT, Wiens N, Vaughan C, Persici V, Blain SD, Soman U, Hambrick DZ, Camarata SM, McAuley JD, Gordon RL. Exploring individual differences in musical rhythm and grammar skills in school-aged children with typically developing language. Sci Rep 2023; 13:2201. [PMID: 36750727] [PMCID: PMC9905575] [DOI: 10.1038/s41598-022-21902-0] [Citation(s) in RCA: 5] [Received: 01/03/2022] [Accepted: 10/05/2022] [Indexed: 02/09/2023]
Abstract
A growing number of studies have shown a connection between rhythmic processing and language skill. It has been proposed that domain-general rhythm abilities might help children to tap into the rhythm of speech (prosody), cueing them to prosodic markers of grammatical (syntactic) information during language acquisition, thus underlying the observed correlations between rhythm and language. Working memory processes common to task demands for musical rhythm discrimination and spoken language paradigms are another possible source of individual variance observed in musical rhythm and language abilities. To investigate the nature of the relationship between musical rhythm and expressive grammar skills, we adopted an individual differences approach in N = 132 elementary school-aged children (ages 5-7) with typical language development, and investigated prosodic perception and working memory skills as possible mediators. Aligning with the literature, musical rhythm was correlated with expressive grammar performance (r = 0.41, p < 0.001). Moreover, musical rhythm predicted mastery of complex syntax items (r = 0.26, p = 0.003), suggesting a privileged role of hierarchical processing shared between musical rhythm processing and children's acquisition of complex syntactic structures. These relationships between rhythm and grammatical skills were not mediated by prosodic perception, working memory, or non-verbal IQ; instead, we uncovered a robust direct effect of musical rhythm perception on grammatical task performance. Future work should focus on possible biological endophenotypes and genetic influences underlying this relationship.
Affiliation(s)
- Rachana Nitin
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA.
- Daniel E Gustavson
- Department of Medicine, Division of Genetic Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Allison S Aaron
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA, USA
- Olivia A Boorom
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, KS, USA
- Catherine T Bush
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Natalie Wiens
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Ascension Via Christi St Teresa Hospital, Wichita, KS, USA
- Chloe Vaughan
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Valentina Persici
- Department of Human Sciences, University of Verona, Verona, Italy
- Department of Psychology, Università degli Studi di Milano - Bicocca, Milan, Italy
- Department of Psychiatry, University of Michigan-Ann Arbor, Ann Arbor, MI, USA
- Scott D Blain
- Department of Psychiatry, University of Michigan-Ann Arbor, Ann Arbor, MI, USA
- Uma Soman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Communication Disorders and Deaf Education, Fontbonne University, St. Louis, MO, USA
- David Z Hambrick
- Department of Psychology, Michigan State University, East Lansing, MI, USA
- Stephen M Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- J Devin McAuley
- Department of Psychology, Michigan State University, East Lansing, MI, USA
- Reyna L Gordon
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA.
- Department of Otolaryngology - Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA.
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA.
- Department of Psychology, Vanderbilt University, Nashville, TN, USA.
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA.
10
Diveica V, Riedel MC, Salo T, Laird AR, Jackson RL, Binney RJ. Graded functional organisation in the left inferior frontal gyrus: evidence from task-free and task-based functional connectivity. bioRxiv [Preprint] 2023:2023.02.02.526818. [PMID: 36778322] [PMCID: PMC9915604] [DOI: 10.1101/2023.02.02.526818] [Citation(s) in RCA: 0] [Indexed: 02/05/2023]
Abstract
The left inferior frontal gyrus (LIFG) has been ascribed key roles in numerous cognitive domains, including language, executive function and social cognition. However, its functional organisation, and how the specific areas implicated in these cognitive domains relate to each other, is unclear. Possibilities include that the LIFG underpins a domain-general function or, alternatively, that it is characterized by functional differentiation, which might occur in either a discrete or a graded pattern. The aim of the present study was to explore the topographical organisation of the LIFG using a bimodal data-driven approach. To this end, we extracted functional connectivity (FC) gradients from 1) the resting-state fMRI time-series of 150 participants (77 female), and 2) patterns of co-activation derived meta-analytically from task data across a diverse set of cognitive domains. We then sought to characterize the FC differences driving these gradients with seed-based resting-state FC and meta-analytic co-activation modelling analyses. Both analytic approaches converged on an FC profile that shifted in a graded fashion along two main organisational axes. An anterior-posterior gradient shifted from being preferentially associated with high-level control networks (anterior LIFG) to being more tightly coupled with perceptually-driven networks (posterior). A second dorsal-ventral axis was characterized by higher connectivity with domain-general control networks on one hand (dorsal LIFG), and with the semantic network, on the other (ventral). These results provide novel insights into a graded functional organisation of the LIFG underpinning both task-free and task-constrained mental states, and suggest that the LIFG is an interface between distinct large-scale functional networks.
Affiliation(s)
- Veronica Diveica
- Cognitive Neuroscience Institute, Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Wales, UK
- Michael C. Riedel
- Department of Physics, Florida International University, Miami, FL, USA
- Taylor Salo
- Department of Psychiatry, University of Pennsylvania, Philadelphia, PA, USA
- Angela R. Laird
- Department of Physics, Florida International University, Miami, FL, USA
- Rebecca L. Jackson
- Department of Psychology & York Biomedical Research Institute, University of York, UK
- Richard J. Binney
- Cognitive Neuroscience Institute, Department of Psychology, School of Human and Behavioural Sciences, Bangor University, Wales, UK
11
Dedhe AM, Clatterbuck H, Piantadosi ST, Cantlon JF. Origins of Hierarchical Logical Reasoning. Cogn Sci 2023; 47:e13250. [PMID: 36739520] [PMCID: PMC11057913] [DOI: 10.1111/cogs.13250] [Citation(s) in RCA: 0] [Received: 09/30/2022] [Revised: 12/21/2022] [Accepted: 01/06/2023] [Indexed: 02/06/2023]
Abstract
Hierarchical cognitive mechanisms underlie sophisticated behaviors, including language, music, mathematics, tool-use, and theory of mind. The origins of hierarchical logical reasoning have long been, and continue to be, an important puzzle for cognitive science. Prior approaches to hierarchical logical reasoning have often failed to distinguish between observable hierarchical behavior and unobservable hierarchical cognitive mechanisms. Furthermore, past research has been largely methodologically restricted to passive recognition tasks as compared to active generation tasks that are stronger tests of hierarchical rules. We argue that it is necessary to implement learning studies in humans, non-human species, and machines that are analyzed with formal models comparing the contribution of different cognitive mechanisms implicated in the generation of hierarchical behavior. These studies are critical to advance theories in the domains of recursion, rule-learning, symbolic reasoning, and the potentially uniquely human cognitive origins of hierarchical logical reasoning.
Affiliation(s)
- Abhishek M. Dedhe
- Department of Psychology, Carnegie Mellon University; Center for the Neural Basis of Cognition, Carnegie Mellon University
- Jessica F. Cantlon
- Department of Psychology, Carnegie Mellon University; Center for the Neural Basis of Cognition, Carnegie Mellon University
12
Nayak S, Coleman PL, Ladányi E, Nitin R, Gustavson DE, Fisher SE, Magne CL, Gordon RL. The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:615-664. [PMID: 36742012 PMCID: PMC9893227 DOI: 10.1162/nol_a_00079] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/29/2021] [Accepted: 08/08/2022] [Indexed: 04/18/2023]
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy) in addition to some degree of overlapping neural endophenotypes, and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
Affiliation(s)
- Srishti Nayak
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA; Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Enikő Ladányi
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Linguistics, Potsdam University, Potsdam, Germany
- Rachana Nitin
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Daniel E. Gustavson
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA; Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Simon E. Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cyrille L. Magne
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA; PhD Program in Literacy Studies, Middle Tennessee State University, Murfreesboro, TN, USA
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA; Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, TN, USA; Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
13
Asano R, Boeckx C, Fujita K. Moving beyond domain-specific vs. domain-general options in cognitive neuroscience. Cortex 2022; 154:259-268. [DOI: 10.1016/j.cortex.2022.05.004] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2021] [Revised: 04/07/2022] [Accepted: 05/11/2022] [Indexed: 11/26/2022]
14
Exploring the Effects of Brain Stimulation on Musical Taste: tDCS on the Left Dorso-Lateral Prefrontal Cortex—A Null Result. Brain Sci 2022; 12:brainsci12040467. [PMID: 35447998 PMCID: PMC9030245 DOI: 10.3390/brainsci12040467] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2022] [Revised: 03/21/2022] [Accepted: 03/26/2022] [Indexed: 02/04/2023] Open
Abstract
Humans are the only species capable of experiencing pleasure from esthetic stimuli, such as art and music. Neuroimaging evidence suggests that the left dorsolateral prefrontal cortex (DLPFC) plays a critical role in esthetic judgments, both in music and in visual art. In the last decade, non-invasive brain stimulation (NIBS) has been increasingly employed to shed light on the causal role of different brain regions contributing to esthetic appreciation. In Experiment #1, musician (N = 20) and non-musician (N = 20) participants were required to judge musical stimuli in terms of “liking” and “emotions”. No significant differences between groups were found, although musicians were slower than non-musicians in both tasks, likely indicating a more analytic judgment due to musical expertise. Experiment #2 investigated the putative causal role of the left DLPFC in the esthetic appreciation of music by means of transcranial direct current stimulation (tDCS). Unlike previous findings in visual art, no significant effects of tDCS were found, suggesting that stimulating the left DLPFC is not sufficient to affect the esthetic appreciation of music, although this conclusion rests on negative evidence.
15
Williams JA, Margulis EH, Nastase SA, Chen J, Hasson U, Norman KA, Baldassano C. High-Order Areas and Auditory Cortex Both Represent the High-Level Event Structure of Music. J Cogn Neurosci 2022; 34:699-714. [PMID: 35015874 DOI: 10.1162/jocn_a_01815] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Recent fMRI studies of event segmentation have found that default mode regions represent high-level event structure during movie watching. In these regions, neural patterns are relatively stable during events and shift at event boundaries. Music, like narratives, contains hierarchical event structure (e.g., sections are composed of phrases). Here, we tested the hypothesis that brain activity patterns in default mode regions reflect the high-level event structure of music. We used fMRI to record brain activity from 25 participants (male and female) as they listened to a continuous playlist of 16 musical excerpts and additionally collected annotations for these excerpts by asking a separate group of participants to mark when meaningful changes occurred in each one. We then identified temporal boundaries between stable patterns of brain activity using a hidden Markov model and compared the location of the model boundaries to the location of the human annotations. We identified multiple brain regions with significant matches to the observer-identified boundaries, including auditory cortex, medial pFC, parietal cortex, and angular gyrus. From these results, we conclude that both higher-order and sensory areas contain information relating to the high-level event structure of music. Moreover, the higher-order areas in this study overlap with areas found in previous studies of event perception in movies and audio narratives, including regions in the default mode network.
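The core boundary-finding idea in this abstract (neural patterns are stable within events and shift at event boundaries) can be illustrated with a minimal sketch. This is not the paper's pipeline: synthetic data and a simple adjacent-timepoint correlation threshold stand in for the actual fMRI recordings and the hidden Markov model, and every dimension and threshold below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's data: 30 "timepoints" x 200 "voxels",
# three events, each with its own mean activity pattern (all synthetic).
T, V = 30, 200
event_patterns = rng.normal(size=(3, V))
labels = np.repeat([0, 1, 2], T // 3)          # event 0: t=0..9, event 1: t=10..19, ...
data = event_patterns[labels] + 0.1 * rng.normal(size=(T, V))

def detect_boundaries(patterns, threshold=0.5):
    """Flag timepoint t as a boundary when the spatial correlation
    between the patterns at t-1 and t drops below `threshold`."""
    return [
        t for t in range(1, len(patterns))
        if np.corrcoef(patterns[t - 1], patterns[t])[0, 1] < threshold
    ]

print(detect_boundaries(data))  # recovers the two event changes
```

With clean synthetic events the correlation drop recovers both boundaries directly; the HMM approach used in the study generalizes this by jointly estimating the number and placement of stable states rather than thresholding pairwise similarity.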
16
Bianco R, Novembre G, Ringer H, Kohler N, Keller PE, Villringer A, Sammler D. Lateral Prefrontal Cortex Is a Hub for Music Production from Structural Rules to Movements. Cereb Cortex 2021; 32:3878-3895. [PMID: 34965579 PMCID: PMC9476625 DOI: 10.1093/cercor/bhab454] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Revised: 11/08/2021] [Accepted: 11/09/2021] [Indexed: 11/13/2022] Open
Abstract
Complex sequential behaviors, such as speaking or playing music, entail flexible rule-based chaining of single acts. However, it remains unclear how the brain translates abstract structural rules into movements. We combined music production with multimodal neuroimaging to dissociate high-level structural and low-level motor planning. Pianists played novel musical chord sequences on a muted MR-compatible piano by imitating a model hand on screen. Chord sequences were manipulated in terms of musical harmony and context length to assess structural planning, and in terms of fingers used for playing to assess motor planning. A model of probabilistic sequence processing confirmed temporally extended dependencies between chords, as opposed to local dependencies between movements. Violations of structural plans activated the left inferior frontal and middle temporal gyrus, and the fractional anisotropy of the ventral pathway connecting these two regions positively predicted behavioral measures of structural planning. A bilateral frontoparietal network was instead activated by violations of motor plans. Both structural and motor networks converged in lateral prefrontal cortex, with anterior regions contributing to musical structure building, and posterior areas to movement planning. These results establish a promising approach to study sequence production at different levels of action representation.
Affiliation(s)
- Roberta Bianco
- UCL Ear Institute, University College London, London WC1X 8EE, UK; Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Giacomo Novembre
- Neuroscience of Perception and Action Lab, Italian Institute of Technology (IIT), Rome 00161, Italy
- Hanna Ringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Institute of Psychology, University of Leipzig, Leipzig 04109, Germany
- Natalie Kohler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany
- Peter E Keller
- Department of Clinical Medicine, Center for Music in the Brain, Aarhus University, Aarhus 8000, Denmark; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Sydney, NSW 2751, Australia
- Arno Villringer
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Daniela Sammler
- Otto Hahn Research Group Neural Bases of Intonation in Speech and Music, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany; Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Frankfurt am Main 60322, Germany