1. Nayak S, Coleman PL, Ladányi E, Nitin R, Gustavson DE, Fisher SE, Magne CL, Gordon RL. The Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) Framework for Understanding Musicality-Language Links Across the Lifespan. Neurobiology of Language 2022; 3:615-664. [PMID: 36742012] [PMCID: PMC9893227] [DOI: 10.1162/nol_a_00079]
Abstract
Using individual differences approaches, a growing body of literature finds positive associations between musicality and language-related abilities, complementing prior findings of links between musical training and language skills. Despite these associations, musicality has often been overlooked in mainstream models of individual differences in language acquisition and development. To better understand the biological basis of these individual differences, we propose the Musical Abilities, Pleiotropy, Language, and Environment (MAPLE) framework. This novel integrative framework posits that musical and language-related abilities likely share some common genetic architecture (i.e., genetic pleiotropy), in addition to some degree of overlapping neural endophenotypes and genetic influences on musically and linguistically enriched environments. Drawing upon recent advances in genomic methodologies for unraveling pleiotropy, we outline testable predictions for future research on language development and how its underlying neurobiological substrates may be supported by genetic pleiotropy with musicality. In support of the MAPLE framework, we review and discuss findings from over seventy behavioral and neural studies, highlighting that musicality is robustly associated with individual differences in a range of speech-language skills required for communication and development. These include speech perception-in-noise, prosodic perception, morphosyntactic skills, phonological skills, reading skills, and aspects of second/foreign language learning. Overall, the current work provides a clear agenda and framework for studying musicality-language links using individual differences approaches, with an emphasis on leveraging advances in the genomics of complex musicality and language traits.
Affiliation(s)
- Srishti Nayak
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
- Peyton L. Coleman
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Enikő Ladányi
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Linguistics, Potsdam University, Potsdam, Germany
- Rachana Nitin
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Daniel E. Gustavson
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Medicine, Vanderbilt University Medical Center, Nashville, TN, USA
- Institute for Behavioral Genetics, University of Colorado Boulder, Boulder, CO, USA
- Simon E. Fisher
- Language and Genetics Department, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
- Cyrille L. Magne
- Department of Psychology, Middle Tennessee State University, Murfreesboro, TN, USA
- PhD Program in Literacy Studies, Middle Tennessee State University, Murfreesboro, TN, USA
- Reyna L. Gordon
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Genetics Institute, Vanderbilt University Medical Center, Nashville, TN, USA
- Curb Center for Art, Enterprise, and Public Policy, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, TN, USA
- Vanderbilt University School of Medicine, Vanderbilt University, TN, USA
2. Jekiel M, Malarski K. Musical Hearing and Musical Experience in Second Language English Vowel Acquisition. Journal of Speech, Language, and Hearing Research 2021; 64:1666-1682. [PMID: 33831309] [DOI: 10.1044/2021_jslhr-19-00253]
Abstract
Purpose: Previous studies suggested that music perception can help produce certain accentual features in the first and second language (L2), such as intonational contours. Many of these studies, however, did not identify the exact relationship between specific music perception skills and the production of different accentual features in a foreign language. Our aim was to verify whether empirically tested musical hearing skills are related to the acquisition of English vowels by learners of English as an L2 before and after a formal accent training course. Method: Fifty adult Polish speakers of L2 English were tested before and after two semesters of accent training in order to observe the effect of musical hearing on the acquisition of English vowels. Their L2 English vowel formant contours, produced in consonant-vowel-consonant context, were compared with the target General British vowels produced by their pronunciation teachers. We juxtaposed these results with their musical hearing test scores and self-reported musical experience to observe a possible relationship between successful L2 vowel acquisition and musical aptitude. Results: Preexisting rhythmic memory was a significant predictor before training, while musical experience was a significant factor in the production of more native-like L2 vowels after training. We also observed that not all vowels were equally acquired or equally affected by musical hearing or musical experience. The strongest estimate was closeness to the model before training, suggesting that learners who had already acquired some features of a native-like accent were also more successful after training. Conclusions: Our results are revealing in two respects. First, learners' prior proficiency in L2 pronunciation is the most robust predictor of acquiring a native-like accent. Second, there is a potential relationship between rhythmic memory and L2 vowel acquisition before training, as well as years of musical experience after training, suggesting that specific musical skills and music practice can be an asset in learning a foreign-language accent.
Affiliation(s)
- Mateusz Jekiel
- Faculty of English, Adam Mickiewicz University, Poznań, Poland
- Kamil Malarski
- Faculty of English, Adam Mickiewicz University, Poznań, Poland
3. The Musical Ear Test: Norms and correlates from a large sample of Canadian undergraduates. Behavior Research Methods 2021; 53:2007-2024. [PMID: 33704673] [DOI: 10.3758/s13428-020-01528-8]
Abstract
We sought to establish norms and correlates for the Musical Ear Test (MET), an objective test of musical ability. A large sample of undergraduates at a Canadian university (N > 500) took the 20-min test, which provided a Total score as well as separate scores for its Melody and Rhythm subtests. On each trial, listeners judged whether standard and comparison auditory sequences were the same or different. Norms were derived as percentiles, Z-scores, and T-scores. The distribution of scores was approximately normal without floor or ceiling effects. There were no gender differences on either subtest or the total score. As expected, scores on both subtests were correlated with performance on a test of immediate recall for nonmusical auditory stimuli (Digit Span Forward). Moreover, as duration of music training increased, so did performance on both subtests, but starting lessons at a younger age was not predictive of better musical abilities. Listeners who spoke a tone language exhibited enhanced performance on the Melody subtest but not on the Rhythm subtest. The MET appears to have adequate psychometric characteristics that make it suitable for researchers who seek to measure musical abilities objectively.
4. Cason N, Marmursztejn M, D'Imperio M, Schön D. Rhythmic Abilities Correlate with L2 Prosody Imitation Abilities in Typologically Different Languages. Language and Speech 2020; 63:149-165. [PMID: 30760163] [DOI: 10.1177/0023830919826334]
Abstract
While many studies have demonstrated the relationship between musical rhythm and speech prosody, it has rarely been addressed in the context of second language (L2) acquisition. Here, we investigated whether musical rhythmic skills and the production of L2 speech prosody are predictive of one another. We tested both the musical and the linguistic rhythmic competences of 23 native French speakers of L2 English. Participants completed music and language tests of both perception and production. In the prosody production test, participants heard and then reproduced sentences containing trisyllabic words with a prominence on either the first or the second syllable. Participants were less accurate in reproducing penultimate accent placement. Moreover, accuracy in reproducing phonologically disfavored stress patterns was best predicted by rhythm production abilities. Our results show, for the first time, that better reproduction of musical rhythmic sequences is predictive of a more successful realization of unfamiliar L2 prosody, specifically in terms of stress-accent placement.
Affiliation(s)
- Nia Cason
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
- Muriel Marmursztejn
- Aix-Marseille Univ, CNRS, LPL, Laboratoire Parole et Langage, Aix-en-Provence, France
- Mariapaola D'Imperio
- Aix-Marseille Univ, CNRS, LPL, Laboratoire Parole et Langage, Aix-en-Provence, France
- Daniele Schön
- Aix Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
5. Kachlicka M, Saito K, Tierney A. Successful second language learning is tied to robust domain-general auditory processing and stable neural representation of sound. Brain and Language 2019; 192:15-24. [PMID: 30831377] [DOI: 10.1016/j.bandl.2019.02.004]
Abstract
There is a great deal of individual variability in outcome in second language learning, the sources of which are still poorly understood. We hypothesized that individual differences in auditory processing may account for some variability in second language learning. We tested this hypothesis by examining psychoacoustic thresholds, auditory-motor temporal integration, and auditory neural encoding in adult native Polish speakers living in the UK. We found that precise English vowel perception and accurate English grammatical judgment were linked to lower psychoacoustic thresholds, better auditory-motor integration, and more consistent frequency-following responses to sound. Psychoacoustic thresholds and neural sound encoding explained independent variance in vowel perception, suggesting that they are dissociable indexes of sound processing. These results suggest that individual differences in second language acquisition success stem at least in part from domain-general difficulties with auditory perception, and that auditory training could help facilitate language learning in some individuals with specific auditory impairments.
Affiliation(s)
- Magdalena Kachlicka
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom; Institute for Environmental Design and Engineering, University College London, Gower Street, London WC1E 6BT, United Kingdom
- Kazuya Saito
- Institute of Education, University College London, Gower Street, London WC1E 6BT, United Kingdom
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom.
6. Groenveld G, Burgoyne JA, Sadakata M. I still hear a melody: investigating temporal dynamics of the Speech-to-Song Illusion. Psychological Research 2019; 84:1451-1459. [PMID: 30627768] [DOI: 10.1007/s00426-018-1135-z]
Abstract
The Speech-to-Song Illusion (STS) refers to a dramatic shift in our perception of short speech fragments which, when repeatedly presented, may start to sound like song. Anecdotally, once a speech fragment is perceived as song, it is difficult to unhear its melody, and these temporal dynamics of the STS illusion have theoretical implications. The goal of the current study was to capture this temporal effect. In our experiment, speech fragments that initially did not elicit the STS illusion were manipulated to have increasingly stable F0 contours in order to strengthen the perceived 'song-likeness' of a fragment. Over the course of trials, the manipulated speech fragments were presented repeatedly within blocks in decreasing, increasing, or random order of F0 manipulation. Results showed that the presentation order in which participants first heard the sentence with the maximum amount of F0 manipulation (the decreasing condition) led them to give consistently higher song-like ratings than the other presentation orders (the increasing and random conditions). Our results thus capture the commonly reported phenomenon that it is hard to 'unhear' the illusion once a speech segment has been perceived as song.
Affiliation(s)
- Gerben Groenveld
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands
- John Ashley Burgoyne
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands; Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands
- Makiko Sadakata
- Musicology Department, University of Amsterdam, Amsterdam, The Netherlands; Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, The Netherlands; Artificial Intelligence Department, Radboud University, Nijmegen, The Netherlands
7. Roncaglia-Denissen MP, Bouwer FL, Honing H. Decision Making Strategy and the Simultaneous Processing of Syntactic Dependencies in Language and Music. Frontiers in Psychology 2018; 9:38. [PMID: 29441035] [PMCID: PMC5797648] [DOI: 10.3389/fpsyg.2018.00038]
Abstract
Despite differences in their function and domain-specific elements, syntactic processing in music and language is believed to share cognitive resources. This study investigated whether the simultaneous processing of language and music draws on a common syntactic processor or on more general attentional resources. To this end, we tested musicians and non-musicians using visually presented sentences and aurally presented melodies containing local and long-distance syntactic dependencies, collecting the accuracy rates and reaction times of participants' responses. Unexpected syntactic anomalies were introduced into both the sentences and the melodies. This is the first study to address the processing of local and long-distance dependencies in language and music combined while reducing the effect of sensory memory. Participants were instructed to focus on language (language session), music (music session), or both (dual session). In the language session, musicians and non-musicians performed comparably in terms of accuracy rates and reaction times. As expected, group differences appeared in the music session: musicians were more accurate in their responses than non-musicians, and only the latter showed an interaction between accuracy rates for music and language syntax. In the dual session, musicians were more accurate overall than non-musicians, but both groups displayed an interaction between accuracy rates for language and music syntax responses. In our study, accuracy rates seem to better capture the interaction between language and music syntax, and this interaction seems to indicate the use of distinct yet interacting mechanisms as part of a decision-making strategy, modulated by attentional load and domain proficiency. Our study contributes to the long-standing debate about the commonalities between language and music by providing evidence for their interaction at a more domain-general level.
Affiliation(s)
- M. P. Roncaglia-Denissen
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Fleur L. Bouwer
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
- Henkjan Honing
- Institute for Logic, Language and Computation, University of Amsterdam, Amsterdam, Netherlands
- Amsterdam Brain and Cognition, University of Amsterdam, Amsterdam, Netherlands
8. Tierney A, White-Schwoch T, MacLean J, Kraus N. Individual Differences in Rhythm Skills: Links with Neural Consistency and Linguistic Ability. Journal of Cognitive Neuroscience 2017; 29:855-868. [PMID: 28129066] [DOI: 10.1162/jocn_a_01092]
Abstract
Durational patterns provide cues to linguistic structure, so variations in rhythm skills may have consequences for language development. Understanding individual differences in rhythm skills, therefore, could help explain variability in language abilities across the population. We investigated the neural foundations of rhythmic proficiency and its relation to language skills in young adults. We hypothesized that rhythmic abilities can be characterized by at least two constructs, which are tied to independent language abilities and neural profiles. Specifically, we hypothesized that rhythm skills that require integration of information across time rely upon the consistency of slow, low-frequency auditory processing, which we measured using the evoked cortical response. On the other hand, we hypothesized that rhythm skills that require fine temporal precision rely upon the consistency of fast, higher-frequency auditory processing, which we measured using the frequency-following response. Performance on rhythm tests aligned with two constructs: rhythm sequencing and synchronization. Rhythm sequencing and synchronization were linked to the consistency of slow cortical and fast frequency-following responses, respectively. Furthermore, whereas rhythm sequencing ability was linked to verbal memory and reading, synchronization ability was linked only to nonverbal auditory temporal processing. Thus, rhythm perception at different time scales reflects distinct abilities, which rely on distinct auditory neural resources. In young adults, slow rhythmic processing makes the more extensive contribution to language skills.