1
Cao GW. Phonetic Dissimilarity and L2 Category Formation in L2 Accommodation. Language and Speech 2024; 67:301-345. [PMID: 37528758] [DOI: 10.1177/00238309231182967]
Abstract
Many studies of speech accommodation focus on native speakers of different dialects; comparatively few examine second language (L2) speakers' accommodation or develop theories of L2 accommodation. This paper aimed to fill that theoretical gap by integrating the revised Speech Learning Model (SLM-r) with exemplar-based models of L2 speech accommodation. A total of 19 Cantonese-English bilingual speakers completed map tasks with English speakers of Received Pronunciation and General American English in two separate experiments. Their pronunciations of the THOUGHT and PATH vowels and of the fricatives [z] and [θ] were examined before, during, and after the map tasks. The roles of phonetic dissimilarity in L2 accommodation and of L2 category formation in the SLM-r were tested. First, the results suggested that global phonetic dissimilarity cannot predict Hong Kong English (HKE) speakers' accommodation patterns. Instead, segment-specific phonetic dissimilarity between participants and interlocutors was positively correlated with participants' degree of accommodation. In addition, HKE speakers who had not formed a new L2 category for [z] significantly accommodated toward their interlocutor, suggesting that L2 accommodation might not be constrained by phonological category. An integrated exemplar model of L2 accommodation is proposed to explain these findings.
Affiliation(s)
- Grace Wenling Cao
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong
2
Hutchinson AE. Individual variability and the effect of personality on non-native speech shadowing. JASA Express Letters 2022; 2:065203. [PMID: 36154159] [DOI: 10.1121/10.0011753]
Abstract
The present study examines whether personality traits are predictive of success in non-native speech shadowing. Seventy-four monolingual native speakers of English shadowed French words containing high rounded vowels /y/ and /u/ produced by a native French model talker and provided information about their personality through a Big Five Inventory questionnaire. Acoustic analyses support the idea that some personality traits predicted the degree of similarity between the talkers and the model. In this case, shadowed productions by talkers who had higher scores in extraversion and neuroticism were significantly more similar to the model than those who had lower scores.
Affiliation(s)
- Amy E Hutchinson
- Department of Linguistics, Purdue University, West Lafayette, Indiana 47907, USA
3
Cargnelutti E, Tomasino B, Fabbro F. Effects of Linguistic Distance on Second Language Brain Activations in Bilinguals: An Exploratory Coordinate-Based Meta-Analysis. Front Hum Neurosci 2022; 15:744489. [PMID: 35069147] [PMCID: PMC8770833] [DOI: 10.3389/fnhum.2021.744489]
Abstract
In this quantitative meta-analysis, we used the activation likelihood estimation (ALE) approach to address the effects of linguistic distance between first (L1) and second (L2) languages on language-related brain activations. In particular, we investigated how L2-related networks may change in response to linguistic distance from L1. Thus, we examined L2 brain activations in two groups of participants with English as L2 and either (i) a European language (European group, n = 13 studies) or (ii) Chinese (Chinese group, n = 18 studies) as L1. We further explored the modulatory effects of age of appropriation (AoA) and proficiency of L2. We found that, irrespective of L1-L2 distance and, to an extent, of L2 proficiency, L2 recruits brain areas supporting higher-order cognitive functions (e.g., cognitive control), although with group-specific differences (e.g., the insula region in the European group and the frontal cortex in the Chinese group). The Chinese group also selectively activated the parietal lobe, but this did not occur in the subgroup with high L2 proficiency. These preliminary results highlight the relevance of linguistic distance and call for future research to generalize the findings to other language pairs and to shed further light on the interaction between linguistic distance, AoA, and L2 proficiency.
Affiliation(s)
- Elisa Cargnelutti
- Dipartimento/Unità Operativa Pasian di Prato, Scientific Institute, IRCCS E. Medea, Udine, Italy
- Barbara Tomasino
- Dipartimento/Unità Operativa Pasian di Prato, Scientific Institute, IRCCS E. Medea, Udine, Italy
- Franco Fabbro
- Cognitive Neuroscience Laboratory, Department of Languages, Literature, Communication, Education, and Society, University of Udine, Udine, Italy
- Institute of Mechanical Intelligence, Scuola Superiore Sant’Anna, Pisa, Italy
4
Vocal Learning and Behaviors in Birds and Human Bilinguals: Parallels, Divergences and Directions for Research. Languages 2021. [DOI: 10.3390/languages7010005]
Abstract
Comparisons between the communication systems of humans and animals are instrumental in contextualizing speech and language within an evolutionary and biological framework and in illuminating mechanisms of human communication. As a complement to previous work that compares developmental vocal learning and use among humans and songbirds, in this article we highlight phenomena associated with vocal learning subsequent to the development of primary vocalizations (i.e., the primary language (L1) in humans and the primary song (S1) in songbirds). By framing avian "second-song" (S2) learning and use within the human second-language (L2) context, we lay the groundwork for a scientifically rich dialogue between disciplines. We begin by summarizing basic birdsong research, focusing on how songs are learned and on constraints on learning. We then consider commonalities in vocal learning across humans and birds, in particular the timing and neural mechanisms of learning, variability of input, and variability of outcomes. For S2 and L2 learning outcomes, we address the respective roles of age, entrenchment, and social interactions. We proceed to orient current and future birdsong inquiry around foundational features of human bilingualism: L1 effects on the L2, L1 attrition, and L1↔L2 switching. Throughout, we highlight characteristics that are shared across species as well as the need for caution in interpreting birdsong research. Thus, from multiple instructive perspectives, our interdisciplinary dialogue sheds light on biological and experiential principles of L2 acquisition that are informed by birdsong research, and leverages well-studied characteristics of bilingualism in order to clarify, contextualize, and further explore S2 learning and use in songbirds.
5
Stipancic KL, Kuo YL, Miller A, Ventresca HM, Sternad D, Kimberley TJ, Green JR. The effects of continuous oromotor activity on speech motor learning: speech biomechanics and neurophysiologic correlates. Exp Brain Res 2021; 239:3487-3505. [PMID: 34524491] [PMCID: PMC8599312] [DOI: 10.1007/s00221-021-06206-5]
Abstract
Sustained limb motor activity has been used as a therapeutic tool for improving rehabilitation outcomes and is thought to be mediated by neuroplastic changes associated with activity-induced cortical excitability. Although prior research has reported enhancing effects of continuous chewing and swallowing activity on learning, the potential beneficial effects of sustained oromotor activity on speech are not well documented. This exploratory study was designed to examine the effects of continuous oromotor activity on subsequent speech learning. Twenty neurologically healthy young adults engaged in periods of continuous chewing and speech, after which they completed a novel speech motor learning task. The motor learning task was designed to elicit improvements in the accuracy and efficiency of speech performance across repetitions of eight-syllable nonwords. In addition, transcranial magnetic stimulation was used to measure the cortical silent period (cSP) of the lip motor cortex before and after the periods of continuous oromotor behavior. All repetitions of the nonword task were recorded acoustically and kinematically using a three-dimensional motion capture system. Productions were analyzed for accuracy and duration, as well as lip movement distance and speed. A control condition estimated baseline improvement rates in speech performance. Results revealed improved speech performance following 10 min of chewing. In contrast, speech performance following 10 min of continuous speech was degraded. There was no change in the cSP as a result of either oromotor activity. The clinical implications of these findings are discussed in the context of speech rehabilitation and neuromodulation.
Affiliation(s)
- Kaila L Stipancic
- Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY, USA
- Yi-Ling Kuo
- Department of Physical Therapy, Upstate Medical University, Syracuse, NY, USA
- Amanda Miller
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, USA
- Hayden M Ventresca
- Department of Rehabilitation Sciences, MGH Institute of Health Professions, Building 79/96, 2nd Floor 13th Street, Boston, MA, 02129, USA
- Dagmar Sternad
- Department of Biology, Northeastern University, Boston, MA, USA
- Teresa J Kimberley
- Department of Rehabilitation Sciences, MGH Institute of Health Professions, Building 79/96, 2nd Floor 13th Street, Boston, MA, 02129, USA
- Jordan R Green
- Department of Rehabilitation Sciences, MGH Institute of Health Professions, Building 79/96, 2nd Floor 13th Street, Boston, MA, 02129, USA
6
Guldner S, Nees F, McGettigan C. Vocomotor and Social Brain Networks Work Together to Express Social Traits in Voices. Cereb Cortex 2020; 30:6004-6020. [PMID: 32577719] [DOI: 10.1093/cercor/bhaa175]
Abstract
Voice modulation is important when navigating social interactions: the tone of voice used in a business negotiation is very different from that used to comfort an upset child. While voluntary vocal behavior relies on a cortical vocomotor network, social voice modulation may require additional social cognitive processing. Using functional magnetic resonance imaging, we investigated the neural basis of social vocal control and whether it involves an interplay of vocal control and social processing networks. Twenty-four healthy adult participants modulated their voices to express social traits along the dimensions of the social trait space (affiliation and competence) or to express body size (a control for vocal flexibility). Naïve listener ratings showed that the vocal modulations were effective in evoking social trait ratings along the two primary dimensions of the social trait space. Whereas basic vocal modulation engaged the vocomotor network, social voice modulation specifically engaged social processing regions, including the medial prefrontal cortex, superior temporal sulcus, and precuneus. Moreover, these regions showed task-relevant modulations in functional connectivity to the left inferior frontal gyrus, a core vocomotor control network area. These findings highlight the importance of integrating vocal motor control and social information processing for socially meaningful voice modulation.
Affiliation(s)
- Stella Guldner
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany
- Graduate School of Economic and Social Sciences, University of Mannheim, Mannheim 68159, Germany
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Frauke Nees
- Department of Cognitive and Clinical Neuroscience, Central Institute of Mental Health, Medical Faculty Mannheim, Heidelberg University, Mannheim 68159, Germany
- Institute of Medical Psychology and Medical Sociology, University Medical Center Schleswig-Holstein, Kiel University, Kiel 24105, Germany
- Carolyn McGettigan
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Department of Psychology, Royal Holloway, University of London, Egham TW20 0EX, UK
7
Delogu F, Zheng Y. Beneficial Effects of Musicality on the Development of Productive Phonology Skills in Second Language Acquisition. Front Neurosci 2020; 14:618. [PMID: 32733183] [PMCID: PMC7358579] [DOI: 10.3389/fnins.2020.00618]
Abstract
Previous studies show beneficial effects of musicality on the acquisition of a second language (L2). While most research has focused on perceptual aspects, few studies have investigated the effects of musicality on productive phonology. The present study tested whether musicality can predict productive phonological skills in L2 acquisition. Sixty-three students with no previous exposure to Arabic were asked to repeatedly listen to and immediately reproduce short sentences in standard Arabic. Before the sentence reproduction task, they completed an auditory discrimination task in one of three between-subjects conditions: attentive, in which participants were asked to discriminate phonological variations in the same Arabic sentence that they were later asked to reproduce; non-attentive, in which participants were asked to detect beeps in the same Arabic sentences without paying attention to their phonological content; and no-exposure, in which participants performed the discrimination task in another language (Serbian). The first, third, and seventh reproductions of each participant were rated for intelligibility, accent, and syllabic errors by two independent evaluators, both native speakers of Arabic. The primary results showed that the intelligibility of the reproduced sentences was higher in participants with high musicality scores on the Advanced Measures of Music Audiation. Moreover, the intelligibility of sentences produced by highly musical participants improved more over time than that of participants with lower musicality scores. Previous exposure to the Arabic sentence was beneficial in both the attentive and non-attentive conditions. Our results support the idea that musicality can affect productive skills even in the very first stages of L2 acquisition.
Affiliation(s)
- Franco Delogu
- Department of Humanities, Social Sciences and Communication, Lawrence Technological University, Southfield, MI, United States
- Yi Zheng
- Department of Psychology, Stony Brook University, Stony Brook, NY, United States
8
Sumathi TA, Spinola O, Singh NC, Chakrabarti B. Perceived Closeness and Autistic Traits Modulate Interpersonal Vocal Communication. Front Psychiatry 2020; 11:50. [PMID: 32180734] [PMCID: PMC7059848] [DOI: 10.3389/fpsyt.2020.00050]
Abstract
Vocal modulation is a critical component of interpersonal communication. It not only serves as a dynamic and flexible tool for self-expression and the conveyance of linguistic information but also plays a key role in social behavior. Variation in vocal modulation can be driven by individual traits of the interlocutors as well as by factors relating to the dyad, such as the perceived closeness between interlocutors. In this study, we examine both of these sources of variation. At the individual level, we examine the impact of autistic traits, since a lack of appropriate vocal modulation has often been associated with Autism Spectrum Disorders. At the dyadic level, we examine the role of perceived closeness between interlocutors in vocal modulation. The study was conducted in three separate samples from India, Italy, and the UK. Articulatory features were extracted from recorded conversations between a total of 85 same-sex pairs of participants, and the articulation space was calculated. A larger articulation space corresponds to a greater number of spectro-temporal modulations (articulatory variations) sampled by the speaker. Articulation space showed a positive association with interpersonal closeness and a weak negative association with autistic traits. This study thus provides novel insights into the individual and dyadic variation that can influence interpersonal vocal communication.
Affiliation(s)
- T. A. Sumathi
- National Brain Research Centre, Language, Literacy and Music Laboratory, Manesar, India
- Olivia Spinola
- Department of Psychology, Università degli Studi di Milano-Bicocca, Milan, Italy
- Centre for Autism, School of Psychology & Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Department of Psychology, Sapienza University of Rome, Rome, Italy
- Bhismadev Chakrabarti
- Centre for Autism, School of Psychology & Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Inter University Centre for Biomedical Research, Mahatma Gandhi University, Kottayam, India
- India Autism Center, Kolkata, India
9
Cargnelutti E, Tomasino B, Fabbro F. Language Brain Representation in Bilinguals With Different Age of Appropriation and Proficiency of the Second Language: A Meta-Analysis of Functional Imaging Studies. Front Hum Neurosci 2019; 13:154. [PMID: 31178707] [PMCID: PMC6537025] [DOI: 10.3389/fnhum.2019.00154]
Abstract
Language representation in the bilingual brain is the result of many factors, of which age of appropriation (AoA) and proficiency of the second language (L2) are probably the most studied. Many studies compare early and late bilinguals, although it is not yet clear what role the so-called critical period plays in L2 appropriation. In this study, we carried out coordinate-based meta-analyses to address this issue and to inspect the role of proficiency in addition to that of AoA. After a preliminary inspection of the early (including very early) and late bilinguals' language networks, we explored the specific activations associated with each language and compared them within and between the groups. Results confirmed that the brain representation of L2 was wider than that associated with L1. This was observed regardless of AoA, although the differences were more pronounced in the late bilingual group. In particular, L2 entailed greater recruitment of the brain areas supporting executive functions, and this was also observed in proficient bilinguals. The early bilinguals displayed many activation clusters as well, which also included areas involved in cognitive control. Interestingly, these regions activated even in the L1 of both the early and late bilingual groups, although less consistently. Overall, these findings suggest that bilinguals in general are constantly subject to cognitive effort to monitor and regulate language use, although early AoA and high proficiency are likely to reduce this effort.
Affiliation(s)
- Elisa Cargnelutti
- Scientific Institute, IRCCS E. Medea, Dipartimento/Unità Operativa Pasian di Prato, Udine, Italy
- Barbara Tomasino
- Scientific Institute, IRCCS E. Medea, Dipartimento/Unità Operativa Pasian di Prato, Udine, Italy
- Franco Fabbro
- Cognitive Neuroscience Laboratory, DILL, University of Udine, Udine, Italy
- PERCRO Perceptual Robotics Laboratory, Scuola Superiore Sant’Anna, Pisa, Italy
10
Coumel M, Christiner M, Reiterer SM. Second Language Accent Faking Ability Depends on Musical Abilities, Not on Working Memory. Front Psychol 2019; 10:257. [PMID: 30809178] [PMCID: PMC6379457] [DOI: 10.3389/fpsyg.2019.00257]
Abstract
Studies involving direct language imitation tasks have shown that pronunciation ability is related to musical competence and working memory capacities. However, this type of task may measure individual differences in many linguistic dimensions other than just phonetic ones. The present study uses an indirect imitation task, asking participants to fake a foreign accent, in order to specifically target individual differences in phonetic abilities. Its aim is to investigate whether musical expertise and working memory capacities relate to phonological awareness (i.e., participants' implicit knowledge about the phonological system of the target language and its structural properties at the segmental, suprasegmental, and phonotactic levels) as measured by this task. To this end, French native listeners (N = 36) graded how well German native imitators (N = 25) faked a French accent while speaking in German. The imitators also performed a musicality test, a self-assessment of their singing abilities, and working memory tasks. The results indicate that the ability to fake a French accent correlates with singing ability and musical perceptual abilities, but not with working memory capacities. This suggests that heightened musical abilities may lead to increased phonological awareness, probably by providing participants with highly efficient memorization strategies and highly accurate long-term phonetic representations of foreign sounds. Comparison with data from previous studies suggests that working memory could be implicated in the pronunciation learning process targeted by direct imitation tasks, whereas musical expertise influences both the storing of knowledge and its later retrieval, here assessed via an indirect imitation task.
Affiliation(s)
- Marion Coumel
- Department of Linguistics, University of Vienna, Vienna, Austria
- Department of Psychology, University of Warwick, Coventry, United Kingdom
- Markus Christiner
- Department of Linguistics, University of Vienna, Vienna, Austria
- Department of Neurology, Section of Biomagnetism, University of Heidelberg Medical School, Heidelberg, Germany
- Susanne Maria Reiterer
- Department of Linguistics, University of Vienna, Vienna, Austria
- Teacher Education Center, University of Vienna, Vienna, Austria
11
Lewis JW, Silberman MJ, Donai JJ, Frum CA, Brefczynski-Lewis JA. Hearing and orally mimicking different acoustic-semantic categories of natural sound engage distinct left hemisphere cortical regions. Brain and Language 2018; 183:64-78. [PMID: 29966815] [PMCID: PMC6461214] [DOI: 10.1016/j.bandl.2018.05.002]
Abstract
Oral mimicry is thought to be an essential process in the neurodevelopment of spoken language systems in infants and in the evolution of language in hominins, and it could possibly aid recovery in stroke patients. Using functional magnetic resonance imaging (fMRI), we previously reported a divergence of auditory cortical pathways mediating perception of specific categories of natural sounds. However, it remained unclear whether or how this fundamental sensory organization by the brain might relate to motor output, such as sound mimicry. Here, using fMRI, we revealed a dissociation of activated brain regions preferential for hearing with the intent to imitate and for the oral mimicry of animal action sounds versus animal vocalizations as distinct acoustic-semantic categories. This functional dissociation may reflect components of a rudimentary cortical architecture that links systems for processing acoustic-semantic universals of natural sound with motor-related systems mediating oral mimicry at a category level. The observation that different brain regions are involved in different aspects of oral mimicry may inform targeted therapies for the rehabilitation of functional abilities after stroke.
Affiliation(s)
- James W Lewis
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
- Magenta J Silberman
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
- Jeremy J Donai
- Rockefeller Neurosciences Institute, Department of Communication Sciences and Disorders, West Virginia University, Morgantown, WV 26506, USA
- Chris A Frum
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
- Julie A Brefczynski-Lewis
- Rockefeller Neurosciences Institute, Department of Physiology, Pharmacology & Neuroscience, West Virginia University, Morgantown, WV 26506, USA
12
Carey D, Miquel ME, Evans BG, Adank P, McGettigan C. Functional brain outcomes of L2 speech learning emerge during sensorimotor transformation. Neuroimage 2017; 159:18-31. [PMID: 28669904] [DOI: 10.1016/j.neuroimage.2017.06.053]
Abstract
Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items at insular cortex, pre-SMA/SMA, and cerebellum. Our results underscore the importance of ST as a process underlying successful imitation of non-native speech.
Affiliation(s)
- Daniel Carey
- Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK
- Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK
- The Irish Longitudinal Study on Ageing (TILDA), Dept. Medical Gerontology, TCD, Dublin, Ireland
- Marc E Miquel
- William Harvey Research Institute, Queen Mary, University of London, EC1M 6BQ, UK
- Clinical Physics, Barts Health NHS Trust, London, EC1A 7BE, UK
- Bronwen G Evans
- Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Patti Adank
- Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Carolyn McGettigan
- Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK
- Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK
- Institute of Cognitive Neuroscience, University College London, WC1N 3AR, UK
13
Carey D, McGettigan C. Magnetic resonance imaging of the brain and vocal tract: Applications to the study of speech production and language learning. Neuropsychologia 2016; 98:201-211. [PMID: 27288115] [DOI: 10.1016/j.neuropsychologia.2016.06.003]
Abstract
The human vocal system is highly plastic, allowing for the flexible expression of language, mood and intentions. However, this plasticity is not stable throughout the life span, and it is well documented that adult learners encounter greater difficulty than children in acquiring the sounds of foreign languages. Researchers have used magnetic resonance imaging (MRI) to interrogate the neural substrates of vocal imitation and learning, and the correlates of individual differences in phonetic "talent". In parallel, a growing body of work using MR technology to directly image the vocal tract in real time during speech has offered primarily descriptive accounts of phonetic variation within and across languages. In this paper, we review the contribution of neural MRI to our understanding of vocal learning, and give an overview of vocal tract imaging and its potential to inform the field. We propose methods by which our understanding of speech production and learning could be advanced through the combined measurement of articulation and brain activity using MRI - specifically, we describe a novel paradigm, developed in our laboratory, that uses both MRI techniques to for the first time map directly between neural, articulatory and acoustic data in the investigation of vocalisation. This non-invasive, multimodal imaging method could be used to track central and peripheral correlates of spoken language learning, and speech recovery in clinical settings, as well as provide insights into potential sites for targeted neural interventions.
Affiliation(s)
- Daniel Carey, Department of Psychology, Royal Holloway, University of London, Egham, UK
- Carolyn McGettigan, Department of Psychology, Royal Holloway, University of London, Egham, UK
|
14
|
Prat CS, Yamasaki BL, Kluender RA, Stocco A. Resting-state qEEG predicts rate of second language learning in adults. BRAIN AND LANGUAGE 2016; 157-158:44-50. [PMID: 27164483 DOI: 10.1016/j.bandl.2016.04.007] [Citation(s) in RCA: 32] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/29/2015] [Revised: 03/18/2016] [Accepted: 04/10/2016] [Indexed: 06/05/2023]
Abstract
Understanding the neurobiological basis of individual differences in second language acquisition (SLA) is important for research on bilingualism, learning, and neural plasticity. The current study used quantitative electroencephalography (qEEG) to predict SLA in college-aged individuals. Baseline, eyes-closed resting-state qEEG was used to predict language learning rate during eight weeks of French exposure using an immersive, virtual scenario software. Individual qEEG indices predicted up to 60% of the variability in SLA, whereas behavioral indices of fluid intelligence, executive functioning, and working-memory capacity were not correlated with learning rate. Specifically, power in the beta and low-gamma frequency ranges over right temporoparietal regions was strongly positively correlated with SLA. These results highlight the utility of resting-state EEG for studying the neurobiological basis of SLA in a relatively construct-free, paradigm-independent manner.
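The qEEG predictor in this study is band power: the energy of the resting EEG within a frequency range such as beta (roughly 13-30 Hz) or low gamma. A minimal sketch of that measure, on a synthetic signal and with a naive DFT rather than the Welch-style estimators typically used on real, artifact-cleaned EEG (the signal, sampling rate, and band edges here are illustrative assumptions, not the study's actual pipeline):

```python
import math
import cmath

def band_power(signal, fs, f_lo, f_hi):
    """Total spectral power in [f_lo, f_hi] Hz via a naive DFT.
    Illustrative only; real qEEG analyses use averaged-periodogram
    (Welch) estimates on multi-channel, artifact-rejected recordings."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):          # positive-frequency bins only
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2 / n
    return power

# Synthetic "resting-state" trace: a strong 20 Hz (beta) component
# plus a weak 10 Hz (alpha) component, 2 s at 128 Hz.
fs = 128
x = [math.sin(2 * math.pi * 20 * t / fs)
     + 0.2 * math.sin(2 * math.pi * 10 * t / fs)
     for t in range(fs * 2)]

beta = band_power(x, fs, 13.0, 30.0)   # band reported as predictive
alpha = band_power(x, fs, 8.0, 12.0)
```

In the study, per-participant values like `beta` (computed over right temporoparietal electrodes) were then correlated with the behavioral learning-rate measure.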
Affiliation(s)
- Chantel S Prat, University of Washington, Department of Psychology and Institute for Learning & Brain Sciences, United States
- Brianna L Yamasaki, University of Washington, Department of Psychology and Institute for Learning & Brain Sciences, United States
- Reina A Kluender, University of Washington, Department of Psychology and Institute for Learning & Brain Sciences, United States
- Andrea Stocco, University of Washington, Department of Psychology and Institute for Learning & Brain Sciences, United States
|
15
|
Pisanski K, Cartei V, McGettigan C, Raine J, Reby D. Voice Modulation: A Window into the Origins of Human Vocal Control? Trends Cogn Sci 2016; 20:304-318. [PMID: 26857619 DOI: 10.1016/j.tics.2016.01.002] [Citation(s) in RCA: 96] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2015] [Revised: 01/05/2016] [Accepted: 01/07/2016] [Indexed: 11/17/2022]
Abstract
An unresolved issue in comparative approaches to speech evolution is the apparent absence of an intermediate vocal communication system between human speech and the less flexible vocal repertoires of other primates. We argue that humans' ability to modulate nonverbal vocal features evolutionarily linked to expression of body size and sex (fundamental and formant frequencies) provides a largely overlooked window into the nature of this intermediate system. Recent behavioral and neural evidence indicates that humans' vocal control abilities, commonly assumed to subserve speech, extend to these nonverbal dimensions. This capacity appears in continuity with context-dependent frequency modulations recently identified in other mammals, including primates, and may represent a living relic of early vocal control abilities that led to articulated human speech.
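The nonverbal dimension central to this argument is fundamental frequency (F0), the acoustic correlate of perceived pitch that speakers modulate to signal size and sex. A textbook autocorrelation-based F0 estimate on a synthetic voiced waveform can make the quantity concrete (the function name, search range, and test signal are illustrative assumptions; production pitch trackers used in this literature are considerably more robust):

```python
import math

def estimate_f0(signal, fs, f_min=75.0, f_max=400.0):
    """Estimate fundamental frequency by locating the autocorrelation
    peak within a plausible human pitch-period range. A minimal sketch,
    not a production pitch tracker."""
    n = len(signal)
    lag_min = int(fs / f_max)           # shortest candidate period
    lag_max = min(int(fs / f_min), n - 1)
    best_lag, best_r = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        r = sum(signal[t] * signal[t + lag] for t in range(n - lag))
        if r > best_r:
            best_r, best_lag = r, lag
    return fs / best_lag                # period (samples) -> frequency (Hz)

# 100 ms of a vowel-like waveform: 120 Hz fundamental plus a weaker
# second harmonic, sampled at 8 kHz.
fs = 8000
x = [math.sin(2 * math.pi * 120 * t / fs)
     + 0.3 * math.sin(2 * math.pi * 240 * t / fs)
     for t in range(fs // 10)]

f0 = estimate_f0(x, fs)
```

Raising or lowering `f0` (and, analogously, formant frequencies, which depend on vocal-tract length) is exactly the kind of nonverbal modulation the authors argue humans control volitionally.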
Affiliation(s)
- Katarzyna Pisanski, Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK; Institute of Psychology, University of Wrocław, Wrocław, Poland
- Valentina Cartei, Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK
- Carolyn McGettigan, Royal Holloway Vocal Communication Laboratory, Department of Psychology, Royal Holloway, University of London, Egham, UK
- Jordan Raine, Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK
- David Reby, Mammal Vocal Communication and Cognition Research Group, School of Psychology, University of Sussex, Brighton, UK
|