1. Green GD, Jacewicz E, Santosa H, Arzbecker LJ, Fox RA. Evaluating Speaker-Listener Cognitive Effort in Speech Communication Through Brain-to-Brain Synchrony: A Pilot Functional Near-Infrared Spectroscopy Investigation. J Speech Lang Hear Res 2024;67:1339-1359. [PMID: 38535722; DOI: 10.1044/2024_jslhr-23-00476]
Abstract
PURPOSE We explore a new approach to the study of cognitive effort involved in listening to speech by measuring the brain activity in a listener in relation to the brain activity in a speaker. We hypothesize that the strength of this brain-to-brain synchrony (coupling) reflects the magnitude of cognitive effort involved in verbal communication and includes both listening effort and speaking effort. We investigate whether interbrain synchrony is greater in native-to-native versus native-to-nonnative communication using functional near-infrared spectroscopy (fNIRS). METHOD Two speakers participated, a native speaker of American English and a native speaker of Korean who spoke English as a second language. Each speaker was fitted with the fNIRS cap and told short stories. The native English speaker provided the English narratives, and the Korean speaker provided both the nonnative (accented) English and Korean narratives. In separate sessions, fNIRS data were obtained from seven English monolingual participants ages 20-24 years who listened to each speaker's stories. After listening to each story in native and nonnative English, they retold the content, and their transcripts and audio recordings were analyzed for comprehension and discourse fluency, measured as the number of hesitations and the articulation rate. No story retellings were obtained for narratives in Korean (an incomprehensible language for English listeners). Utilizing an fNIRS technique termed sequential scanning, we quantified the brain-to-brain synchronization in each speaker-listener dyad. RESULTS For native-to-native dyads, multiple brain regions associated with various linguistic and executive functions were activated. Coupling was weaker for native-to-nonnative dyads, and only the brain regions associated with higher-order cognitive processes and functions were synchronized. All listeners understood the content of all stories, but they hesitated significantly more when retelling stories told in accented English. The nonnative speaker hesitated significantly more often than the native speaker and had a significantly slower articulation rate. There was no brain-to-brain coupling during listening to Korean, indicating a break in communication when listeners failed to comprehend the speaker. CONCLUSIONS We found that effortful speech processing decreased interbrain synchrony and delayed comprehension processes. The obtained brain-based and behavioral patterns are consistent with our proposal that cognitive effort in verbal communication pertains to both the listener and the speaker and that brain-to-brain synchrony can be an indicator of differences in their cumulative communicative effort. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25452142.
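The abstract does not spell out the coupling metric behind "sequential scanning," so the following is only a minimal sketch of one common way interbrain coupling is quantified: windowed correlation between a speaker's and a listener's oxygenated-hemoglobin (HbO) time series. The function name, sampling rate, and window length are assumptions for illustration, not details from the study.

```python
import numpy as np

def interbrain_coupling(speaker_hbo, listener_hbo, fs=10.0, win_s=30.0):
    """Windowed Pearson correlation between two fNIRS HbO time series.

    speaker_hbo, listener_hbo : 1-D arrays (one channel each, same length).
    fs : sampling rate in Hz; win_s : window length in seconds.
    Returns an array of per-window correlation coefficients.
    """
    win = int(win_s * fs)
    n = min(len(speaker_hbo), len(listener_hbo)) // win
    r = np.empty(n)
    for i in range(n):
        a = speaker_hbo[i * win:(i + 1) * win]
        b = listener_hbo[i * win:(i + 1) * win]
        r[i] = np.corrcoef(a, b)[0, 1]
    return r

# Example: coupling should be near zero for two unrelated signals.
rng = np.random.default_rng(0)
print(interbrain_coupling(rng.standard_normal(3000),
                          rng.standard_normal(3000)).mean())
```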
Affiliation(s)
- Geoff D Green, Department of Speech and Hearing Science, The Ohio State University, Columbus
- Ewa Jacewicz, Department of Speech and Hearing Science, The Ohio State University, Columbus
- Lian J Arzbecker, Department of Speech and Hearing Science, The Ohio State University, Columbus
- Robert A Fox, Department of Speech and Hearing Science, The Ohio State University, Columbus
2. Brain activity during shadowing of audiovisual cocktail party speech, contributions of auditory-motor integration and selective attention. Sci Rep 2022;12:18789. [PMID: 36335137; PMCID: PMC9637225; DOI: 10.1038/s41598-022-22041-2]
Abstract
Selective listening to cocktail-party speech involves a network of auditory and inferior frontal cortical regions. However, cognitive and motor cortical regions are differentially activated depending on whether the task emphasizes semantic or phonological aspects of speech. Here we tested whether processing of cocktail-party speech differs when participants perform a shadowing (immediate speech repetition) task compared to an attentive listening task in the presence of irrelevant speech. Participants viewed audiovisual dialogues with concurrent distracting speech during functional imaging. Participants either attentively listened to the dialogue, overtly repeated (i.e., shadowed) attended speech, or performed visual or speech motor control tasks in which they did not attend to speech and responses were not related to the speech input. Dialogues were presented with good or poor auditory and visual quality. As a novel result, we show that attentive processing of speech activated the same network of sensory and frontal regions during listening and shadowing. However, in the superior temporal gyrus (STG), peak activations during shadowing were posterior to those during listening, suggesting that an anterior-posterior distinction is present for motor vs. perceptual processing of speech already at the level of the auditory cortex. We also found that activations along the dorsal auditory processing stream were specifically associated with the shadowing task. These activations are likely due to complex interactions between perceptual, attention-dependent speech processing and motor speech generation that matches the heard speech. Our results suggest that interactions between perceptual and motor processing of speech rely on a distributed network of temporal and motor regions rather than the specific anatomical landmarks suggested by some previous studies.
3. tDCS modulates speech perception and production in second language learners. Sci Rep 2022;12:16212. [PMID: 36171463; PMCID: PMC9519965; DOI: 10.1038/s41598-022-20512-0]
Abstract
Accurate identification and pronunciation of nonnative speech sounds can be particularly challenging for adult language learners. The current study tested the effects of brief musical training combined with transcranial direct current stimulation (tDCS) on speech perception and production in a second language (L2). The sample comprised 36 native Hebrew speakers, aged 18-38, who studied English as L2 in a formal setting and had little musical training. Training encompassed musical perception tasks with feedback (i.e., timbre, duration, and tonal memory) and concurrent tDCS applied over the left posterior auditory-related cortex (including posterior superior temporal gyrus and planum temporale). Participants were randomly assigned to anodal or sham stimulation. Musical perception, L2 speech perception (measured by a categorical AXB discrimination task) and speech production (measured by a speech imitation task) were tested before and after training. There were no tDCS-dependent effects on musical perception post-training. However, only participants who received active stimulation showed increased accuracy of L2 phoneme discrimination and greater change in the acoustic properties of L2 speech sound production (i.e., second formant frequency in vowels and center of gravity in consonants). The results of this study suggest that neuromodulation can facilitate the processing of nonnative speech sounds in adult learners.
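Two of the acoustic outcome measures named above, second formant frequency and spectral center of gravity, are standard computations. As a hedged illustration (the study's exact analysis settings are not given here), a minimal spectral center-of-gravity calculation is the power-weighted mean frequency of a windowed frame:

```python
import numpy as np

def spectral_cog(frame, fs):
    """Spectral center of gravity: power-weighted mean frequency (Hz)."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# Sanity check: a pure 1 kHz tone has its center of gravity near 1000 Hz.
fs = 16000
t = np.arange(2048) / fs
print(round(spectral_cog(np.sin(2 * np.pi * 1000 * t), fs)))
```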
4. Marin-Marin L, Costumero V, Ávila C, Pliatsikas C. Dynamic Effects of Immersive Bilingualism on Cortical and Subcortical Grey Matter Volumes. Front Psychol 2022;13:886222. [PMID: 35586234; PMCID: PMC9109104; DOI: 10.3389/fpsyg.2022.886222]
Abstract
Bilingualism has been shown to induce neuroplasticity in the brain, but conflicting evidence regarding its specific effects on grey matter continues to emerge, probably due to methodological differences between studies, as well as approaches that may miss the variability and dynamicity of bilingual experience. In our study, we devised a continuous score of bilingual experiences and investigated their non-linear effects on regional grey matter (GM) volume in a sample of young healthy participants from an immersive and naturalistic bilingual environment. We focused our analyses on cortical and subcortical regions that had been previously proposed as part of the bilingual speech pipeline and language control network. Our results showed a non-linear relationship between bilingualism score and grey matter volume of the inferior frontal gyrus. We also found linear increases in volumes of the putamen and cerebellum as a function of bilingualism score. These results are in line with predictions for immersive and naturalistic bilingual environments with increased intensity and diversity of language use, and provide further evidence supporting the dynamicity of bilingualism’s effects on brain structure.
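For intuition, here is a sketch of how a non-linear (e.g., quadratic) effect of a continuous experience score on regional volume might be tested with simple polynomial fits. The toy data and model-comparison criterion are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

# Hypothetical data: bilingualism experience score vs. regional GM volume.
rng = np.random.default_rng(1)
score = rng.uniform(0, 1, 100)
volume = 1.0 + 0.8 * score - 0.6 * score**2 + rng.normal(0, 0.05, 100)

# Compare linear and quadratic fits; a better quadratic fit (lower residual
# sum of squares, penalized for the extra term) suggests a non-linear effect.
for degree in (1, 2):
    coefs = np.polyfit(score, volume, degree)
    rss = np.sum((volume - np.polyval(coefs, score)) ** 2)
    n, k = len(score), degree + 1
    aic = n * np.log(rss / n) + 2 * k  # Akaike information criterion
    print(f"degree {degree}: RSS={rss:.3f}, AIC={aic:.1f}")
```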
Affiliation(s)
- Lidón Marin-Marin, Neuropsychology and Functional Neuroimaging Group, Department of Basic Psychology, Clinic and Psychobiology, Universitat Jaume I, Castelló de la Plana, Spain
- Victor Costumero, Neuropsychology and Functional Neuroimaging Group, Department of Basic Psychology, Clinic and Psychobiology, Universitat Jaume I, Castelló de la Plana, Spain
- César Ávila, School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom
- Christos Pliatsikas, School of Psychology and Clinical Language Sciences, University of Reading, Reading, United Kingdom; Centro de Investigación Nebrija en Cognición, Universidad Nebrija, Madrid, Spain
5. Cai X, Yin Y, Zhang Q. Online Control of Voice Intensity in Late Bilinguals' First and Second Language Speech Production: Evidence From Unexpected and Brief Noise Masking. J Speech Lang Hear Res 2021;64:1471-1489. [PMID: 33830851; DOI: 10.1044/2021_jslhr-20-00330]
Abstract
Purpose Speech production requires the combined efforts of feedforward control and feedback control subsystems. The primary purpose of this study is to explore whether the relative weighting of auditory feedback control differs between first language (L1) and second language (L2) production for late bilinguals. The authors also make an exploratory investigation into how bilinguals' speech fluency and speech perception relate to their auditory feedback control. Method Twenty Chinese-English bilinguals named Chinese or English bisyllabic words while being exposed to 30- or 60-dB unexpected brief masking noise. Variables of language (L1 or L2) and noise condition (quiet, weak noise, or strong noise) were manipulated in the experiment. L1 and L2 speech fluency tests and an L2 perception test were also included to measure bilinguals' speech fluency and auditory acuity. Results Peak intensity analyses indicated that the intensity increases in the weak noise and strong noise conditions were larger in L2-English than in L1-Chinese production. Intensity contour analysis showed that the intensity increases in both languages had an onset around 80-140 ms, a peak around 220-250 ms, and persisted until 400 ms post-vocalization onset. Correlation analyses also revealed that poorer speech fluency or L2 auditory acuity was associated with a larger Lombard effect. Conclusions For late bilinguals, the reliance on auditory feedback control is heavier in L2 than in L1 production. We empirically supported a relation between speech fluency and the relative weighting of auditory feedback control, and provided the first evidence for the production-perception link in L2 speech motor control.
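For intuition, a minimal intensity-contour computation of the kind the peak-intensity analysis implies: short-time RMS energy converted to dB. Frame length, reference level, and the toy signal are assumptions, not the study's settings.

```python
import numpy as np

def intensity_contour_db(signal, fs, frame_ms=10.0, ref=1.0):
    """Short-time RMS intensity in dB for a mono speech signal."""
    frame = int(fs * frame_ms / 1000)
    n = len(signal) // frame
    rms = np.array([np.sqrt(np.mean(signal[i*frame:(i+1)*frame] ** 2))
                    for i in range(n)])
    return 20 * np.log10(np.maximum(rms, 1e-12) / ref)

# Peak intensity and its latency relative to vocalization onset:
fs = 16000
sig = np.random.default_rng(2).normal(0, 0.1, fs)  # stand-in for a vowel
contour = intensity_contour_db(sig, fs)
peak_ms = np.argmax(contour) * 10.0  # frame_ms
print(f"peak {contour.max():.1f} dB at {peak_ms:.0f} ms")
```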
Affiliation(s)
- Xiao Cai, Department of Psychology, Renmin University of China, Beijing
- Yulong Yin, Department of Psychology, Renmin University of China, Beijing
- Qingfang Zhang, Department of Psychology, Renmin University of China, Beijing
6. Li JJ, Ayala S, Harel D, Shiller DM, McAllister T. Individual predictors of response to biofeedback training for second-language production. J Acoust Soc Am 2019;146:4625. [PMID: 31893730; PMCID: PMC6937206; DOI: 10.1121/1.5139423]
Abstract
While recent research suggests that visual biofeedback can facilitate speech production training in clinical populations and second language (L2) learners, individual learners' responsiveness to biofeedback is highly variable. This study investigated the hypothesis that the type of biofeedback provided, visual-acoustic versus ultrasound, could interact with individuals' acuity in auditory and somatosensory domains. Specifically, it was hypothesized that learners with lower acuity in a sensory domain would show greater learning in response to biofeedback targeting that domain. Production variability and phonological awareness were also investigated as predictors. Sixty female native speakers of English received 30 min of training, randomly assigned to feature visual-acoustic or ultrasound biofeedback, for each of two Mandarin vowels. On average, participants showed a moderate magnitude of improvement (decrease in Euclidean distance from a native-speaker target) across both vowels and biofeedback conditions. The hypothesis of an interaction between sensory acuity and biofeedback type was not supported, but phonological awareness and production variability were predictive of learning gains, consistent with previous research. Specifically, high phonological awareness and low production variability post-training were associated with better outcomes, although these effects were mediated by vowel target. This line of research could have implications for personalized learning in both L2 pedagogy and clinical practice.
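The outcome measure above, decrease in Euclidean distance from a native-speaker target, is straightforward to compute. A small sketch follows, with hypothetical formant values; the study's exact feature space and any normalization are assumptions here.

```python
import numpy as np

def formant_distance(produced, target):
    """Euclidean distance between formant vectors (e.g., [F1, F2] in Hz)."""
    return float(np.linalg.norm(np.asarray(produced) - np.asarray(target)))

# Learning gain: distance to the native target before minus after training.
target = [300.0, 2100.0]             # hypothetical native vowel target
pre, post = [430.0, 1750.0], [350.0, 1980.0]
gain = formant_distance(pre, target) - formant_distance(post, target)
print(f"improvement: {gain:.0f} Hz (positive = closer to target)")
```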
Affiliation(s)
- Joanne Jingwen Li, Department of Communicative Sciences and Disorders, New York University, 665 Broadway, Suite 900, New York, New York 10012, USA
- Samantha Ayala, Department of Communicative Sciences and Disorders, New York University, 665 Broadway, Suite 900, New York, New York 10012, USA
- Daphna Harel, Department of Applied Statistics, Social Science, and Humanities, New York University, 246 Greene Street, 3rd Floor, New York, New York 10003, USA
- Douglas M Shiller, École d'orthophonie et d'audiologie, Université de Montréal, Case Postale 6128, Succursale Centre-ville, Montréal, Québec, H3C 3J7, Canada
- Tara McAllister, Department of Communicative Sciences and Disorders, New York University, 665 Broadway, Suite 900, New York, New York 10012, USA
7. Interaction of the effects associated with auditory-motor integration and attention-engaging listening tasks. Neuropsychologia 2019;124:322-336. [PMID: 30444980; DOI: 10.1016/j.neuropsychologia.2018.11.006]
Abstract
A number of previous studies have implicated regions in posterior auditory cortex (AC) in auditory-motor integration during speech production. Other studies, in turn, have shown that activation in AC and adjacent regions in the inferior parietal lobule (IPL) is strongly modulated during active listening and depends on task requirements. The present fMRI study investigated whether auditory-motor effects interact with those related to active listening tasks in AC and IPL. In separate task blocks, our subjects performed either auditory discrimination or 2-back memory tasks on phonemic or nonphonemic vowels. They responded to targets by either overtly repeating the last vowel of a target pair, overtly producing a given response vowel, or by pressing a response button. We hypothesized that the requirements for auditory-motor integration, and the associated activation, would be stronger during repetition than production responses and during repetition of nonphonemic than phonemic vowels. We also hypothesized that if auditory-motor effects are independent of task-dependent modulations, then the auditory-motor effects should not differ during discrimination and 2-back tasks. We found that activation in AC and IPL was significantly modulated by task (discrimination vs. 2-back), vocal-response type (repetition vs. production), and motor-response type (vocal vs. button). Motor-response and task effects interacted in IPL but not in AC. Overall, the results support the view that regions in posterior AC are important in auditory-motor integration. However, the present study shows that activation in wide AC and IPL regions is modulated by the motor requirements of active listening tasks in a more general manner. Further, the results suggest that activation modulations in AC associated with attention-engaging listening tasks and those associated with auditory-motor performance are mediated by independent mechanisms.
8. Carey D, Nolan H, Kenny RA, Meaney J. Cortical covariance networks in ageing: Cross-sectional data from the Irish Longitudinal Study on Ageing (TILDA). Neuropsychologia 2019;122:51-61. [DOI: 10.1016/j.neuropsychologia.2018.11.013]
9. Carey D, Miquel ME, Evans BG, Adank P, McGettigan C. Vocal Tract Images Reveal Neural Representations of Sensorimotor Transformation During Speech Imitation. Cereb Cortex 2018;27:3064-3079. [PMID: 28334401; PMCID: PMC5939209; DOI: 10.1093/cercor/bhx056]
Abstract
Imitating speech necessitates the transformation from sensory targets to vocal tract motor output, yet little is known about the representational basis of this process in the human brain. Here, we address this question by using real-time MR imaging (rtMRI) of the vocal tract and functional MRI (fMRI) of the brain in a speech imitation paradigm. Participants trained on imitating a native vowel and a similar nonnative vowel that required lip rounding. Later, participants imitated these vowels and an untrained vowel pair during separate fMRI and rtMRI runs. Univariate fMRI analyses revealed that regions including left inferior frontal gyrus were more active during sensorimotor transformation (ST) and production of nonnative vowels, compared with native vowels; further, ST for nonnative vowels activated somatomotor cortex bilaterally, compared with ST of native vowels. Using representational similarity analysis (RSA) models constructed from participants’ vocal tract images and from stimulus formant distances, we found that searchlight analyses of fMRI data showed either type of model could be represented in somatomotor, temporal, cerebellar, and hippocampal neural activation patterns during ST. We thus provide the first evidence of widespread and robust cortical and subcortical neural representation of vocal tract and/or formant parameters during prearticulatory ST.
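A toy version of the RSA logic described above: build a model representational dissimilarity matrix (RDM) from stimulus formant distances and correlate it with a neural RDM from activation patterns. The data here are hypothetical stand-ins, not the study's images or searchlight procedure.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

# RSA in miniature: compare a model RDM built from stimulus formant
# distances with a neural RDM from (hypothetical) activation patterns.
formants = np.array([[300, 2100], [360, 1900], [500, 1500], [700, 1200]])
model_rdm = pdist(formants)               # pairwise stimulus distances

rng = np.random.default_rng(3)
patterns = rng.standard_normal((4, 50))   # 4 stimuli x 50 voxels
neural_rdm = pdist(patterns, metric="correlation")

rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural RDM correlation: rho={rho:.2f}, p={p:.2f}")
```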
Affiliation(s)
- Daniel Carey, Department of Psychology, Royal Holloway, University of London, London TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, London TW20 0EX, UK; The Irish Longitudinal Study on Ageing (TILDA), Department of Medical Gerontology, Trinity College Dublin, Dublin, Ireland
- Marc E Miquel, William Harvey Research Institute, Queen Mary, University of London, London EC1M 6BQ, UK; Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, UK
- Bronwen G Evans, Department of Speech, Hearing & Phonetic Sciences, University College London, London WC1E 6BT, UK
- Patti Adank, Department of Speech, Hearing & Phonetic Sciences, University College London, London WC1E 6BT, UK
- Carolyn McGettigan, Department of Psychology, Royal Holloway, University of London, London TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, London TW20 0EX, UK; Institute of Cognitive Neuroscience, University College London, London WC1N 3AR, UK
10. Carey D, Miquel ME, Evans BG, Adank P, McGettigan C. Functional brain outcomes of L2 speech learning emerge during sensorimotor transformation. Neuroimage 2017;159:18-31. [PMID: 28669904; DOI: 10.1016/j.neuroimage.2017.06.053]
Abstract
Sensorimotor transformation (ST) may be a critical process in mapping perceived speech input onto non-native (L2) phonemes, in support of subsequent speech production. Yet, little is known concerning the role of ST with respect to L2 speech, particularly where learned L2 phones (e.g., vowels) must be produced in more complex lexical contexts (e.g., multi-syllabic words). Here, we charted the behavioral and neural outcomes of producing trained L2 vowels at word level, using a speech imitation paradigm and functional MRI. We asked whether participants would be able to faithfully imitate trained L2 vowels when they occurred in non-words of varying complexity (one or three syllables). Moreover, we related individual differences in imitation success during training to BOLD activation during ST (i.e., pre-imitation listening), and during later imitation. We predicted that superior temporal and peri-Sylvian speech regions would show increased activation as a function of item complexity and non-nativeness of vowels, during ST. We further anticipated that pre-scan acoustic learning performance would predict BOLD activation for non-native (vs. native) speech during ST and imitation. We found individual differences in imitation success for training on the non-native vowel tokens in isolation; these were preserved in a subsequent task, during imitation of mono- and trisyllabic words containing those vowels. fMRI data revealed a widespread network involved in ST, modulated by both vowel nativeness and utterance complexity: superior temporal activation increased monotonically with complexity, showing greater activation for non-native than native vowels when presented in isolation and in trisyllables, but not in monosyllables. Individual differences analyses showed that learning versus lack of improvement on the non-native vowel during pre-scan training predicted increased ST activation for non-native compared with native items, at insular cortex, pre-SMA/SMA, and cerebellum. Our results underscore the importance of ST as a process underlying successful imitation of non-native speech.
Affiliation(s)
- Daniel Carey, Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK; The Irish Longitudinal Study on Ageing (TILDA), Department of Medical Gerontology, Trinity College Dublin, Dublin, Ireland
- Marc E Miquel, William Harvey Research Institute, Queen Mary, University of London, EC1M 6BQ, UK; Clinical Physics, Barts Health NHS Trust, London, EC1A 7BE, UK
- Bronwen G Evans, Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Patti Adank, Department of Speech, Hearing & Phonetic Sciences, University College London, WC1E 6BT, UK
- Carolyn McGettigan, Department of Psychology, Royal Holloway, University of London, TW20 0EX, UK; Combined Universities Brain Imaging Centre, Royal Holloway, University of London, TW20 0EX, UK; Institute of Cognitive Neuroscience, University College London, WC1N 3AR, UK
11. Kim SY, Liu L, Cao F. How does first language (L1) influence second language (L2) reading in the brain? Evidence from Korean-English and Chinese-English bilinguals. Brain Lang 2017;171:1-13. [PMID: 28437658; DOI: 10.1016/j.bandl.2017.04.003]
Abstract
To examine how L1 influences L2 reading in the brain, two late bilingual groups, Korean-English (KE) and Chinese-English (CE), performed a visual word rhyming judgment task in their L2 (English) and were compared to L1 control groups (i.e., KK and CC). The results indicated that the L2 activation is similar to the L1 activation for both KE and CE language groups. In addition, conjunction analyses revealed that the right inferior frontal gyrus and medial frontal gyrus were more activated in KK and KE than CC and CE, suggesting that these regions are more involved in Korean speakers than Chinese speakers for both L1 and L2. Finally, an ROI analysis at the left middle frontal gyrus revealed greater activation for CE than for KE and a positive correlation with accuracy in CE, but a negative correlation in KE. Taken together, we found evidence that important brain regions for L1 are carried over to L2 reading, perhaps more so in highly proficient bilinguals.
Affiliation(s)
- Say Young Kim, Department of Psychology, National University of Singapore, Singapore; Department of English Language and Literature, Sejong University, Seoul, Korea
- Li Liu, State Key Lab of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing, PR China
- Fan Cao, Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, USA
12. Barbeau EB, Chai XJ, Chen JK, Soles J, Berken J, Baum S, Watkins KE, Klein D. The role of the left inferior parietal lobule in second language learning: An intensive language training fMRI study. Neuropsychologia 2017;98:169-176. [DOI: 10.1016/j.neuropsychologia.2016.10.003]
13. Hervais-Adelman A, Moser-Mercer B, Murray MM, Golestani N. Cortical thickness increases after simultaneous interpretation training. Neuropsychologia 2017;98:212-219. [DOI: 10.1016/j.neuropsychologia.2017.01.008]
14. Mitchell RLC, Jazdzyk A, Stets M, Kotz SA. Recruitment of Language-, Emotion- and Speech-Timing Associated Brain Regions for Expressing Emotional Prosody: Investigation of Functional Neuroanatomy with fMRI. Front Hum Neurosci 2016;10:518. [PMID: 27803656; PMCID: PMC5067951; DOI: 10.3389/fnhum.2016.00518]
Abstract
We aimed to progress understanding of prosodic emotion expression by establishing brain regions active when expressing specific emotions, those activated irrespective of the target emotion, and those whose activation intensity varied depending on individual performance. BOLD contrast data were acquired whilst participants spoke nonsense words in happy, angry or neutral tones, or performed jaw movements. Emotion-specific analyses demonstrated that when expressing angry prosody, activated brain regions included the inferior frontal and superior temporal gyri, the insula, and the basal ganglia. When expressing happy prosody, the activated brain regions also included the superior temporal gyrus, insula, and basal ganglia, with additional activation in the anterior cingulate. Conjunction analysis confirmed that the superior temporal gyrus and basal ganglia were activated regardless of the specific emotion concerned. Nevertheless, disjunctive comparisons between the expression of angry and happy prosody established that anterior cingulate activity was significantly higher for angry prosody than for happy prosody production. Degree of inferior frontal gyrus activity correlated with the ability to express the target emotion through prosody. We conclude that expressing prosodic emotions (vs. neutral intonation) requires generic brain regions involved in comprehending numerous aspects of language, emotion-related processes such as experiencing emotions, and in the time-critical integration of speech information.
Affiliation(s)
- Rachel L C Mitchell, Centre for Affective Disorders, Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK
- Manuela Stets, Department of Psychology, University of Essex, Colchester, UK
- Sonja A Kotz, Section of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, Netherlands
15. Carey D, McGettigan C. Magnetic resonance imaging of the brain and vocal tract: Applications to the study of speech production and language learning. Neuropsychologia 2016;98:201-211. [PMID: 27288115; DOI: 10.1016/j.neuropsychologia.2016.06.003]
Abstract
The human vocal system is highly plastic, allowing for the flexible expression of language, mood and intentions. However, this plasticity is not stable throughout the life span, and it is well documented that adult learners encounter greater difficulty than children in acquiring the sounds of foreign languages. Researchers have used magnetic resonance imaging (MRI) to interrogate the neural substrates of vocal imitation and learning, and the correlates of individual differences in phonetic "talent". In parallel, a growing body of work using MR technology to directly image the vocal tract in real time during speech has offered primarily descriptive accounts of phonetic variation within and across languages. In this paper, we review the contribution of neural MRI to our understanding of vocal learning, and give an overview of vocal tract imaging and its potential to inform the field. We propose methods by which our understanding of speech production and learning could be advanced through the combined measurement of articulation and brain activity using MRI: specifically, we describe a novel paradigm, developed in our laboratory, that uses both MRI techniques to map directly, for the first time, between neural, articulatory and acoustic data in the investigation of vocalisation. This non-invasive, multimodal imaging method could be used to track central and peripheral correlates of spoken language learning, and speech recovery in clinical settings, as well as provide insights into potential sites for targeted neural interventions.
Affiliation(s)
- Daniel Carey, Department of Psychology, Royal Holloway, University of London, Egham, UK
- Carolyn McGettigan, Department of Psychology, Royal Holloway, University of London, Egham, UK
16. Elmer S, Kühnis J. Functional Connectivity in the Left Dorsal Stream Facilitates Simultaneous Language Translation: An EEG Study. Front Hum Neurosci 2016;10:60. [PMID: 26924976; PMCID: PMC4759282; DOI: 10.3389/fnhum.2016.00060]
Abstract
Cortical speech processing is dependent on the mutual interdependence of two distinctive processing streams supporting sound-to-meaning (i.e., ventral stream) and sound-to-articulation (i.e., dorsal stream) mapping. Here, we compared the strengths of intracranial functional connectivity between two main hubs of the dorsal stream, namely the left auditory-related cortex (ARC) and Broca’s region, in a sample of simultaneous interpreters (SIs) and multilingual control subjects while the participants performed a mixed and unmixed auditory semantic decision task. Under normal listening conditions, such tasks are known to initiate a spread of activation along the ventral stream. However, due to extensive and specific training, here we predicted that SIs would more strongly recruit the dorsal pathway in order to pre-activate the speech codes of the corresponding translation. In line with this reasoning, EEG results demonstrate increased left-hemispheric theta phase synchronization in SIs compared to multilingual control participants during early task-related processing stages. In addition, within the SI group functional connectivity strength in the left dorsal pathway was positively related to the cumulative number of training hours across the lifespan, and inversely correlated with the age of training commencement. Hence, we propose that the alignment of neuronal oscillations between brain regions involved in "hearing" and "speaking" results from an intertwining of training, sensitive period, and predisposition.
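The abstract reports theta-band phase synchronization; one standard way to quantify this is the phase-locking value (PLV) between band-limited signals. A minimal sketch follows, with the filter order and band edges as assumptions (the study's exact connectivity estimator may differ).

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def theta_plv(x, y, fs):
    """Phase-locking value between two signals in the theta band (4-7 Hz)."""
    b, a = butter(4, [4 / (fs / 2), 7 / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Example: a signal is strongly phase-locked with a delayed copy of itself.
fs, t = 250, np.arange(0, 10, 1 / 250)
x = np.sin(2 * np.pi * 5.5 * t)
x += 0.1 * np.random.default_rng(4).standard_normal(len(t))
print(f"PLV: {theta_plv(x, np.roll(x, 10), fs):.2f}")  # close to 1.0
```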
Affiliation(s)
- Stefan Elmer, Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
- Jürg Kühnis, Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
17. Thothathiri M, Rattinger M. Ventral and dorsal streams for choosing word order during sentence production. Proc Natl Acad Sci U S A 2015;112:15456-61. [PMID: 26621706; PMCID: PMC4687588; DOI: 10.1073/pnas.1514711112]
Abstract
Proficient language use requires speakers to vary word order and choose between different ways of expressing the same meaning. Prior statistical associations between individual verbs and different word orders are known to influence speakers' choices, but the underlying neural mechanisms are unknown. Here we show that distinct neural pathways are used for verbs with different statistical associations. We manipulated statistical experience by training participants in a language containing novel verbs and two alternative word orders (agent-before-patient, AP; patient-before-agent, PA). Some verbs appeared exclusively in AP, others exclusively in PA, and yet others in both orders. Subsequently, we used sparse sampling neuroimaging to examine the neural substrates as participants generated new sentences in the scanner. Behaviorally, participants showed an overall preference for AP order, but also increased PA order for verbs experienced in that order, reflecting statistical learning. Functional activation and connectivity analyses revealed distinct networks underlying the increased PA production. Verbs experienced in both orders during training preferentially recruited a ventral stream, indicating the use of conceptual processing for mapping meaning to word order. In contrast, verbs experienced solely in PA order recruited dorsal pathways, indicating the use of selective attention and sensorimotor integration for choosing words in the right order. These results show that the brain tracks the structural associations of individual verbs and that the same structural output may be achieved via ventral or dorsal streams, depending on the type of regularities in the input.
Affiliation(s)
- Malathi Thothathiri, Department of Speech and Hearing Science, The George Washington University, Washington, DC 20052
- Michelle Rattinger, Department of Speech and Hearing Science, The George Washington University, Washington, DC 20052
18. Simmonds AJ. A hypothesis on improving foreign accents by optimizing variability in vocal learning brain circuits. Front Hum Neurosci 2015;9:606. [PMID: 26582984; PMCID: PMC4631821; DOI: 10.3389/fnhum.2015.00606]
Abstract
Rapid vocal motor learning is observed when acquiring a language in early childhood, or learning to speak another language later in life. Accurate pronunciation is one of the hardest things for late learners to master and they are almost always left with a non-native accent. Here, I propose a novel hypothesis that this accent could be improved by optimizing variability in vocal learning brain circuits during learning. Much of the neurobiology of human vocal motor learning has been inferred from studies on songbirds. Jarvis (2004) proposed the hypothesis that, as in songbirds, there are two pathways in humans: one for learning speech (the striatal vocal learning pathway), and one for production of previously learnt speech (the motor pathway). Learning new motor sequences necessary for accurate non-native pronunciation is challenging, and I argue that in late learners of a foreign language the vocal learning pathway becomes inactive prematurely. The motor pathway is engaged once again and learners maintain their original native motor patterns for producing speech, resulting in speaking with a foreign accent. Further, I argue that variability in neural activity within vocal motor circuitry generates vocal variability that supports accurate non-native pronunciation. Recent theoretical and experimental work on motor learning suggests that variability in the motor movement is necessary for the development of expertise. I propose that there is little trial-by-trial variability when using the motor pathway. When using the vocal learning pathway, variability gradually increases, reflecting an exploratory phase in which learners try out different ways of pronouncing words, before decreasing and stabilizing once the "best" performance has been identified. The hypothesis proposed here could be tested using behavioral interventions that optimize variability and engage the vocal learning pathway for longer, with the prediction that this would allow learners to develop new motor patterns that result in more native-like pronunciation.
Affiliation(s)
- Anna J Simmonds, Division of Brain Sciences, Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Imperial College London, London, UK
19. Kartushina N, Hervais-Adelman A, Frauenfelder UH, Golestani N. The effect of phonetic production training with visual feedback on the perception and production of foreign speech sounds. J Acoust Soc Am 2015;138:817-832. [PMID: 26328698; DOI: 10.1121/1.4926561]
Abstract
Second-language learners often experience major difficulties in producing non-native speech sounds. This paper introduces a training method that uses a real-time analysis of the acoustic properties of vowels produced by non-native speakers to provide them with immediate, trial-by-trial visual feedback about their articulation alongside that of the same vowels produced by native speakers. The Mahalanobis acoustic distance between non-native productions and target native acoustic spaces was used to assess L2 production accuracy. The experiment shows that 1 h of training per vowel improves the production of four non-native Danish vowels: the learners' productions were closer to the corresponding Danish target vowels after training. The production performance of a control group remained unchanged. Comparisons of pre- and post-training vowel discrimination performance in the experimental group showed improvements in perception. Correlational analyses of training-related changes in production and perception revealed no relationship. These results suggest, first, that this training method is effective in improving non-native vowel production. Second, training purely on production improves perception. Finally, it appears that improvements in production and perception do not systematically progress at equal rates within individuals.
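The Mahalanobis distance used above to score production accuracy generalizes Euclidean distance by the covariance of the native target cloud, so deviations along acoustically variable dimensions count for less. A minimal sketch with hypothetical formant data (the study's exact acoustic features are an assumption here):

```python
import numpy as np

def mahalanobis(x, native_samples):
    """Mahalanobis distance from one production x to a native vowel cloud.

    native_samples : (n, d) array of native productions (e.g., F1/F2 in Hz).
    """
    mu = native_samples.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(native_samples, rowvar=False))
    d = np.asarray(x) - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical native vowel cloud and one learner production:
rng = np.random.default_rng(5)
native = rng.multivariate_normal([350, 1900], [[900, 0], [0, 10000]], size=50)
print(f"distance before training: {mahalanobis([450, 1650], native):.1f}")
```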
Affiliation(s)
- Natalia Kartushina, Laboratory of Experimental Psycholinguistics, Faculty of Psychology and Educational Sciences, University of Geneva, 42 bd du Pont d'Arve, 1205 Geneva, Switzerland
- Alexis Hervais-Adelman, Neuroscience Department, Brain and Language Lab, Faculty of Medicine, University of Geneva, Campus Biotech, 9 Chemin des Mines, 1211 Geneva, Switzerland
- Ulrich Hans Frauenfelder, Laboratory of Experimental Psycholinguistics, Faculty of Psychology and Educational Sciences, University of Geneva, 42 bd du Pont d'Arve, 1205 Geneva, Switzerland
- Narly Golestani, Neuroscience Department, Brain and Language Lab, Faculty of Medicine, University of Geneva, Campus Biotech, 9 Chemin des Mines, 1211 Geneva, Switzerland
20. A trade-off between somatosensory and auditory related brain activity during object naming but not reading. J Neurosci 2015;35:4751-9. [PMID: 25788691; DOI: 10.1523/jneurosci.2292-14.2015]
Abstract
The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty, but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across-subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The trade-off between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error-prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level.
21. Pamplona GSP, Santos Neto GS, Rosset SRE, Rogers BP, Salmon CEG. Analyzing the association between functional connectivity of the brain and intellectual performance. Front Hum Neurosci 2015;9:61. [PMID: 25713528; PMCID: PMC4322636; DOI: 10.3389/fnhum.2015.00061]
Abstract
Measurements of functional connectivity support the hypothesis that the brain is composed of distinct networks with anatomically separated nodes but common functionality. A few studies have suggested that intellectual performance may be associated with greater functional connectivity in the fronto-parietal network and enhanced global efficiency. In this fMRI study, we performed an exploratory analysis of the relationship between the brain's functional connectivity and intelligence scores derived from the Portuguese language version of the Wechsler Adult Intelligence Scale (WAIS-III) in a sample of 29 people, born and raised in Brazil. We examined functional connectivity between 82 regions, including graph theoretic properties of the overall network. Some previous findings were extended to the Portuguese-speaking population, specifically the presence of small-world organization of the brain and relationships of intelligence with connectivity of frontal, pre-central, parietal, occipital, fusiform and supramarginal gyrus, and caudate nucleus. Verbal comprehension was associated with global network efficiency, a new finding.
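A small illustration of the graph metrics mentioned above, global efficiency and the clustering that characterizes small-world organization, computed over a thresholded 82-region connectivity matrix with networkx. The threshold and toy data are assumptions, not the study's pipeline.

```python
import numpy as np
import networkx as nx

# Global efficiency of a binarized functional connectivity graph.
rng = np.random.default_rng(6)
n_regions = 82
corr = np.corrcoef(rng.standard_normal((n_regions, 200)))  # stand-in FC matrix

adjacency = (np.abs(corr) > 0.2) & ~np.eye(n_regions, dtype=bool)  # threshold
G = nx.from_numpy_array(adjacency.astype(int))

# Efficiency = mean of inverse shortest-path lengths over all node pairs;
# small-world networks combine high efficiency with high clustering.
print(f"global efficiency: {nx.global_efficiency(G):.2f}")
print(f"mean clustering:   {nx.average_clustering(G):.2f}")
```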
Affiliation(s)
- Gustavo S P Pamplona, InBrain Lab, Department of Physics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Gérson S Santos Neto, Faculty of Medicine of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Sara R E Rosset, Faculty of Medicine of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
- Baxter P Rogers, Department of Radiology and Radiological Sciences, Department of Biomedical Engineering, Institute of Imaging Science, Vanderbilt University, Nashville, TN, USA
- Carlos E G Salmon, InBrain Lab, Department of Physics, Faculty of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, São Paulo, Brazil
22. Behroozmand R, Shebek R, Hansen DR, Oya H, Robin DA, Howard MA, Greenlee JDW. Sensory-motor networks involved in speech production and motor control: an fMRI study. Neuroimage 2015;109:418-28. [PMID: 25623499; DOI: 10.1016/j.neuroimage.2015.01.040]
Abstract
Speaking is one of the most complex motor behaviors developed to facilitate human communication. The underlying neural mechanisms of speech involve sensory-motor interactions that incorporate feedback information for online monitoring and control of produced speech sounds. In the present study, we adopted an auditory feedback pitch perturbation paradigm and combined it with functional magnetic resonance imaging (fMRI) recordings in order to identify brain areas involved in speech production and motor control. Subjects underwent fMRI scanning while they produced a steady vowel sound /a/ (speaking) or listened to the playback of their own vowel production (playback). During each condition, the auditory feedback from vowel production was either normal (no perturbation) or randomly perturbed by an upward (+600 cents) pitch-shift stimulus. Analysis of BOLD responses during speaking (with and without shift) vs. rest revealed activation of a complex network including bilateral superior temporal gyrus (STG), Heschl's gyrus, precentral gyrus, supplementary motor area (SMA), Rolandic operculum, postcentral gyrus and right inferior frontal gyrus (IFG). Performance correlation analysis showed that the subjects produced compensatory vocal responses that significantly correlated with BOLD response increases in bilateral STG and left precentral gyrus. However, during playback, the activation network was limited to cortical auditory areas including bilateral STG and Heschl's gyrus. Moreover, the contrast between speaking vs. playback highlighted a distinct functional network that included bilateral precentral gyrus, SMA, IFG, postcentral gyrus and insula. These findings suggest that speech motor control involves feedback error detection in sensory (e.g. auditory) cortices that subsequently activate motor-related areas for the adjustment of speech parameters during speaking.
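The +600 cents perturbation has a fixed frequency interpretation: cents convert to a frequency ratio of 2^(cents/1200), so +600 cents multiplies f0 by about 1.414 (half an octave, a tritone). A one-line worked check:

```python
# A pitch shift in cents maps to a frequency ratio of 2**(cents / 1200).
def shifted_f0(f0_hz, cents):
    return f0_hz * 2 ** (cents / 1200)

# +600 cents applied to a 120 Hz voice:
print(f"{shifted_f0(120.0, 600):.1f} Hz")  # ~169.7 Hz, ratio ~1.414
```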
Affiliation(s)
- Roozbeh Behroozmand, Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA 52242, United States; Speech Neuroscience Lab, Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, United States
- Rachel Shebek, Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA 52242, United States
- Daniel R Hansen, Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA 52242, United States
- Hiroyuki Oya, Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA 52242, United States
- Donald A Robin, Research Imaging Institute, Departments of Neurology, Radiology and Biomedical Engineering, University of Texas Health Science Center San Antonio, San Antonio, TX 78229, United States
- Matthew A Howard, Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA 52242, United States
- Jeremy D W Greenlee, Human Brain Research Lab, Department of Neurosurgery, University of Iowa, Iowa City, IA 52242, United States
23. Sensory-motor integration during speech production localizes to both left and right plana temporale. J Neurosci 2014;34:12963-72. [PMID: 25253845; DOI: 10.1523/jneurosci.0336-14.2014]
Abstract
Speech production relies on fine voluntary motor control of respiration, phonation, and articulation. The cortical initiation of complex sequences of coordinated movements is thought to result in parallel outputs, one directed toward motor neurons while the "efference copy" projects to auditory and somatosensory fields. It is proposed that the latter encodes the expected sensory consequences of speech and compares expected with actual postarticulatory sensory feedback. Previous functional neuroimaging evidence has indicated that the cortical target for the merging of feedforward motor and feedback sensory signals is left-lateralized and lies at the junction of the supratemporal plane with the parietal operculum, located mainly in the posterior half of the planum temporale (PT). The design of these studies required participants to imagine speaking or generating nonverbal vocalizations in response to external stimuli. The resulting assumption is that verbal and nonverbal vocal motor imagery activates neural systems that integrate the sensory-motor consequences of speech, even in the absence of primary motor cortical activity or sensory feedback. The present human functional magnetic resonance imaging study used univariate and multivariate analyses to investigate both overt and covert (internally generated) propositional and nonpropositional speech (noun definition and counting, respectively). Activity in response to overt, but not covert, speech was present in bilateral anterior PT, with no increased activity observed in posterior PT or parietal opercula for either speech type. On this evidence, the response of the left and right anterior PTs better fulfills the criteria for sensory target and state maps during overt speech production.
24. Similarities and differences in brain activation and functional connectivity in first and second language reading: Evidence from Chinese learners of English. Neuropsychologia 2014;63:275-84. [DOI: 10.1016/j.neuropsychologia.2014.09.001]
25. Tomasino B, Marin D, Canderan C, Maieron M, Budai R, Fabbro F, Skrap M. Involuntary switching into the native language induced by electrocortical stimulation of the superior temporal gyrus: a multimodal mapping study. Neuropsychologia 2014;62:87-100. [PMID: 25058058; DOI: 10.1016/j.neuropsychologia.2014.07.011]
Abstract
We describe involuntary language switching from L2 to L1 evoked by electro-stimulation of the superior temporal gyrus in a 30-year-old right-handed Serbian (L1) speaker who was also a late Italian learner (L2). The patient underwent awake brain surgery. Stimulation of other portions of the exposed cortex did not cause language switching, nor did stimulation of the left inferior frontal gyrus, where we evoked a speech arrest. Stimulation effects on language switching were selective: they interfered with counting behaviour but not with object naming. The coordinates of the positive site were combined with functional and fibre-tracking (DTI) data. Results showed that the language-switching site belonged to a significant fMRI cluster in the left superior temporal gyrus/supramarginal gyrus found activated for both L1 and L2, and for both the patient and controls, and did not overlap with the inferior fronto-occipital fasciculus (IFOF), the inferior longitudinal fasciculus (ILF) or the superior longitudinal fasciculus (SLF). This area, also known as Stp, has a role in phonological processing. The language-switching phenomenon we observed can be partly explained by transient dysfunction of the feed-forward control mechanism hypothesized by the DIVA (Directions Into Velocities of Articulators) model (Golfinopoulos, Tourville, & Guenther, 2010).
Affiliation(s)
- Barbara Tomasino, IRCCS "E. Medea", Polo Regionale del Friuli Venezia Giulia, via della Bontà, 7, San Vito al Tagliamento, PN 33078, Italy
- Dario Marin, IRCCS "E. Medea", Polo Regionale del Friuli Venezia Giulia, via della Bontà, 7, San Vito al Tagliamento, PN 33078, Italy
- Cinzia Canderan, IRCCS "E. Medea", Polo Regionale del Friuli Venezia Giulia, via della Bontà, 7, San Vito al Tagliamento, PN 33078, Italy
- Marta Maieron, Fisica Medica, A.O.S. Maria della Misericordia, Udine, Italy
- Riccardo Budai, Unità Operativa di Neurologia, Neurofisiopatologia, A.O.S. Maria della Misericordia, Udine, Italy
- Franco Fabbro, IRCCS "E. Medea", Polo Regionale del Friuli Venezia Giulia, via della Bontà, 7, San Vito al Tagliamento, PN 33078, Italy; Dipartimento di Scienze Umane, Università di Udine, Italy
- Miran Skrap, Unità Operativa di Neurochirurgia, A.O.S. Maria della Misericordia, Udine, Italy
26
|
Krishnan S, Leech R, Mercure E, Lloyd-Fox S, Dick F. Convergent and Divergent fMRI Responses in Children and Adults to Increasing Language Production Demands. Cereb Cortex 2014; 25:3261-77. [PMID: 24907249 PMCID: PMC4585486 DOI: 10.1093/cercor/bhu120] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
In adults, patterns of neural activation associated with perhaps the most basic language skill—overt object naming—are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or do they solve this problem in a different way? Here we scanned 37 children aged 7–13 and 19 young adults who performed a well-normed picture-naming task with 3 levels of difficulty. While neural organization for naming was largely similar in childhood and adulthood, adults had greater activation in all naming conditions over inferior temporal gyri and superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults, but complexity-dependent decreases in children. These findings represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistics. We discuss how these neural differences might result from different cognitive strategies used by adults and children during lexical retrieval/production as well as developmental changes in brain structure and functional connectivity.
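The opposite complexity effects in the two groups can be summarized as a group difference in the linear slope of ROI activation across the three difficulty levels. A toy illustration on simulated data follows; the group sizes come from the abstract, but the effect sizes and noise levels are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
levels = np.array([1, 2, 3])          # easy / medium / hard naming

def complexity_slopes(n_subjects, gain):
    # Per-subject linear slope of ROI activation against difficulty.
    slopes = []
    for _ in range(n_subjects):
        signal = gain * levels + rng.normal(0, 0.5, 3)
        slopes.append(np.polyfit(levels, signal, 1)[0])
    return np.array(slopes)

adults = complexity_slopes(19, +0.4)    # complexity-dependent increases
children = complexity_slopes(37, -0.3)  # complexity-dependent decreases

t_val, p_val = stats.ttest_ind(adults, children)
print(f"adults vs children slope difference: t={t_val:.2f}, p={p_val:.1g}")
```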
Collapse
Affiliation(s)
- Saloni Krishnan
- Birkbeck-UCL Centre for NeuroImaging, London, UK Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
| | - Robert Leech
- Department of Neurosciences and Mental Health, Imperial College London, London, UK
| | | | - Sarah Lloyd-Fox
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
| | - Frederic Dick
- Birkbeck-UCL Centre for NeuroImaging, London, UK Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
| |
Collapse
|
27
|
Scott AD, Wylezinska M, Birch MJ, Miquel ME. Speech MRI: morphology and function. Phys Med 2014; 30:604-18. [PMID: 24880679 DOI: 10.1016/j.ejmp.2014.05.001] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/03/2014] [Revised: 04/24/2014] [Accepted: 05/01/2014] [Indexed: 11/27/2022] Open
Abstract
Magnetic Resonance Imaging (MRI) plays an increasing role in the study of speech. This article reviews the MRI literature of anatomical imaging, imaging for acoustic modelling and dynamic imaging. It describes existing imaging techniques attempting to meet the challenges of imaging the upper airway during speech and examines the remaining hurdles and future research directions.
Collapse
Affiliation(s)
- Andrew D Scott
- Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, United Kingdom; NIHR Cardiovascular Biomedical Research Unit, The Royal Brompton Hospital, Sydney Street, London SW3 6NP, United Kingdom
| | - Marzena Wylezinska
- Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, United Kingdom; Barts and The London NIHR CVBRU, London Chest Hospital, London E2 9JX, United Kingdom
| | - Malcolm J Birch
- Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, United Kingdom
| | - Marc E Miquel
- Clinical Physics, Barts Health NHS Trust, London EC1A 7BE, United Kingdom; Barts and The London NIHR CVBRU, London Chest Hospital, London E2 9JX, United Kingdom.
| |
Collapse
|
28
|
Simmonds AJ, Leech R, Iverson P, Wise RJS. The response of the anterior striatum during adult human vocal learning. J Neurophysiol 2014; 112:792-801. [PMID: 24805076 DOI: 10.1152/jn.00901.2013] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Research on mammals predicts that the anterior striatum is a central component of human motor learning. However, because vocalizations in most mammals are innate, much of the neurobiology of human vocal learning has been inferred from studies on songbirds. Essential for song learning is a pathway, the homolog of mammalian cortical-basal ganglia "loops," which includes the avian striatum. The present functional magnetic resonance imaging (fMRI) study investigated adult human vocal learning, a skill that persists throughout life, albeit imperfectly given that late-acquired languages are spoken with an accent. Monolingual adult participants were scanned while repeating novel non-native words. After training on the pronunciation of half the words for 1 wk, participants underwent a second scan. During scanning there was no external feedback on performance. Activity declined sharply in left and right anterior striatum, both within and between scanning sessions, and this change was independent of training and performance. This indicates that adult speakers rapidly adapt to the novel articulatory movements, possibly by using motor sequences from their native speech to approximate those required for the novel speech sounds. Improved accuracy correlated only with activity in motor-sensory perisylvian cortex. We propose that future studies on vocal learning, using different behavioral and pharmacological manipulations, will provide insights into adult striatal plasticity and its potential for modification in both educational and clinical contexts.
Collapse
Affiliation(s)
- Anna J Simmonds
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom; and
| | - Robert Leech
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom; and
| | - Paul Iverson
- Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, United Kingdom
| | - Richard J S Wise
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom; and
| |
Collapse
|
29
|
Abstract
The processing of brain information relies on the organization of neuronal networks and circuits that in the end must provide the substrate for human cognition. However, the presence of highly complex and multirelay neuronal interactions has limited our ability to disentangle the assemblies of brain systems. The present review article focuses on the latest developments to understand the architecture of functional streams of the human brain at the large-scale level. Particularly, this article presents a comprehensive framework and recent findings about how the highly modular sensory cortex, such as the visual, somatosensory, auditory, as well as motor cortex areas, connects to more parallel-organized cortical hubs in the brain's functional connectome.
Collapse
Affiliation(s)
- Jorge Sepulcre
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
| |
Collapse
|
30
|
Hashizume H, Taki Y, Sassa Y, Thyreau B, Asano M, Asano K, Takeuchi H, Nouchi R, Kotozaki Y, Jeong H, Sugiura M, Kawashima R. Developmental changes in brain activation involved in the production of novel speech sounds in children. Hum Brain Mapp 2014; 35:4079-89. [PMID: 24585739 DOI: 10.1002/hbm.22460] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2012] [Revised: 11/04/2013] [Accepted: 12/18/2013] [Indexed: 11/09/2022] Open
Abstract
Older children are more successful at producing unfamiliar, non-native speech sounds than younger children during the initial stages of learning. To reveal the neuronal underpinning of the age-related increase in the accuracy of non-native speech production, we examined the developmental changes in activation involved in the production of novel speech sounds using functional magnetic resonance imaging. Healthy right-handed children (aged 6-18 years) were scanned while performing an overt repetition task and a perceptual task involving aurally presented non-native and native syllables. Productions of non-native speech sounds were recorded and evaluated by native speakers. The mouth regions in the bilateral primary sensorimotor areas were activated more strongly during the repetition task than during the perceptual task. The hemodynamic response in the left inferior frontal gyrus pars opercularis (IFG pOp) specific to non-native speech sound production (defined by prior hypothesis) increased with age. Additionally, the accuracy of non-native speech sound production increased with age. These results provide the first evidence of developmental changes in the neural processes underlying the production of novel speech sounds. Our data further suggest that the recruitment of the left IFG pOp during the production of novel speech sounds was possibly enhanced due to the maturation of the neuronal circuits needed for speech motor planning. This, in turn, would lead to improvement in the ability to immediately imitate non-native speech.
Collapse
Affiliation(s)
- Hiroshi Hashizume
- Division of Developmental Cognitive Neuroscience, Institute of Development, Aging and Cancer, Tohoku University, Sendai, 980-8575, Japan
| | | | | | | | | | | | | | | | | | | | | | | |
Collapse
|
31
|
Seehausen M, Kazzer P, Bajbouj M, Heekeren HR, Jacobs AM, Klann-Delius G, Menninghaus W, Prehn K. Talking about social conflict in the MRI scanner: Neural correlates of being empathized with. Neuroimage 2014; 84:951-61. [DOI: 10.1016/j.neuroimage.2013.09.056] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2013] [Revised: 09/24/2013] [Accepted: 09/26/2013] [Indexed: 10/26/2022] Open
|
32
|
Abstract
Sensory comprehension and motor production of language symbols form the basis of human speech. Classical neuroanatomy has pointed to Wernicke's and Broca's areas as playing important roles in the integration of these 2 functions. However, recent studies have proposed that more direct pathways may exist between auditory input and motor output, bypassing Wernicke's and Broca's areas. We used functional network analyses to investigate potential auditory-motor (A-M) couplings between language-related cortices. We found that the interconnectivity of region OP4 of the parietal operculum (operculum parietale, OP) seems to play a critical role in the brain's A-M integration. This finding suggests a novel landscape of the functional neuroarchitecture that sustains language in humans.
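"Interconnectivity" in functional network analyses of this kind is commonly operationalized as the degree of a node in a thresholded correlation graph. The sketch below illustrates the idea on synthetic ROI time series; the region labels, threshold, and coupling strengths are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_t = 200
rois = ["OP4", "A1", "M1", "STG", "IPL"]   # hypothetical region labels

# Synthetic ROI time series in which the first region shares signal with
# all others, mimicking a hub-like auditory-motor relay.
hub = rng.normal(0, 1, n_t)
ts = np.column_stack(
    [hub] + [0.6 * hub + rng.normal(0, 1, n_t) for _ in rois[1:]]
)

r = np.corrcoef(ts.T)                       # ROI-by-ROI correlation matrix
adj = (np.abs(r) > 0.3) & ~np.eye(len(rois), dtype=bool)  # threshold edges
degree = adj.sum(axis=1)                    # node degree = "interconnectivity"

for name, d in zip(rois, degree):
    print(f"{name}: degree {d}")
```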
Collapse
Affiliation(s)
- Jorge Sepulcre
- Division of Nuclear Medicine and Molecular Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA and Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
| |
Collapse
|
33
|
Simmonds AJ, Wise RJS, Collins C, Redjep O, Sharp DJ, Iverson P, Leech R. Parallel systems in the control of speech. Hum Brain Mapp 2013; 35:1930-43. [PMID: 23723184 DOI: 10.1002/hbm.22303] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2012] [Revised: 01/24/2013] [Accepted: 03/19/2013] [Indexed: 11/10/2022] Open
Abstract
Modern neuroimaging techniques have advanced our understanding of the distributed anatomy of speech production, beyond that inferred from clinico-pathological correlations. However, much remains unknown about functional interactions between anatomically distinct components of this speech production network. One reason for this is the need to separate spatially overlapping neural signals supporting diverse cortical functions. We took three separate human functional magnetic resonance imaging (fMRI) datasets (two speech production, one "rest"). In each we decomposed the neural activity within the left posterior perisylvian speech region into discrete components. This decomposition robustly identified two overlapping spatio-temporal components, one centered on the left posterior superior temporal gyrus (pSTG), the other on the adjacent ventral anterior parietal lobe (vAPL). The pSTG was functionally connected with bilateral superior temporal and inferior frontal regions, whereas the vAPL was connected with other parietal regions, lateral and medial. Surprisingly, the components displayed spatial anti-correlation, in which the negative functional connectivity of each component overlapped with the other component's positive functional connectivity, suggesting that these two systems operate separately and possibly in competition. The speech tasks reliably modulated activity in both pSTG and vAPL suggesting they are involved in speech production, but their activity patterns dissociate in response to different speech demands. These components were also identified in subjects at "rest" and not engaged in overt speech production. These findings indicate that the neural architecture underlying speech production involves parallel distinct components that converge within posterior peri-sylvian cortex, explaining, in part, why this region is so important for speech production.
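The decomposition step described above, separating overlapping signals within one region into discrete spatio-temporal components, can be illustrated with a spatial ICA on a synthetic time-by-voxel matrix, followed by the correlation between the recovered spatial maps (the quantity behind the reported anti-correlation). Everything below is simulated and is not the authors' pipeline.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
n_t, n_vox = 200, 1000

# Ground truth: two sources whose spatial maps partly oppose each other.
map1 = rng.normal(0, 1, n_vox)
map2 = -0.6 * map1 + rng.normal(0, 0.8, n_vox)
ts = rng.normal(0, 1, (n_t, 2))
data = ts @ np.vstack([map1, map2]) + rng.normal(0, 0.1, (n_t, n_vox))

ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(data)     # recovered component time courses
maps = ica.mixing_.T                  # recovered spatial patterns (2 x n_vox)

# Spatial correlation between the two recovered maps; the sign of each
# ICA component is arbitrary, so only the magnitude is interpretable here.
r = np.corrcoef(maps)[0, 1]
print(f"spatial correlation between recovered maps: {r:+.2f}")
```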
Collapse
Affiliation(s)
- Anna J Simmonds
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
| | | | | | | | | | | | | |
Collapse
|
34
|
McGettigan C, Eisner F, Agnew ZK, Manly T, Wisbey D, Scott SK. T'ain't what you say, it's the way that you say it--left insula and inferior frontal cortex work in interaction with superior temporal regions to control the performance of vocal impersonations. J Cogn Neurosci 2013; 25:1875-86. [PMID: 23691984 DOI: 10.1162/jocn_a_00427] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Historically, the study of human identity perception has focused on faces, but the voice is also central to our expressions and experiences of identity [Belin, P., Fecteau, S., & Bedard, C. Thinking the voice: Neural correlates of voice perception. Trends in Cognitive Sciences, 8, 129-135, 2004]. Our voices are highly flexible and dynamic; talkers speak differently, depending on their health, emotional state, and the social setting, as well as extrinsic factors such as background noise. However, to date, there have been no studies of the neural correlates of identity modulation in speech production. In the current fMRI experiment, we measured the neural activity supporting controlled voice change in adult participants performing spoken impressions. We reveal that deliberate modulation of vocal identity recruits the left anterior insula and inferior frontal gyrus, supporting the planning of novel articulations. Bilateral sites in posterior superior temporal/inferior parietal cortex and a region in right middle/anterior STS showed greater responses during the emulation of specific vocal identities than for impressions of generic accents. Using functional connectivity analyses, we describe roles for these three sites in their interactions with the brain regions supporting speech planning and production. Our findings mark a significant step toward understanding the neural control of vocal identity, with wider implications for the cognitive control of voluntary motor acts.
Collapse
|
35
|
Abstract
During speech production, auditory processing of self-generated speech is used to adjust subsequent articulations. The current study investigated how the proposed auditory-motor interactions are manifest at the neural level in native and non-native speakers of English who were overtly naming pictures of objects and reading their written names. Data were acquired with functional magnetic resonance imaging and analyzed with dynamic causal modeling. We found that (1) higher activity in articulatory regions caused activity in auditory regions to decrease (i.e., auditory suppression), and (2) higher activity in auditory regions caused activity in articulatory regions to increase (i.e., auditory feedback). In addition, we were able to demonstrate that (3) speaking in a non-native language involves more auditory feedback and less auditory suppression than speaking in a native language. The difference between native and non-native speakers was further supported by finding that, within non-native speakers, there was less auditory feedback for those with better verbal fluency. Consequently, the networks of more fluent non-native speakers looked more like those of native speakers. Together, these findings provide a foundation on which to explore auditory-motor interactions during speech production in other human populations, particularly those with speech difficulties.
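Dynamic causal modeling estimates directed coupling parameters in a differential-equation model of interacting regions. The toy forward simulation below shows how a negative articulatory-to-auditory weight produces the "auditory suppression" pattern and a positive auditory-to-articulatory weight the "auditory feedback" pattern; all parameter values are invented, not estimates from the study.

```python
import numpy as np

# States: x = [articulatory, auditory]; dynamics dx/dt = A @ x + u.
a_to_s = -0.8   # articulatory -> auditory ("auditory suppression")
s_to_a = +0.4   # auditory -> articulatory ("auditory feedback")
A = np.array([[-1.0, s_to_a],
              [a_to_s, -1.0]])   # -1.0 terms: self-decay

dt, n_steps = 0.01, 3000
u = np.zeros((n_steps, 2))
u[500:1500, 0] = 1.0             # external "speak" input to articulation

x = np.zeros(2)
trace = np.empty((n_steps, 2))
for i in range(n_steps):
    x = x + dt * (A @ x + u[i])  # forward Euler integration
    trace[i] = x

print("articulatory peak:", round(trace[:, 0].max(), 3))
print("auditory state late in speaking (suppressed < 0):",
      round(trace[1400, 1], 3))
```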
Collapse
|
36
|
Howell P, Jiang J, Peng D, Lu C. Neural control of rising and falling tones in Mandarin speakers who stutter. BRAIN AND LANGUAGE 2012; 123:211-221. [PMID: 23122701 DOI: 10.1016/j.bandl.2012.09.010] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/23/2012] [Revised: 07/18/2012] [Accepted: 09/25/2012] [Indexed: 06/01/2023]
Abstract
Neural control of rising and falling tones in Mandarin speakers who stutter (PWS) was examined by comparison with that in fluent speakers [Howell, Jiang, Peng, and Lu (2012). Neural control of fundamental frequency rise and fall in Mandarin tones. Brain and Language, 121(1), 35-46]. Nine PWS and nine controls were scanned. Functional connectivity analysis showed that the connections between the insula and the laryngeal motor cortex (LMC), and between the LMC and the putamen, differed significantly between PWS and fluent speakers during both rising and falling tones. The connection between the insula and the brainstem differed between PWS and fluent speakers only during the falling tone. These results indicated that neural control of both the rising and the falling tone is affected in PWS. Moreover, whilst both tones were affected, falling-tone control appeared to be affected more.
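The connectivity comparison described here amounts to correlating two ROI time series within each participant and testing the Fisher-transformed correlations between groups. A schematic sketch on simulated data follows; group sizes match the abstract, while the coupling strengths are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_t = 180

def insula_lmc_corr(coupling):
    # One participant: two ROI time series sharing a common signal.
    shared = rng.normal(0, 1, n_t)
    insula = shared + rng.normal(0, 1, n_t)
    lmc = coupling * shared + rng.normal(0, 1, n_t)
    return np.corrcoef(insula, lmc)[0, 1]

pws = np.arctanh([insula_lmc_corr(0.4) for _ in range(9)])       # weaker
controls = np.arctanh([insula_lmc_corr(1.0) for _ in range(9)])  # stronger

t_val, p_val = stats.ttest_ind(controls, pws)
print(f"insula-LMC connectivity, controls vs PWS: t={t_val:.2f}, p={p_val:.2g}")
```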
Collapse
Affiliation(s)
- Peter Howell
- Division of Psychology and Language Sciences, University College London, UK.
| | | | | | | |
Collapse
|
37
|
Elmer S. The Investigation of Simultaneous Interpreters as an Alternative Approach to Address the Signature of Multilingual Speech Processing. ZEITSCHRIFT FUR NEUROPSYCHOLOGIE 2012. [DOI: 10.1024/1016-264x/a000068] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
In the field of cognitive neuroscience, understanding the functional, temporal, and anatomical characteristics of multilingual speech processing has been a topic of intense investigation. In this article, I describe how the study of simultaneous interpreters can serve as a fruitful alternative approach for better understanding the neuronal signature of multilingual speech processing, foreign language acquisition, and the functional and structural adaptivity of the human brain in general. I focus primarily on the commonalities underlying different degrees of speech competence rather than on the differences. In this context, particular emphasis is placed on the contribution of extra-linguistic brain functions that are necessary for accommodating cognitive and motor control mechanisms in the multilingual brain. The framework outlined in this article is not intended to replace the established psycholinguistic or neuroscientific models of speech processing, but to provide a novel, complementary perspective.
Collapse
Affiliation(s)
- Stefan Elmer
- Division Neuropsychology, Institute of Psychology, University of Zurich, Switzerland
| |
Collapse
|
38
|
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012; 62:816-47. [PMID: 22584224 PMCID: PMC3398395 DOI: 10.1016/j.neuroimage.2012.04.062] [Citation(s) in RCA: 1298] [Impact Index Per Article: 108.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2011] [Revised: 04/25/2012] [Accepted: 04/30/2012] [Indexed: 01/17/2023] Open
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again leading to some consistent and undisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Collapse
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.
| |
Collapse
|
39
|
Simmonds AJ, Wise RJS, Leech R. Two tongues, one brain: imaging bilingual speech production. Front Psychol 2011; 2:166. [PMID: 21811481 PMCID: PMC3139956 DOI: 10.3389/fpsyg.2011.00166] [Citation(s) in RCA: 29] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/20/2011] [Accepted: 07/02/2011] [Indexed: 11/13/2022] Open
Abstract
This review considers speaking in a second language from the perspective of motor-sensory control. Previous studies relating brain function to the prior acquisition of two or more languages (neurobilingualism) have investigated the differential demands made on linguistic representations and processes, and the role of domain-general cognitive control systems when speakers switch between languages. In contrast to the detailed discussion of these higher functions, articulation is typically treated only as an underspecified stage of simple motor output. The present review considers speaking in a second language in terms of the accompanying foreign accent, which places demands on the integration of motor and sensory discharges not encountered when articulating in the most fluent language. We consider why there has been so little emphasis on this aspect of bilingualism to date, before turning to the motor and sensory complexities involved in learning to speak a second language as an adult. This must involve retuning the neural circuits involved in the motor control of articulation, so that rapid, unfamiliar sequences of movements can be performed with the goal of approximating, as closely as possible, the speech of a native speaker. Accompanying these changes in motor networks is experience-dependent plasticity in auditory and somatosensory cortices, which integrates auditory memories of the target sounds, copies of feedforward commands from premotor and primary motor cortex, and post-articulatory auditory and somatosensory feedback. Finally, we consider the implications of taking a motor-sensory perspective on speaking a second language, both pedagogical, for non-native learners, and clinical, for speakers with neurological conditions such as dysarthria.
Collapse
Affiliation(s)
- Anna J Simmonds
- Medical Research Council Clinical Sciences Centre, Imperial College London UK
| | | | | |
Collapse
|