1. Zou T, Li L, Huang X, Deng C, Wang X, Gao Q, Chen H, Li R. Dynamic causal modeling analysis reveals the modulation of motor cortex and integration in superior temporal gyrus during multisensory speech perception. Cogn Neurodyn 2024;18:931-946. PMID: 38826672; PMCID: PMC11143173; DOI: 10.1007/s11571-023-09945-z.
Abstract
The processing of speech information from various sensory modalities is crucial for human communication. Both the left posterior superior temporal gyrus (pSTG) and the motor cortex are importantly involved in multisensory speech perception. However, how primary sensory regions dynamically integrate with the pSTG and the motor cortex remains unclear. Here, we implemented a behavioral experiment using the classical McGurk effect paradigm and acquired task functional magnetic resonance imaging (fMRI) data during synchronized audiovisual syllabic perception from 63 normal adults. We conducted dynamic causal modeling (DCM) analysis to explore the cross-modal interactions among the left pSTG, left precentral gyrus (PrG), left middle superior temporal gyrus (mSTG), and left fusiform gyrus (FuG). Bayesian model selection favored a winning model that included modulations of connections to PrG (mSTG → PrG, FuG → PrG), from PrG (PrG → mSTG, PrG → FuG), and to pSTG (mSTG → pSTG, FuG → pSTG). Moreover, the coupling strength of these connections correlated with behavioral McGurk susceptibility. In addition, the coupling strength of these connections differed significantly between strong and weak McGurk perceivers. Strong perceivers modulated less inhibitory visual influence, allowed less excitatory auditory information to flow into PrG, but integrated more audiovisual information in pSTG. Taken together, our findings show that the PrG and pSTG interact dynamically with primary cortices during audiovisual speech perception, and support the idea that the motor cortex plays a specific functional role in modulating the gain and salience between auditory and visual modalities. Supplementary Information: The online version contains supplementary material available at 10.1007/s11571-023-09945-z.
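As an illustrative sketch of the modeling framework this abstract refers to (not the authors' fitted model), DCM describes regional neuronal dynamics with a bilinear state equation, dx/dt = (A + uB)x + Cu, where A holds fixed connections, B their modulation by the experimental input u, and C the driving input. All coupling values below are hypothetical:

```python
import numpy as np

# Toy DCM-style neuronal model for the four regions named in the abstract.
# Coupling strengths are made up for illustration, not taken from the study.
regions = ["mSTG", "FuG", "PrG", "pSTG"]
n = len(regions)

A = np.array([  # fixed (endogenous) connections, 1/s; row = target, col = source
    [-0.5,  0.0,  0.2,  0.0],   # mSTG receives PrG -> mSTG
    [ 0.0, -0.5,  0.2,  0.0],   # FuG receives PrG -> FuG
    [ 0.3,  0.1, -0.5,  0.0],   # PrG receives mSTG -> PrG, FuG -> PrG
    [ 0.4,  0.3,  0.0, -0.5],   # pSTG receives mSTG -> pSTG, FuG -> pSTG
])
B = np.zeros((n, n))            # modulation of connections by the AV task
B[2, 0] = 0.20                  # mSTG -> PrG modulated
B[3, 0] = 0.15                  # mSTG -> pSTG modulated
C = np.array([1.0, 1.0, 0.0, 0.0])  # driving input to the sensory regions

def simulate(T=10.0, dt=0.01):
    """Euler-integrate dx/dt = (A + u*B) x + C*u with a boxcar stimulus."""
    steps = int(T / dt)
    x = np.zeros(n)
    trace = np.empty((steps, n))
    for t in range(steps):
        u = 1.0 if (t * dt) % 2.0 < 1.0 else 0.0  # 1 s on / 1 s off
        x = x + dt * ((A + u * B) @ x + C * u)
        trace[t] = x
    return trace

trace = simulate()
print(trace.shape)  # (1000, 4)
```

In an actual DCM analysis the A, B, and C parameters are estimated from the fMRI data under a hemodynamic forward model, and candidate B structures are compared with Bayesian model selection.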
Affiliation(s)
- Ting Zou, Liyuan Li, Xinju Huang, Chijun Deng, Xuyang Wang, Qing Gao, Huafu Chen, Rong Li
- All authors: The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, High-Field Magnetic Resonance Brain Imaging Key Laboratory of Sichuan Province, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, People's Republic of China
2. Boeve S, Möttönen R, Smalle EHM. Specificity of Motor Contributions to Auditory Statistical Learning. J Cogn 2024;7:25. PMID: 38370867; PMCID: PMC10870951; DOI: 10.5334/joc.351.
Abstract
Statistical learning is the ability to extract patterned information from continuous sensory signals. Recent evidence suggests that auditory-motor mechanisms play an important role in auditory statistical learning from speech signals. The question remains whether auditory-motor mechanisms support such learning generally or in a domain-specific manner. In Experiment 1, we tested the specificity of motor processes contributing to learning patterns from speech sequences. Participants either whispered or clapped their hands while listening to structured speech. In Experiment 2, we focused on auditory specificity, testing whether whispering equally affects learning patterns from speech and non-speech sequences. Finally, in Experiment 3, we examined whether learning patterns from speech and non-speech sequences are correlated. Whispering had a stronger effect than clapping on learning patterns from speech sequences in Experiment 1. Moreover, whispering impaired statistical learning more strongly from speech than non-speech sequences in Experiment 2. Interestingly, while participants in the non-speech tasks spontaneously synchronized their motor movements with the auditory stream more than participants in the speech tasks, the effect of the motor movements on learning was stronger in the speech domain. Finally, no correlation between speech and non-speech learning was observed. Overall, our findings support the idea that learning statistical patterns from speech versus non-speech relies on segregated mechanisms, and that the speech motor system contributes to auditory statistical learning in a highly specific manner.
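Structured streams in such studies are typically built so that transitional probabilities (TPs) are high within "words" and low at word boundaries. A minimal sketch with made-up syllables (not the study's stimuli):

```python
import random
from collections import Counter

# Statistical-learning stream: three made-up trisyllabic "words" concatenated
# in random order. TP(b | a) = count(a, b) / count(a) is 1.0 inside a word
# and roughly 1/3 at word boundaries.
random.seed(0)
words = [("pa", "bi", "ku"), ("ti", "bu", "do"), ("go", "la", "tu")]
stream = [syll for _ in range(200) for syll in random.choice(words)]

pair_counts = Counter(zip(stream, stream[1:]))
syll_counts = Counter(stream[:-1])

def tp(a, b):
    """Transitional probability of syllable b given syllable a."""
    return pair_counts[(a, b)] / syll_counts[a]

print(tp("pa", "bi"))        # within-word transition: 1.0
print(tp("ku", "pa"))        # word-boundary transition: much lower (~1/3)
```

Learners are said to have extracted the statistical structure when they can discriminate high-TP "words" from low-TP part-words spanning a boundary.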
Affiliation(s)
- Sam Boeve
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Riikka Möttönen
- Cognitive Science, Department of Digital Humanities, University of Helsinki, Helsinki, Finland
- Eleonore H. M. Smalle
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Department of Developmental Psychology, Tilburg University, Tilburg, Netherlands
3. Alho J, Samuelsson JG, Khan S, Mamashli F, Bharadwaj H, Losh A, McGuiggan NM, Graham S, Nayal Z, Perrachione TK, Joseph RM, Stoodley CJ, Hämäläinen MS, Kenet T. Both stronger and weaker cerebro-cerebellar functional connectivity patterns during processing of spoken sentences in autism spectrum disorder. Hum Brain Mapp 2023;44:5810-5827. PMID: 37688547; PMCID: PMC10619366; DOI: 10.1002/hbm.26478.
Abstract
Cerebellar differences have long been documented in autism spectrum disorder (ASD), yet the extent to which such differences might impact language processing in ASD remains unknown. To investigate this, we recorded brain activity with magnetoencephalography (MEG) while ASD and age-matched typically developing (TD) children passively processed spoken meaningful English and meaningless Jabberwocky sentences. Using a novel source localization approach that allows higher resolution MEG source localization of cerebellar activity, we found that, unlike TD children, ASD children showed no difference between evoked responses to meaningful versus meaningless sentences in right cerebellar lobule VI. ASD children also had atypically weak functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and several left-hemisphere sensorimotor and language regions in later time windows. In contrast, ASD children had atypically strong functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and primary auditory cortical areas in an earlier time window. The atypical functional connectivity patterns in ASD correlated with ASD severity and the ability to inhibit involuntary attention. These findings align with a model where cerebro-cerebellar speech processing mechanisms in ASD are impacted by aberrant stimulus-driven attention, which could result from atypical temporal information and predictions of auditory sensory events by right cerebellar lobule VI.
Affiliation(s)
- Jussi Alho
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- John G. Samuelsson
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Sheraz Khan
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Fahimeh Mamashli
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Hari Bharadwaj
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Department of Speech, Language, and Hearing Sciences, and Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- Ainsley Losh
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Nicole M. McGuiggan
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Steven Graham
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Zein Nayal
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Tyler K. Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts, USA
- Robert M. Joseph
- Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, Massachusetts, USA
- Catherine J. Stoodley
- Department of Psychology, College of Arts and Sciences, American University, Washington, DC, USA
- Matti S. Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Tal Kenet
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
4. Liang B, Li Y, Zhao W, Du Y. Bilateral human laryngeal motor cortex in perceptual decision of lexical tone and voicing of consonant. Nat Commun 2023;14:4710. PMID: 37543659; PMCID: PMC10404239; DOI: 10.1038/s41467-023-40445-0.
Abstract
Speech perception is believed to recruit the left motor cortex. However, the exact role of the laryngeal subregion and its right counterpart in speech perception, as well as their temporal patterns of involvement, remain unclear. To address these questions, we conducted a hypothesis-driven study, applying transcranial magnetic stimulation to the left or right dorsal laryngeal motor cortex (dLMC) while participants performed perceptual decisions on Mandarin lexical tone or consonant (voicing contrast) presented with or without noise. We used psychometric functions and a hierarchical drift-diffusion model to disentangle perceptual sensitivity and dynamic decision-making parameters. Results showed that bilateral dLMCs were engaged with effector specificity, and this engagement was left-lateralized with right upregulation in noise. Furthermore, the dLMC contributed to various decision stages depending on the hemisphere and task difficulty. These findings substantially advance our understanding of the hemispheric lateralization and temporal dynamics of bilateral dLMC in sensorimotor integration during speech perceptual decision-making.
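A hedged sketch of the psychometric-function approach mentioned above (synthetic data, not the study's stimuli or parameters): a logistic function is fit to proportion-correct responses across a stimulus continuum to estimate the point of subjective equality (PSE) and slope, the latter indexing perceptual sensitivity.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Logistic psychometric function with midpoint x0 and slope k."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Synthetic continuum (e.g. hypothetical voicing steps) and simulated
# binomial responses: 40 trials per level, true PSE = 0.5, true slope = 1.8.
levels = np.linspace(-3, 3, 7)
true_p = logistic(levels, 0.5, 1.8)
rng = np.random.default_rng(1)
observed = rng.binomial(40, true_p) / 40

(x0_hat, k_hat), _ = curve_fit(logistic, levels, observed, p0=(0.0, 1.0))
print(f"PSE = {x0_hat:.2f}, slope = {k_hat:.2f}")  # near the generating values
```

In a TMS study, a stimulation-induced drop in the fitted slope (shallower function) would indicate reduced perceptual sensitivity for that contrast.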
Affiliation(s)
- Baishen Liang
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Yanchang Li
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Wanying Zhao
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Yi Du
- Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 200031, China
- Chinese Institute for Brain Research, Beijing, 102206, China
5. Choi D, Yeung HH, Werker JF. Sensorimotor foundations of speech perception in infancy. Trends Cogn Sci 2023:S1364-6613(23)00124-9. PMID: 37302917; DOI: 10.1016/j.tics.2023.05.007.
Abstract
The perceptual system for speech is highly organized from early infancy. This organization bootstraps young human learners' ability to acquire their native speech and language from speech input. Here, we review behavioral and neuroimaging evidence that perceptual systems beyond the auditory modality are also specialized for speech in infancy, and that motor and sensorimotor systems can influence speech perception even in infants too young to produce speech-like vocalizations. These investigations complement existing literature on infant vocal development and on the interplay between speech perception and production systems in adults. We conclude that a multimodal speech and language network is present before speech-like vocalizations emerge.
Affiliation(s)
- Dawoon Choi
- Department of Psychology, Yale University, New Haven, CT, USA
- H Henny Yeung
- Department of Linguistics, Simon Fraser University, Burnaby, BC, Canada
- Janet F Werker
- Department of Psychology, University of British Columbia, Vancouver, BC, Canada
6. Dynamic auditory contributions to error detection revealed in the discrimination of Same and Different syllable pairs. Neuropsychologia 2022;176:108388. PMID: 36183800; DOI: 10.1016/j.neuropsychologia.2022.108388.
Abstract
During speech production, auditory regions operate in concert with the anterior dorsal stream to facilitate online error detection. As the dorsal stream is also known to be active during speech perception, the purpose of the current study was to probe the role of auditory regions in error detection during auditory discrimination tasks as stimuli are encoded and maintained in working memory. The a priori assumption is that sensory mismatch (i.e., error) occurs during the discrimination of Different (mismatched) but not Same (matched) syllable pairs. Independent component analysis was applied to raw EEG data recorded from 42 participants to identify bilateral auditory alpha rhythms, which were decomposed across time and frequency to reveal robust patterns of event-related synchronization (ERS; inhibition) and desynchronization (ERD; processing) over the time course of discrimination events. Results were characterized by bilateral peri-stimulus alpha ERD transitioning to alpha ERS in the late trial epoch, with ERD interpreted as evidence of working memory encoding via Analysis by Synthesis and ERS considered evidence of speech-induced suppression arising during covert articulatory rehearsal to facilitate working memory maintenance. The transition from ERD to ERS occurred later in the left hemisphere in Different trials than in Same trials, with ERD and ERS temporally overlapping during the early post-stimulus window. Results were interpreted to suggest that the sensory mismatch (i.e., error) arising from the comparison of the first and second syllable elicits further processing in the left hemisphere to support working memory encoding and maintenance. Results are consistent with auditory contributions to error detection during both encoding and maintenance stages of working memory, with encoding-stage error detection associated with stimulus concordance and maintenance-stage error detection associated with task-specific retention demands.
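ERD and ERS are conventionally quantified as the percentage change of band power relative to a pre-stimulus baseline, ERD% = (P − P_baseline) / P_baseline × 100, with negative values indicating desynchronization. A minimal sketch on a synthetic signal, assuming an alpha-band (8–13 Hz) Hilbert-envelope estimate of power:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Synthetic "EEG": a 10 Hz alpha rhythm whose amplitude is suppressed
# between 1 and 2.5 s (mimicking post-stimulus alpha ERD), plus noise.
fs = 250
t = np.arange(0, 4, 1 / fs)                    # first 1 s serves as baseline
alpha = np.sin(2 * np.pi * 10 * t)
amp = np.where((t > 1) & (t < 2.5), 0.4, 1.0)   # amplitude drop = ERD period
rng = np.random.default_rng(0)
eeg = amp * alpha + 0.1 * rng.standard_normal(t.size)

# Alpha-band power via band-pass filtering and the Hilbert envelope.
b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

baseline = power[t < 1].mean()
erd = (power - baseline) / baseline * 100       # % change vs. baseline

# Mean ERD% is markedly negative during the suppressed interval.
print(erd[(t > 1.2) & (t < 2.3)].mean())
```

A return of the signal above baseline power late in the trial would register as positive values (ERS) on the same scale.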
7. Li Z, Hong B, Wang D, Nolte G, Engel AK, Zhang D. Speaker-listener neural coupling reveals a right-lateralized mechanism for non-native speech-in-noise comprehension. Cereb Cortex 2022;33:3701-3714. PMID: 35975617; DOI: 10.1093/cercor/bhac302.
Abstract
While the increasingly globalized world has brought more and more demands for non-native language communication, the prevalence of background noise in everyday life poses a great challenge to non-native speech comprehension. The present study employed an interbrain approach based on functional near-infrared spectroscopy (fNIRS) to explore how people adapt to comprehend non-native speech information in noise. A group of Korean participants who acquired Chinese as their non-native language was invited to listen to Chinese narratives at 4 noise levels (no noise, 2 dB, -6 dB, and -9 dB). These narratives were real-life stories spoken by native Chinese speakers. Processing of the non-native speech was associated with significant fNIRS-based listener-speaker neural couplings mainly over the right hemisphere at both the listener's and the speaker's sides. More importantly, the neural couplings from the listener's right superior temporal gyrus, the right middle temporal gyrus, as well as the right postcentral gyrus were found to be positively correlated with their individual comprehension performance at the strongest noise level (-9 dB). These results provide interbrain evidence in support of the right-lateralized mechanism for non-native speech processing and suggest that both an auditory-based and a sensorimotor-based mechanism contributed to the non-native speech-in-noise comprehension.
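Interbrain coupling analyses of this kind commonly correlate the listener's time series with the speaker's over a range of listener lags, since the listener's brain tracks the speaker's with a delay. A minimal synthetic sketch (the sampling rate, 3 s lag, and coupling strength below are made up, not the study's values):

```python
import numpy as np

# Synthetic speaker/listener signals at an fNIRS-like sampling rate:
# the listener's signal follows the speaker's by 30 samples (3 s) plus noise.
fs = 10                                   # Hz
rng = np.random.default_rng(42)
speaker = rng.standard_normal(600)
true_lag = 30
listener = 0.6 * np.roll(speaker, true_lag) + 0.8 * rng.standard_normal(600)

def coupling(x, y, lag):
    """Pearson correlation of x with y shifted back by `lag` samples."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# Scan listener lags and pick the one with the strongest coupling.
best_lag = max(range(0, 61), key=lambda L: coupling(speaker, listener, L))
print(best_lag, round(coupling(speaker, listener, best_lag), 2))
```

In the actual interbrain literature, significance of such couplings is typically established against surrogate (shuffled-pair) distributions rather than raw correlation values.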
Affiliation(s)
- Zhuoran Li
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Bo Hong
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
- Daifa Wang
- School of Biological Science and Medical Engineering, Beihang University, Beijing 100083, China
- Guido Nolte
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, 20246 Hamburg, Germany
- Dan Zhang
- Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China
- Tsinghua Laboratory of Brain and Intelligence, Tsinghua University, Beijing 100084, China
8. Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021;222:105009. PMID: 34425411; DOI: 10.1016/j.bandl.2021.105009.
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine if SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults. Method: We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance. Results: Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv compared to the pSTS. Participants with lower scores in the baseline condition improved the most. Discussion: SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
9. Tang DL, McDaniel A, Watkins KE. Disruption of speech motor adaptation with repetitive transcranial magnetic stimulation of the articulatory representation in primary motor cortex. Cortex 2021;145:115-130. PMID: 34717269; PMCID: PMC8650828; DOI: 10.1016/j.cortex.2021.09.008.
Abstract
When auditory feedback perturbation is introduced in a predictable way over a number of utterances, speakers learn to compensate by adjusting their own productions, a process known as sensorimotor adaptation. Despite multiple lines of evidence indicating the role of primary motor cortex (M1) in motor learning and memory, whether M1 causally contributes to sensorimotor adaptation in the speech domain remains unclear. Here, we aimed to assay whether temporary disruption of the articulatory representation in left M1 by repetitive transcranial magnetic stimulation (rTMS) impairs speech adaptation. To induce sensorimotor adaptation, the frequencies of first formants (F1) were shifted up and played back to participants when they produced “head”, “bed”, and “dead” repeatedly (the learning phase). A low-frequency rTMS train (0.6 Hz, subthreshold, 12 min) over either the tongue or the hand representation of M1 (between-subjects design) was applied before participants experienced altered auditory feedback in the learning phase. We found that the group who received rTMS over the hand representation showed the expected compensatory response for the upwards shift in F1 by significantly reducing F1 and increasing the second formant (F2) frequencies in their productions. In contrast, these expected compensatory changes in both F1 and F2 did not occur in the group that received rTMS over the tongue representation. Critically, rTMS (subthreshold) over the tongue representation did not affect vowel production, which was unchanged from baseline. These results provide direct evidence that the articulatory representation in left M1 causally contributes to sensorimotor learning in speech. Furthermore, these results also suggest that M1 is critical to the network supporting a more global adaptation that aims to move the altered speech production closer to a learnt pattern of speech production used to produce another vowel.
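Sensorimotor adaptation of this kind is often summarized with a simple trial-by-trial error-correction model, in which each production is adjusted by a fraction of the perceived auditory error. The baseline F1, perturbation size, and learning rate below are hypothetical, not fitted to the study's data:

```python
# Toy error-correction model of adaptation to an upward F1 perturbation.
# All parameter values are illustrative only.
f1_baseline = 600.0     # Hz, hypothetical baseline F1 for a vowel
shift = 100.0           # Hz, upward F1 shift applied to auditory feedback
learning_rate = 0.15    # fraction of the heard error corrected per trial

f1 = f1_baseline
trace = []
for trial in range(60):
    heard = f1 + shift                  # altered auditory feedback
    error = heard - f1_baseline         # mismatch with the intended vowel
    f1 = f1 - learning_rate * error     # opposing (compensatory) adjustment
    trace.append(f1)

print(trace[-1] < f1_baseline)  # produced F1 drifts below baseline: True
```

Under this toy model, the rTMS disruption of the tongue representation would correspond to pushing the learning rate toward zero, leaving productions near baseline despite the perturbation; note that real speakers typically compensate only partially, whereas this sketch converges to full compensation.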
Affiliation(s)
- Ding-Lan Tang
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
- Alexander McDaniel
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
- Kate E Watkins
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
10. Jenson D, Saltuklaroglu T. Sensorimotor contributions to working memory differ between the discrimination of Same and Different syllable pairs. Neuropsychologia 2021;159:107947. PMID: 34216594; DOI: 10.1016/j.neuropsychologia.2021.107947.
Abstract
Sensorimotor activity during speech perception is both pervasive and highly variable, changing as a function of the cognitive demands imposed by the task. The purpose of the current study was to evaluate whether the discrimination of Same (matched) and Different (unmatched) syllable pairs elicit different patterns of sensorimotor activity as stimuli are processed in working memory. Raw EEG data recorded from 42 participants were decomposed with independent component analysis to identify bilateral sensorimotor mu rhythms from 36 subjects. Time frequency decomposition of mu rhythms revealed concurrent event related desynchronization (ERD) in alpha and beta frequency bands across the peri- and post-stimulus time periods, which were interpreted as evidence of sensorimotor contributions to working memory encoding and maintenance. Left hemisphere alpha/beta ERD was stronger in Different trials than Same trials during the post-stimulus period, while right hemisphere alpha/beta ERD was stronger in Same trials than Different trials. A between-hemispheres contrast revealed no differences during Same trials, while post-stimulus alpha/beta ERD was stronger in the left hemisphere than the right during Different trials. Results were interpreted to suggest that predictive coding mechanisms lead to repetition suppression effects in Same trials. Mismatches arising from predictive coding mechanisms in Different trials shift subsequent working memory processing to the speech-dominant left hemisphere. Findings clarify how sensorimotor activity differentially supports working memory encoding and maintenance stages during speech discrimination tasks and have potential to inform sensorimotor models of speech perception and working memory.
Affiliation(s)
- David Jenson
- Washington State University, Elson S. Floyd College of Medicine, Department of Speech and Hearing Sciences, Spokane, WA, USA
- Tim Saltuklaroglu
- University of Tennessee Health Science Center, College of Health Professions, Department of Audiology and Speech-Pathology, Knoxville, TN, USA
11. Asymmetry of Auditory-Motor Speech Processing is Determined by Language Experience. J Neurosci 2021;41:1059-1067. PMID: 33298537; PMCID: PMC7880293; DOI: 10.1523/jneurosci.1977-20.2020.
Abstract
Speech processing relies on interactions between auditory and motor systems and is asymmetrically organized in the human brain. The left auditory system is specialized for processing of phonemes, whereas the right is specialized for processing of pitch changes in speech affecting prosody. In speakers of tonal languages, however, processing of pitch (i.e., tone) changes that alter word meaning is left-lateralized indicating that linguistic function and language experience shape speech processing asymmetries. Here, we investigated the asymmetry of motor contributions to auditory speech processing in male and female speakers of tonal and non-tonal languages. We temporarily disrupted the right or left speech motor cortex using transcranial magnetic stimulation (TMS) and measured the impact of these disruptions on auditory discrimination (mismatch negativity; MMN) responses to phoneme and tone changes in sequences of syllables using electroencephalography (EEG). We found that the effect of motor disruptions on processing of tone changes differed between language groups: disruption of the right speech motor cortex suppressed responses to tone changes in non-tonal language speakers, whereas disruption of the left speech motor cortex suppressed responses to tone changes in tonal language speakers. In non-tonal language speakers, the effects of disruption of left speech motor cortex on responses to tone changes were inconclusive. For phoneme changes, disruption of left but not right speech motor cortex suppressed responses in both language groups. We conclude that the contributions of the right and left speech motor cortex to auditory speech processing are determined by the functional roles of acoustic cues in the listener's native language.SIGNIFICANCE STATEMENT The principles underlying hemispheric asymmetries of auditory speech processing remain debated. 
The asymmetry of speech sound processing is shaped not only by low-level acoustic cues but also by their linguistic function. By combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG), we investigated the asymmetry of motor contributions to auditory speech processing in tonal and non-tonal language speakers. We provide causal evidence that the functional role of the acoustic cues in the listener's native language affects the asymmetry of motor influences on auditory speech discrimination ability [indexed by mismatch negativity (MMN) responses]. Lateralized top-down motor influences can affect the asymmetry of speech processing in the auditory system.
Collapse
|
12
|
Michaelis K, Miyakoshi M, Norato G, Medvedev AV, Turkeltaub PE. Motor engagement relates to accurate perception of phonemes and audiovisual words, but not auditory words. Commun Biol 2021; 4:108. [PMID: 33495548 PMCID: PMC7835217 DOI: 10.1038/s42003-020-01634-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2020] [Accepted: 12/15/2020] [Indexed: 11/12/2022] Open
Abstract
A longstanding debate has surrounded the role of the motor system in speech perception, but progress in this area has been limited by tasks that only examine isolated syllables and conflate decision-making with perception. Using an adaptive task that temporally isolates perception from decision-making, we examined an EEG signature of motor activity (sensorimotor μ/beta suppression) during the perception of auditory phonemes, auditory words, audiovisual words, and environmental sounds while holding difficulty constant at two levels (Easy/Hard). Results revealed left-lateralized sensorimotor μ/beta suppression that was related to perception of speech but not environmental sounds. Audiovisual word and phoneme stimuli showed enhanced left sensorimotor μ/beta suppression for correct relative to incorrect trials, while auditory word stimuli showed enhanced suppression for incorrect trials. Our results demonstrate that motor involvement in perception is left-lateralized, is specific to speech stimuli, and is not simply the result of domain-general processes. These results provide evidence for an interactive network for speech perception in which dorsal stream motor areas are dynamically engaged during the perception of speech depending on the characteristics of the speech signal. Crucially, this motor engagement has different effects on the perceptual outcome depending on the lexicality and modality of the speech stimulus. Michaelis et al. used extra-cranial EEG during a forced-choice identification task to investigate the role of the motor system in speech perception. Their findings suggest that left hemisphere dorsal stream motor areas are dynamically engaged during speech perception based on the properties of the stimulus.
Collapse
Affiliation(s)
- Kelly Michaelis
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Human Cortical Physiology and Stroke Neurorehabilitation Section, National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health, Bethesda, MD, USA
| | - Makoto Miyakoshi
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, San Diego, CA, USA
| | - Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
| | - Andrei V Medvedev
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
| | - Peter E Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington, DC, USA.
| |
Collapse
|
13
|
Walker GM, Rollo PS, Tandon N, Hickok G. Effect of Bilateral Opercular Syndrome on Speech Perception. Neurobiol Lang 2021; 2:335-353. [PMID: 37213256 PMCID: PMC10158595 DOI: 10.1162/nol_a_00037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 03/23/2021] [Indexed: 05/23/2023]
Abstract
Speech perception ability and structural neuroimaging were investigated in two cases of bilateral opercular syndrome. Due to bilateral ablation of the motor control center for the lower face and surrounds, these rare cases provide an opportunity to evaluate the necessity of cortical motor representations for speech perception, a cornerstone of some neurocomputational theories of language processing. Speech perception, including audiovisual integration (i.e., the McGurk effect), was mostly unaffected in these cases, although verbal short-term memory impairment hindered performance on several tasks that are traditionally used to evaluate speech perception. The results suggest that the role of the cortical motor system in speech perception is context-dependent and supplementary, not inherent or necessary.
Collapse
Affiliation(s)
- Grant M. Walker
- Department of Cognitive Sciences, University of California, Irvine
- Corresponding Author
| | | | - Nitin Tandon
- Department of Neurosurgery, University of Texas Medical School at Houston
| | - Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine
- Department of Language Science, University of California, Irvine
| |
Collapse
|
14
|
Shamma S, Patel P, Mukherjee S, Marion G, Khalighinejad B, Han C, Herrero J, Bickel S, Mehta A, Mesgarani N. Learning Speech Production and Perception through Sensorimotor Interactions. Cereb Cortex Commun 2020; 2:tgaa091. [PMID: 33506209 PMCID: PMC7811190 DOI: 10.1093/texcom/tgaa091] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 11/19/2020] [Accepted: 11/23/2020] [Indexed: 12/21/2022] Open
Abstract
Action and perception are closely linked in many behaviors, necessitating close coordination between sensory and motor neural processes to achieve well-integrated, smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70-150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both "forward" (motor-to-sensory) and "inverse" directions between the higher-auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in "learning" to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.
Collapse
Affiliation(s)
- Shihab Shamma
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA; Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, 75005 Paris, France
| | - Prachi Patel
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Shoutik Mukherjee
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA
| | - Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, École Normale Supérieure, PSL University, 75005 Paris, France
| | - Bahar Khalighinejad
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Cong Han
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| | - Jose Herrero
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
| | - Stephan Bickel
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA
| | - Ashesh Mehta
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA; The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA
| | - Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
| |
Collapse
|
15
|
Yao S, Liebenthal E, Juvekar P, Bunevicius A, Vera M, Rigolo L, Golby AJ, Tie Y. Sex Effect on Presurgical Language Mapping in Patients With a Brain Tumor. Front Neurosci 2020; 14:4. [PMID: 32038154 PMCID: PMC6992642 DOI: 10.3389/fnins.2020.00004] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/26/2019] [Accepted: 01/06/2020] [Indexed: 12/12/2022] Open
Abstract
Differences between males and females in brain development and in the organization and hemispheric lateralization of brain functions have been described, including in language. Sex differences in language organization may have important implications for language mapping performed to assess, and minimize neurosurgical risk to, language function. This study examined the effect of sex on the activation and functional connectivity of the brain, measured with presurgical functional magnetic resonance imaging (fMRI) language mapping in patients with a brain tumor. We carried out a retrospective analysis of data from neurosurgical patients treated at our institution who met the criteria of pathological diagnosis (malignant brain tumor), tumor location (left hemisphere), and fMRI paradigms [sentence completion (SC); antonym generation (AG); and resting-state fMRI (rs-fMRI)]. Forty-seven patients (22 females, mean age = 56.0 years) were included in the study. Across the SC and AG tasks, females relative to males showed greater activation in limited areas, including the left inferior frontal gyrus classically associated with language. In contrast, males relative to females showed greater activation in extended areas beyond the classic language network, including the supplementary motor area (SMA) and precentral gyrus. The rs-fMRI functional connectivity of the left SMA was stronger with inferior temporal pole (TP) areas in the females, and with several midline areas in the males. The findings are overall consistent with theories of greater reliance on specialized language areas in females, and on generalized brain areas in males, for language function. Importantly, the findings suggest that sex could affect fMRI language mapping. Thus, considering sex as a variable in presurgical language mapping merits further investigation.
Collapse
Affiliation(s)
- Shun Yao
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Center for Pituitary Tumor Surgery, Department of Neurosurgery, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Wuhan School of Clinical Medicine, Southern Medical University, Wuhan, China
| | - Einat Liebenthal
- Department of Psychiatry, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Institute for Technology in Psychiatry, McLean Hospital, Harvard Medical School, Belmont, MA, United States
| | - Parikshit Juvekar
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
| | - Adomas Bunevicius
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
| | - Matthew Vera
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
| | - Laura Rigolo
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
| | - Alexandra J. Golby
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
- Department of Radiology, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
| | - Yanmei Tie
- Department of Neurosurgery, Brigham and Women’s Hospital, Harvard Medical School, Boston, MA, United States
| |
Collapse
|
16
|
Kwok VPY, Matthews S, Yakpo K, Tan LH. Neural correlates and functional connectivity of lexical tone processing in reading. Brain Lang 2019; 196:104662. [PMID: 31352216 DOI: 10.1016/j.bandl.2019.104662] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/21/2019] [Revised: 07/16/2019] [Accepted: 07/17/2019] [Indexed: 06/10/2023]
Abstract
Lexical tone processing in speech is mediated by bilateral superior temporal and inferior prefrontal regions, but little is known concerning the neural circuitries of lexical tone phonology in reading. Using fMRI, we examined the neural systems for lexical tone in visual Chinese word recognition. We found that the extraction of lexical tone phonology in print was subserved by bilateral fronto-parietal regions. Seed-to-voxel analyses showed functional connectivity between the right inferior frontal gyrus and the SMA, between the right middle frontal gyrus and the right inferior parietal lobule, and between the SMA and bilateral cingulate gyri. Our results indicate that in Chinese tone reading, a bilateral network of frontal, parietal, motor, and cingulate regions is engaged, without involvement of the temporal regions crucial for tone identification in the auditory domain. Although neural couplings for lexical tone processing differ to some degree between speech and reading, the motor cortex appears to be a key component independent of modality.
Collapse
Affiliation(s)
- Veronica P Y Kwok
- Center for Brain Disorders and Cognitive Science, Shenzhen University, Shenzhen 518060, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518057, China
| | - Stephen Matthews
- Department of Linguistics, University of Hong Kong, Pokfulam Road, Hong Kong
| | - Kofi Yakpo
- Department of Linguistics, University of Hong Kong, Pokfulam Road, Hong Kong
| | - Li Hai Tan
- Center for Brain Disorders and Cognitive Science, Shenzhen University, Shenzhen 518060, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen 518057, China.
| |
Collapse
|
17
|
Abstract
Recent evidence suggests that the motor system may have a facilitatory role in speech perception during noisy listening conditions. Studies clearly show an association between activity in auditory and motor speech systems, but also hint at a causal role for the motor system in noisy speech perception. However, in the most compelling "causal" studies, performance was only measured at a single signal-to-noise ratio (SNR). If listening conditions must be noisy to invoke causal motor involvement, then effects will be contingent on the SNR at which they are tested. We used articulatory suppression to disrupt motor-speech areas while measuring phonemic identification across a range of SNRs. As controls, we also measured phoneme identification during passive listening, mandible gesturing, and foot-tapping conditions. Two-parameter (threshold, slope) psychometric functions were fit to the data in each condition. Our findings indicate: (1) no effect of experimental task on psychometric function slopes; and (2) a small effect, of articulatory suppression in particular, on psychometric function thresholds. The size of the latter effect was 1 dB (~5% correct) on average, suggesting, at best, a minor modulatory role of the speech motor system in perception.
Collapse
Affiliation(s)
- Ryan C Stokes
- Department of Cognitive Sciences, Social and Behavioral Sciences Gateway, University of California - Irvine, Irvine, CA 92697-5100, USA.
| | - Jonathan H Venezia
- Department of Cognitive Sciences, Social and Behavioral Sciences Gateway, University of California - Irvine, Irvine, CA 92697-5100, USA
| | - Gregory Hickok
- Department of Cognitive Sciences, Social and Behavioral Sciences Gateway, University of California - Irvine, Irvine, CA 92697-5100, USA
| |
Collapse
|
18
|
Möttönen R, Adank P, Skipper JI. Sensorimotor Speech Processing: A Brief Introduction to the Special Issue. Brain Lang 2018; 187:18. [PMID: 30502818 DOI: 10.1016/j.bandl.2018.11.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/09/2023]
Affiliation(s)
- Riikka Möttönen
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
| | - Patti Adank
- Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London, London, United Kingdom
| | - Jeremy I Skipper
- Experimental Psychology, Division of Psychology and Language Sciences, University College London, London, United Kingdom
| |
Collapse
|
19
|
Hardy CJD, Bond RL, Jaisin K, Marshall CR, Russell LL, Dick K, Crutch SJ, Rohrer JD, Warren JD. Sensitivity of Speech Output to Delayed Auditory Feedback in Primary Progressive Aphasias. Front Neurol 2018; 9:894. [PMID: 30420829 PMCID: PMC6216253 DOI: 10.3389/fneur.2018.00894] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2018] [Accepted: 10/02/2018] [Indexed: 12/14/2022] Open
Abstract
Delayed auditory feedback (DAF) is a classical paradigm for probing sensorimotor interactions in speech output and has been studied in various disorders associated with speech dysfluency and aphasia. However, little information is available concerning the effects of DAF on degenerating language networks in primary progressive aphasia: the paradigmatic "language-led dementias." Here we studied two forms of speech output (reading aloud and propositional speech) under natural listening conditions (no feedback delay) and under DAF at 200 ms, in a cohort of 19 patients representing all major primary progressive aphasia syndromes vs. healthy older individuals and patients with other canonical dementia syndromes (typical Alzheimer's disease and behavioral variant frontotemporal dementia). Healthy controls and most syndromic groups showed a quantitatively or qualitatively similar profile of reduced speech output rate and increased speech error rate under DAF relative to natural auditory feedback. However, there was no group effect on propositional speech output rate under DAF in patients with nonfluent primary progressive aphasia and logopenic aphasia. Importantly, there was considerable individual variation in DAF sensitivity within syndromic groups, and some patients in each group (though no healthy controls) apparently benefited from DAF, showing paradoxically increased speech output rate and/or reduced speech error rate under DAF. This work suggests that DAF may be an informative probe of pathophysiological mechanisms underpinning primary progressive aphasia: identification of "DAF responders" may open up an avenue to novel therapeutic applications.
Collapse
Affiliation(s)
- Chris J D Hardy
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Rebecca L Bond
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Kankamol Jaisin
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom; Department of Psychiatry, Faculty of Medicine, Thammasat University, Pathum Thani, Thailand
| | - Charles R Marshall
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Lucy L Russell
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Katrina Dick
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Sebastian J Crutch
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Jonathan D Rohrer
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| | - Jason D Warren
- Department of Neurodegenerative Disease, Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London, United Kingdom
| |
Collapse
|