1
Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024; 36:2184-2207. PMID: 39023366; DOI: 10.1162/jocn_a_02224.
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine if sublexical SPiN performance can be enhanced by applying TMS to three regions involved in processing speech: the left posterior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas this association was null in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliation(s)
- Valérie Brisson
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
2
Vogt C, Floegel M, Kasper J, Gispert-Sánchez S, Kell CA. Oxytocinergic modulation of speech production-a double-blind placebo-controlled fMRI study. Soc Cogn Affect Neurosci 2023; 18:nsad035. PMID: 37384576; PMCID: PMC10348401; DOI: 10.1093/scan/nsad035.
Abstract
Many socio-affective behaviors, such as speech, are modulated by oxytocin. While oxytocin modulates speech perception, it is not known whether it also affects speech production. Here, we investigated effects of oxytocin administration and interactions with the functional rs53576 oxytocin receptor (OXTR) polymorphism on produced speech and its underlying brain activity. During functional magnetic resonance imaging, 52 healthy male participants read sentences out loud with either neutral or happy intonation; a covert reading condition served as a common baseline. Participants were studied once under the influence of intranasal oxytocin and in another session under placebo. Oxytocin administration increased the second formant of produced vowels. This acoustic feature has previously been associated with speech valence; however, the acoustic differences were not perceptually distinguishable in our experimental setting. When preparing to speak, oxytocin enhanced brain activity in sensorimotor cortices and regions of both the dorsal and the right ventral speech processing streams, as well as subcortical and cortical limbic and executive control regions. In some of these regions, the rs53576 OXTR polymorphism modulated oxytocin administration-related brain activity. Oxytocin also gated cortical-basal ganglia circuits involved in the generation of happy prosody. Our findings suggest that several neural processes underlying speech production are modulated by oxytocin, including control of not only affective intonation but also sensorimotor aspects during emotionally neutral speech.
Affiliation(s)
- Charlotte Vogt
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Mareike Floegel
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Johannes Kasper
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Suzana Gispert-Sánchez
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
- Experimental Neurology, Department of Neurology, Goethe University Frankfurt, Frankfurt am Main 60528, Germany
- Christian A Kell
- Department of Neurology and Brain Imaging Center Frankfurt, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt am Main 60528, Germany
3
Yuan B, Xie H, Wang Z, Xu Y, Zhang H, Liu J, Chen L, Li C, Tan S, Lin Z, Hu X, Gu T, Lu J, Liu D, Wu J. The domain-separation language network dynamics in resting state support its flexible functional segregation and integration during language and speech processing. Neuroimage 2023; 274:120132. PMID: 37105337; DOI: 10.1016/j.neuroimage.2023.120132.
Abstract
Modern linguistic theories and network science propose that language and speech processing are organized into hierarchical, segregated large-scale subnetworks, with a core dorsal (phonological) stream and a ventral (semantic) stream. The two streams are asymmetrically recruited in receptive and expressive language or speech tasks, showing flexible functional segregation and integration. We hypothesized that the functional segregation of the two streams is supported by underlying network segregation. A dynamic conditional correlation approach was employed to construct framewise time-varying language networks, and k-means clustering was employed to identify temporally recurring patterns. We found that the framewise language network dynamics in resting state were robustly clustered into four states, which dynamically reconfigured in a domain-separated manner. Spatially, the hub distributions of the first three states highly resembled the neurobiology of speech perception and lexical-phonological processing, speech production, and semantic processing, respectively. The fourth state was characterized by the weakest functional connectivity and was regarded as a baseline state. Temporally, the first three states appeared only in a small fraction of time bins (∼15%), and state 4 was dominant most of the time (>55%). Machine learning-based prediction analyses showed that the dynamic functional connectivity patterns of the four states significantly predicted individual linguistic performance. These findings suggest a domain-separated organization of language network dynamics in resting state, forming a dynamic "meta-network" framework that supports flexible functional segregation and integration during language and speech processing.
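The state-identification step this abstract describes — partitioning framewise connectivity matrices into a small set of recurring states with k-means — can be sketched in a few lines. This is an illustrative NumPy-only toy, not the study's actual pipeline; function and variable names are invented, and it uses deterministic farthest-point seeding followed by Lloyd iterations:

```python
import numpy as np

def cluster_fc_states(fc_frames, k=4, n_iter=50):
    """Cluster framewise functional-connectivity matrices into k recurring states.

    fc_frames: (n_frames, n_nodes, n_nodes) array, one FC matrix per time bin.
    Returns per-frame state labels and the fraction of time spent in each state.
    """
    n_frames, n_nodes, _ = fc_frames.shape
    iu = np.triu_indices(n_nodes, k=1)        # vectorize the upper triangle only
    X = fc_frames[:, iu[0], iu[1]]            # (n_frames, n_edges)

    # Farthest-point seeding: deterministic, spreads the initial centroids out.
    centroids = X[[0]]
    for _ in range(1, k):
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1).min(1)
        centroids = np.vstack([centroids, X[d.argmax()]])

    # Lloyd iterations: assign each frame to its nearest centroid, then update.
    for _ in range(n_iter):
        d = ((X[:, None, :] - centroids[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(0)

    # Occupancy: fraction of time bins assigned to each state.
    occupancy = np.bincount(labels, minlength=k) / n_frames
    return labels, occupancy
```

The occupancy vector corresponds to the temporal dominance reported above (e.g., a baseline state occupying most time bins).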
Affiliation(s)
- Binke Yuan
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, China; Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China.
- Hui Xie
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China; Department of Psychology, The University of Hong Kong, Hong Kong, China
- Zhihao Wang
- CNRS - Centre d'Economie de la Sorbonne, Panthéon-Sorbonne University, France
- Yangwen Xu
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento 38123, Italy
- Hanqing Zhang
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Jiaxuan Liu
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Lifeng Chen
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Chaoqun Li
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Shiyao Tan
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Zonghui Lin
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Xin Hu
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Tianyi Gu
- Institute for Brain Research and Rehabilitation, South China Normal University, Guangzhou, China
- Junfeng Lu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; Brain Function Laboratory, Neurosurgical Institute of Fudan University, Shanghai, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
- Dongqiang Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian, PR China.
- Jinsong Wu
- Glioma Surgery Division, Neurologic Surgery Department, Huashan Hospital, Shanghai Medical College, Fudan University, Shanghai, China; Brain Function Laboratory, Neurosurgical Institute of Fudan University, Shanghai, China; Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
4
Nathaniel U, Weiss Y, Barouch B, Katzir T, Bitan T. Start shallow and grow deep: The development of a Hebrew reading brain. Neuropsychologia 2022; 176:108376. PMID: 36181772; DOI: 10.1016/j.neuropsychologia.2022.108376.
Abstract
Brain plasticity implies that readers of different orthographies can have different reading networks. Theoretical models suggest that reading acquisition in transparent orthographies relies on mapping smaller orthographic units to phonology than does reading acquisition in opaque orthographies; but what are the neural mechanisms underlying this difference? Hebrew has a transparent (pointed) script used for beginners and a non-transparent script used by skilled readers. The current study examined developmental changes in brain regions associated with phonological and orthographic processes during reading of pointed and un-pointed words. Our results highlight some changes that are universal in reading development, such as a developmental increase in frontal involvement (in bilateral inferior frontal gyrus (IFG) pars opercularis) and an increase in left asymmetry (in IFG pars opercularis and superior temporal gyrus, STG) of the reading network. Our results also showed a developmental increase in activation in STG, which stands in contrast to previous studies in other orthographies. We further found an interaction of word length and diacritics in bilateral STG and the visual word form area (VWFA) across both groups. These findings suggest that children slightly adjust their reading depending on orthographic transparency, relying on smaller units when reading a transparent script and on larger units when reading an opaque script. Our results also showed that phonological abilities across groups correlated with activation in the VWFA, regardless of transparency, supporting the continued role of phonology at all levels of orthographic transparency. Our findings are consistent with multiple-route reading models, in which both phonological and orthographic processing of units of multiple sizes continue to play a role in children's reading of transparent and opaque scripts during reading development. The results further demonstrate the importance of taking into account differences between orthographies when constructing neural models of reading acquisition.
Affiliation(s)
- Upasana Nathaniel
- Psychology Department and Institute for Information Processing and Decision Making, University of Haifa, Israel; Integrated Brain and Behavior Center (IBBRC), University of Haifa, Israel.
- Yael Weiss
- Institute for Learning and Brain Sciences, University of Washington, Seattle, WA, USA
- Bechor Barouch
- Psychology Department and Institute for Information Processing and Decision Making, University of Haifa, Israel
- Tami Katzir
- Department of Learning Disabilities, The E.J. Safra Brain Research Center for the Study of Learning Disabilities, University of Haifa, Israel
- Tali Bitan
- Psychology Department and Institute for Information Processing and Decision Making, University of Haifa, Israel; Integrated Brain and Behavior Center (IBBRC), University of Haifa, Israel; Department of Speech Language Pathology, University of Toronto, Toronto, Canada
5
Liu X, He Y, Gao Y, Booth JR, Zhang L, Zhang S, Lu C, Liu L. Developmental differences of large-scale functional brain networks for spoken word processing. Brain Lang 2022; 231:105149. PMID: 35777141; DOI: 10.1016/j.bandl.2022.105149.
Abstract
A dual-stream dissociation between phonological and semantic processing has been implicated in adults' language processing, but it is unclear how this dissociation emerges over development. Employing a graph-theory-based brain network analysis, we compared the functional interaction architecture of children (aged 8-12) and adults (aged 19-26) during rhyming and meaning judgment tasks. We found that adults had stronger functional connectivity than children between the bilateral inferior frontal gyri and the left inferior parietal lobule in the rhyming task, and between the middle frontal gyrus and angular gyrus, and within occipital areas, in the meaning task. Moreover, adults but not children manifested between-task differences in these properties. In contrast, children had stronger functional connectivity strength or nodal degree in Heschl's gyrus, the superior temporal gyrus, and subcortical areas. Our findings indicate that the development of spoken word processing is characterized by increasing functional specialization, relying on the dorsal and ventral pathways for phonological and semantic processing, respectively.
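Nodal degree, one of the graph-theoretic measures compared here, is simply the number of supra-threshold connections a region has. A minimal NumPy sketch (the threshold value and names are illustrative, not the study's actual pipeline):

```python
import numpy as np

def nodal_degree(timeseries, threshold=0.3):
    """Graph-theoretic nodal degree from regional time series.

    timeseries: (n_timepoints, n_regions) array of regional signals.
    An edge links two regions whose Pearson correlation exceeds `threshold`;
    a node's degree is its number of edges.
    """
    fc = np.corrcoef(timeseries.T)     # (n_regions, n_regions) correlations
    np.fill_diagonal(fc, 0.0)          # ignore self-connections
    return (fc > threshold).sum(axis=1)
```

In practice, connectivity matrices are usually proportionally thresholded across a range of densities rather than at a single fixed cut-off, but the degree computation itself is the same.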
Affiliation(s)
- Xin Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/ McGovern, Institute for Brain Research, Beijing Normal University, Beijing 100875, China.
- Yin He
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/ McGovern, Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Yue Gao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/ McGovern, Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- James R Booth
- Department of Psychology and Human Development, Vanderbilt University, Nashville, TN 37203, USA
- Lihuan Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/ McGovern, Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Shudong Zhang
- Faculty of Education, Beijing Normal University, Beijing 100875, China
- Chunming Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/ McGovern, Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Li Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/ McGovern, Institute for Brain Research, Beijing Normal University, Beijing 100875, China.
6
Yue Q, Martin RC. Phonological Working Memory Representations in the Left Inferior Parietal Lobe in the Face of Distraction and Neural Stimulation. Front Hum Neurosci 2022; 16:890483. PMID: 35814962; PMCID: PMC9259857; DOI: 10.3389/fnhum.2022.890483.
Abstract
The neural basis of phonological working memory (WM) was investigated by examining the effects of irrelevant speech distractors and of disruptive neural stimulation from transcranial magnetic stimulation (TMS). Embedded-processes models argue that the same regions involved in speech perception support phonological WM, whereas buffer models assume that a region separate from speech perception regions supports WM. Thus, according to the embedded-processes approach but not the buffer approach, irrelevant speech and TMS to the speech perception region should disrupt the decoding of phonological WM representations. According to the buffer account, decoding of WM items should be possible in the buffer region despite distraction and should be disrupted by TMS to this region. Experiment 1 used fMRI and representational similarity analysis (RSA) with a delayed recognition memory paradigm using nonword stimuli. Results showed that decoding of memory items in the speech perception region (superior temporal gyrus, STG) was possible in the absence of distractors. However, the decoding evidence in the left STG was susceptible to interference from distractors presented during the delay period, whereas decoding in the proposed buffer region (supramarginal gyrus, SMG) persisted. Experiment 2 examined the causal roles of the speech processing region and the buffer region in phonological WM performance using TMS. TMS to the SMG during the early delay period disrupted recognition performance for the memory nonwords, whereas stimulation of the STG and of an occipital control region did not affect WM performance. Taken together, the results of the two experiments are consistent with the predictions of a buffer model of phonological WM, pointing to a critical role of the left SMG in maintaining phonological representations.
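The decoding logic behind such delay-period analyses can be illustrated with a toy correlation-based classifier: a test activation pattern is assigned the label of the condition template it correlates with most strongly. This is a sketch of the general approach, not the authors' exact RSA pipeline (names and data are invented):

```python
import numpy as np

def correlation_decode(train_patterns, train_labels, test_pattern):
    """Label a test voxel pattern by its correlation with condition templates.

    train_patterns: (n_trials, n_voxels); train_labels: (n_trials,).
    Each condition's template is the mean of its training patterns; the test
    pattern receives the label of the best-correlated template.
    """
    conditions = np.unique(train_labels)
    templates = [train_patterns[train_labels == c].mean(axis=0) for c in conditions]
    r = [np.corrcoef(t, test_pattern)[0, 1] for t in templates]
    return conditions[int(np.argmax(r))]
```

"Decoding evidence" in such analyses is then how reliably held-out patterns are classified, e.g., under cross-validation across runs.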
Affiliation(s)
- Qiuhai Yue
- Department of Psychological Sciences, Rice University, Houston, TX, United States
- Department of Psychology, Vanderbilt University, Nashville, TN, United States
- Randi C. Martin
- Department of Psychological Sciences, Rice University, Houston, TX, United States
7
Zhang L, Du Y. Lip movements enhance speech representations and effective connectivity in auditory dorsal stream. Neuroimage 2022; 257:119311. PMID: 35589000; DOI: 10.1016/j.neuroimage.2022.119311.
Abstract
Viewing a speaker's lip movements facilitates speech perception, especially under adverse listening conditions, but the neural mechanisms of this perceptual benefit at the phonemic and feature levels remain unclear. This fMRI study addressed this question by quantifying regional multivariate representation and network organization underlying audiovisual speech-in-noise perception. Behaviorally, valid lip movements improved recognition of place of articulation to aid phoneme identification. Meanwhile, lip movements enhanced neural representations of phonemes in left auditory dorsal stream regions, including frontal speech motor areas and the supramarginal gyrus (SMG). Moreover, neural representations of place-of-articulation and voicing features were promoted differentially by lip movements in these regions, with voicing enhanced in Broca's area and place of articulation better encoded in left ventral premotor cortex and SMG. Next, dynamic causal modeling (DCM) analysis showed that these local changes were accompanied by strengthened effective connectivity along the dorsal stream. Moreover, the neurite orientation dispersion of the left arcuate fasciculus, the structural backbone of the auditory dorsal stream, predicted the visual enhancements of neural representations and effective connectivity. Our findings provide novel insight for speech science: lip movements promote both local phonemic and feature encoding and network connectivity in the dorsal pathway, and this functional enhancement is mediated by the microstructural architecture of the circuit.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China 200031; Chinese Institute for Brain Research, Beijing, China 102206.
8
Tamura S, Hirose N, Mitsudo T, Hoaki N, Nakamura I, Onitsuka T, Hirano Y. Multi-modal imaging of the auditory-larynx motor network for voicing perception. Neuroimage 2022; 251:118981. PMID: 35150835; DOI: 10.1016/j.neuroimage.2022.118981.
Abstract
Voicing is one of the most important characteristics of phonetic speech sounds. Despite its importance, voicing perception mechanisms remain largely unknown. To explore auditory-motor networks associated with voicing perception, we first examined the brain regions that showed common activity for voicing production and perception using functional magnetic resonance imaging. Results indicated that the auditory and speech motor areas, together with the operculum parietale 4 (OP4), were activated during both voicing production and perception. Second, we used magnetoencephalography to examine the dynamic functional connectivity of the auditory-motor networks during a perceptual categorization task on /da/-/ta/ continuum stimuli varying in voice onset time (VOT) from 0 to 40 ms in 10 ms steps. Significant functional connectivity from the auditory cortical regions to the larynx motor area via OP4 was observed only when perceiving the stimulus with a VOT of 30 ms. In addition, regional activity analysis showed that the neural representation of VOT in the auditory cortical regions was mostly correlated with categorical perception of voicing but did not reflect the perception of the stimulus with a VOT of 30 ms. We suggest that the larynx motor area, which is considered to play a crucial role in voicing production, contributes to categorical perception of voicing by complementing temporal processing in the auditory cortical regions.
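Categorical perception along a VOT continuum like this one is conventionally summarized by fitting a logistic psychometric function to identification responses; the category boundary is the VOT at which the fitted curve crosses 50%. A self-contained sketch (illustrative only; this is the standard textbook analysis, not necessarily the study's exact fitting procedure):

```python
import numpy as np

def fit_vot_boundary(vot_ms, p_voiceless, lr=1.0, n_iter=5000):
    """Fit a logistic psychometric function to identification proportions.

    vot_ms: VOT values of the continuum (ms); p_voiceless: proportion of
    voiceless (/ta/) responses at each step. Fits p = sigmoid(a + b * x) by
    gradient descent on the cross-entropy, with x the standardized VOT, and
    returns the category boundary (VOT at p = 0.5) and the slope per ms.
    """
    vot_ms = np.asarray(vot_ms, dtype=float)
    p = np.asarray(p_voiceless, dtype=float)
    mu, sd = vot_ms.mean(), vot_ms.std()
    x = (vot_ms - mu) / sd                  # standardize for stable descent
    a = b = 0.0
    for _ in range(n_iter):
        pred = 1.0 / (1.0 + np.exp(-(a + b * x)))
        err = pred - p                      # gradient of the cross-entropy loss
        a -= lr * err.mean()
        b -= lr * (err * x).mean()
    boundary_ms = mu - (a / b) * sd         # where the fitted curve crosses 0.5
    return boundary_ms, b / sd
```

A steep fitted slope with a sharp boundary is the behavioral signature of categorical (rather than continuous) perception of the continuum.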
Affiliation(s)
- Shunsuke Tamura
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan.
- Nobuyuki Hirose
- Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
- Takako Mitsudo
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
- Itta Nakamura
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
- Toshiaki Onitsuka
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
- Yoji Hirano
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan; Neural Dynamics Laboratory, Research Service, VA Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, Boston, United States
9
Ylinen A, Wikman P, Leminen M, Alho K. Task-dependent cortical activations during selective attention to audiovisual speech. Brain Res 2022; 1775:147739. PMID: 34843702; DOI: 10.1016/j.brainres.2021.147739.
Abstract
Selective listening to speech depends on widespread networks of the brain, but how the involvement of different neural systems in speech processing is affected by factors such as the task performed by a listener and speech intelligibility remains poorly understood. We used functional magnetic resonance imaging to systematically examine the effects that performing different tasks has on neural activations during selective attention to continuous audiovisual speech in the presence of task-irrelevant speech. Participants viewed audiovisual dialogues and attended either to the semantic or the phonological content of speech, or ignored speech altogether and performed a visual control task. The tasks were factorially combined with good and poor auditory and visual speech qualities. Selective attention to speech engaged superior temporal regions and the left inferior frontal gyrus regardless of the task. Frontoparietal regions implicated in selective auditory attention to simple sounds (e.g., tones, syllables) were not engaged by the semantic task, suggesting that this network may not be as crucial when attending to continuous speech. The medial orbitofrontal cortex, implicated in social cognition, was most activated by the semantic task. Activity levels during the phonological task in the left prefrontal, premotor, and secondary somatosensory regions had a distinct temporal profile as well as the highest overall activity, possibly relating to the role of the dorsal speech processing stream in sub-lexical processing. Our results demonstrate that the task type influences neural activations during selective attention to speech, and emphasize the importance of ecologically valid experimental designs.
Affiliation(s)
- Artturi Ylinen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland.
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Neuroscience, Georgetown University, Washington D.C., USA
- Miika Leminen
- Analytics and Data Services, HUS Helsinki University Hospital, Helsinki, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
10
Nuttall HE, Maegherman G, Devlin JT, Adank P. Speech motor facilitation is not affected by ageing but is modulated by task demands during speech perception. Neuropsychologia 2021; 166:108135. PMID: 34958833; DOI: 10.1016/j.neuropsychologia.2021.108135.
Abstract
Motor areas for speech production activate during speech perception. Such activation may assist speech perception in challenging listening conditions. It is not known how ageing affects the recruitment of articulatory motor cortex during active speech perception. This study aimed to determine the effect of ageing on recruitment of speech motor cortex during speech perception. Single-pulse transcranial magnetic stimulation (TMS) was applied to the lip area of left primary motor cortex (M1) to elicit lip motor evoked potentials (MEPs). The M1 hand area was tested as a control site. TMS was applied whilst participants perceived syllables presented with noise (-10, 0, +10 dB SNRs) and without noise (clear). Participants detected and counted syllables throughout MEP recording. Twenty younger adults (aged 18-25) and twenty older adults (aged 65-80) participated in this study. Results indicated a significant interaction between age and noise condition in the syllable task: older adults misidentified syllables significantly more often in the 0 dB SNR condition, and missed syllables in the -10 dB SNR condition, relative to the clear condition, whereas there were no differences between conditions for younger adults. There was a significant main effect of noise level on lip MEPs: lip MEPs were unexpectedly inhibited in the 0 dB SNR condition relative to the clear condition, with no interaction between age group and noise condition. There was no main effect of noise or age group on control hand MEPs. These data suggest that speech-induced facilitation in articulatory motor cortex is abolished when performing a challenging secondary task, irrespective of age.
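Facilitation and inhibition in MEP designs like this one are commonly expressed as each condition's mean amplitude relative to the baseline (clear) condition, so values above 100% indicate facilitation and values below 100% indicate inhibition. A minimal sketch (names and numbers are illustrative, not the study's data):

```python
import numpy as np

def mep_modulation(condition_amplitudes, baseline_amplitudes):
    """Express mean MEP amplitude per condition as % of the baseline mean.

    condition_amplitudes: dict of condition name -> array of peak-to-peak MEP
    amplitudes (mV). Returns >100 for facilitation, <100 for inhibition.
    """
    base = float(np.mean(baseline_amplitudes))
    return {cond: 100.0 * float(np.mean(amps)) / base
            for cond, amps in condition_amplitudes.items()}
```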
Affiliation(s)
- Helen E Nuttall
- Department of Psychology, Lancaster University, Fylde College, Fylde Avenue, Lancaster, LA1 4YF, UK.
- Gwijde Maegherman
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, UK
- Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London, WC1H 0AP, UK
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London, WC1N 1PF, UK
11
Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021; 222:105009. PMID: 34425411; DOI: 10.1016/j.bandl.2021.105009.
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine if SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults.
METHOD: We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance.
RESULTS: Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv compared to the pSTS. Participants with lower scores in the baseline condition improved the most.
DISCUSSION: SPiN difficulties can be reduced by enhancing activity within the left speech-processing network in adults. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Collapse
Affiliation(s)
- Valérie Brisson
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
| | - Pascale Tremblay
- Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada.
| |
Collapse
|
12
|
Coffey BJ, Threlkeld ZD, Foulkes AS, Bodien YG, Edlow BL. Reemergence of the language network during recovery from severe traumatic brain injury: A pilot functional MRI study. Brain Inj 2021; 35:1552-1562. [PMID: 34546806 DOI: 10.1080/02699052.2021.1972455] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
PRIMARY OBJECTIVE We hypothesized that, in patients with acute severe traumatic brain injury (TBI) who recover basic language function, speech-evoked blood-oxygen-level-dependent (BOLD) functional MRI (fMRI) responses within the canonical language network increase over the first 6 months post-injury. RESEARCH DESIGN We conducted a prospective, longitudinal fMRI pilot study of adults with acute severe TBI admitted to the intensive care unit. We also enrolled age- and sex-matched healthy subjects. METHODS AND PROCEDURES We evaluated BOLD signal in bilateral superior temporal gyrus (STG) and inferior frontal gyrus (IFG) regions of interest acutely and approximately 6 months post-injury. Given evidence that regions outside the canonical language network contribute to language processing, we also performed exploratory whole-brain analyses. MAIN OUTCOMES AND RESULTS Of the 16 patients enrolled, eight returned for follow-up fMRI, all of whom recovered basic language function. We observed speech-evoked longitudinal BOLD increases in the left STG, but not in the right STG, right IFG, or left IFG. Whole-brain analysis revealed increases in the right supramarginal and middle temporal gyri but no differences between patients and healthy subjects (n = 16). CONCLUSION This pilot study suggests that, in patients with severe TBI who recover language function, speech-evoked responses in bihemispheric language-processing cortex reemerge by 6 months post-injury.
Collapse
Affiliation(s)
- Brian J Coffey
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Neurology, University of Florida Health, University of Florida College of Medicine, Gainesville, Florida, USA
| | - Zachary D Threlkeld
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Neurology, Stanford University School of Medicine, Stanford, California, USA
| | - Andrea S Foulkes
- Department of Medicine, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
| | - Yelena G Bodien
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Physical Medicine and Rehabilitation, Spaulding Rehabilitation Hospital, Charlestown, Massachusetts, USA
| | - Brian L Edlow
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts, USA
| |
Collapse
|
13
|
Mao J, Liu L, Perkins K, Cao F. Poor reading is characterized by a more connected network with wrong hubs. BRAIN AND LANGUAGE 2021; 220:104983. [PMID: 34174464 DOI: 10.1016/j.bandl.2021.104983] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Revised: 06/01/2021] [Accepted: 06/15/2021] [Indexed: 06/13/2023]
Abstract
Using graph theory, we examined topological organization of the language network in Chinese children with poor reading during an auditory rhyming task and a visual spelling task, compared to reading-matched controls and age-matched controls. First, poor readers (PR) showed reduced clustering coefficient in the left inferior frontal gyrus (IFG) and higher nodal efficiency in the bilateral superior temporal gyri (STG) during the visual task, indicating a less functionally specialized cluster around the left IFG and stronger functional links between bilateral STGs and other regions. Furthermore, PR adopted additional right-hemispheric hubs in both tasks, which may explain increased global efficiency across both tasks and lower normalized characteristic shortest path length in the visual task for the PR. These results underscore deficits in the left IFG during visual word processing and confirm previous findings about compensation in the right hemisphere in children with poor reading.
Collapse
Affiliation(s)
- Jiaqi Mao
- Department of Psychology, Sun Yat-Sen University, China
| | - Lanfang Liu
- Department of Psychology, Sun Yat-Sen University, China
| | - Kyle Perkins
- Department of Teaching and Learning, College of Arts, Sciences and Education, Florida International University, United States
| | - Fan Cao
- Department of Psychology, Sun Yat-Sen University, China.
| |
Collapse
|
14
|
Lin IF, Itahashi T, Kashino M, Kato N, Hashimoto RI. Brain activations while processing degraded speech in adults with autism spectrum disorder. Neuropsychologia 2021; 152:107750. [PMID: 33417913 DOI: 10.1016/j.neuropsychologia.2021.107750] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 12/14/2020] [Accepted: 12/31/2020] [Indexed: 11/17/2022]
Abstract
Individuals with autism spectrum disorder (ASD) have been found to have difficulty understanding speech in adverse conditions. In this study, we used noise-vocoded speech (VS) to investigate neural processing of degraded speech in individuals with ASD. We ran fMRI experiments in the ASD group and a typically developed control (TDC) group while they listened to clear speech (CS), VS, and spectrally rotated VS (SRVS); they were asked to attend to the heard sentence and report whether or not it was intelligible. The VS used in this experiment was spectrally degraded but still intelligible, whereas the SRVS was unintelligible. We recruited 21 right-handed adult males with ASD and 24 age-matched, right-handed male TDC participants for this experiment. Compared with the TDC group, we observed reduced functional connectivity (FC) between the left dorsal premotor cortex and left temporoparietal junction in the ASD group for the effect of task difficulty in speech processing, computed as VS-(CS + SRVS)/2. Furthermore, the observed reduced FC was negatively correlated with their Autism-Spectrum Quotient scores. This observation supports our hypothesis that the disrupted dorsal stream for the attentive processing of degraded speech in individuals with ASD might be related to their difficulty understanding speech in adverse conditions.
Collapse
Affiliation(s)
- I-Fan Lin
- Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0124, Japan; Department of Medicine, Taipei Medical University, Taipei, Taiwan, 11031; Department of Occupational Medicine, Shuang Ho Hospital, New Taipei City, Taiwan, 23561.
| | - Takashi Itahashi
- Medical Institute of Developmental Disabilities Research, Showa University Karasuyama Hospital, Tokyo, 157-8577, Japan
| | - Makio Kashino
- Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0124, Japan; School of Engineering, Tokyo Institute of Technology, Yokohama, 226-8503, Japan; Graduate School of Education, University of Tokyo, Tokyo, 113-0033, Japan
| | - Nobumasa Kato
- Medical Institute of Developmental Disabilities Research, Showa University Karasuyama Hospital, Tokyo, 157-8577, Japan
| | - Ryu-Ichiro Hashimoto
- Medical Institute of Developmental Disabilities Research, Showa University Karasuyama Hospital, Tokyo, 157-8577, Japan; Department of Language Sciences, Tokyo Metropolitan University, Tokyo, 192-0364, Japan.
| |
Collapse
|
15
|
Tiksnadi A, Murakami T, Wiratman W, Matsumoto H, Ugawa Y. Direct comparison of efficacy of the motor cortical plasticity induction and the interindividual variability between TBS and QPS. Brain Stimul 2020; 13:1824-1833. [PMID: 33144269 DOI: 10.1016/j.brs.2020.10.014] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2019] [Revised: 10/04/2020] [Accepted: 10/23/2020] [Indexed: 11/27/2022] Open
Abstract
BACKGROUND Theta burst stimulation (TBS) and quadripulse stimulation (QPS) are known to induce synaptic plasticity in humans. There have been no head-to-head comparisons of the efficacy and variability between TBS and QPS. OBJECTIVE To compare the efficacy and interindividual variability between the original TBS and QPS protocols. We hypothesized that QPS would be more effective and less variable than TBS. METHODS Forty-six healthy subjects participated in this study. Thirty subjects participated in the main comparison experiment, and the other sixteen subjects participated in the experiment to obtain natural variation in motor-evoked potentials. The facilitatory effects were compared between intermittent TBS (iTBS) and QPS5, and the inhibitory effects were compared between continuous TBS (cTBS) and QPS50. The motor-evoked potential amplitudes elicited by transcranial magnetic stimulation over the primary motor cortex were measured before the intervention and every 5 min after the intervention for 1 h. To investigate the interindividual variability, the responder/nonresponder/opposite-responder rates were also analyzed. RESULTS The facilitatory effects of QPS5 were greater than those of iTBS, and the inhibitory effects of QPS50 were much stronger than those of cTBS. The responder rate of QPS was significantly higher than that of TBS. QPS had a smaller number of opposite responders than TBS. CONCLUSION QPS is more effective and stable for synaptic plasticity induction than TBS.
Collapse
Affiliation(s)
- Amanda Tiksnadi
- Department of Neurology, Fukushima Medical University, Fukushima, Japan; Department of Neurology, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia.
| | - Takenobu Murakami
- Department of Neurology, Fukushima Medical University, Fukushima, Japan; Department of Neurology, Tottori Prefectural Kousei Hospital, Tottori, Japan
| | - Winnugroho Wiratman
- Department of Neurology, Fukushima Medical University, Fukushima, Japan; Department of Neurology, Faculty of Medicine, Universitas Indonesia, Cipto Mangunkusumo Hospital, Jakarta, Indonesia
| | | | - Yoshikazu Ugawa
- Department of Neurology, Fukushima Medical University, Fukushima, Japan; Department of Human Neurophysiology, Fukushima Medical University, Fukushima, Japan
| |
Collapse
|
16
|
Trébuchon A, Liégeois-Chauvel C, Gonzalez-Martinez JA, Alario FX. Contributions of electrophysiology for identifying cortical language systems in patients with epilepsy. Epilepsy Behav 2020; 112:107407. [PMID: 33181892 DOI: 10.1016/j.yebeh.2020.107407] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Revised: 08/10/2020] [Accepted: 08/10/2020] [Indexed: 11/26/2022]
Abstract
A crucial element of the surgical treatment of medically refractory epilepsy is to delineate cortical areas that must be spared in order to avoid clinically relevant neurological and neuropsychological deficits postoperatively. For each patient, this typically necessitates determining the language lateralization between hemispheres and language localization within hemisphere. Understanding cortical language systems is complicated by two primary challenges: the extent of the neural tissue involved and the substantial variability across individuals, especially in pathological populations. We review the contributions made through the study of electrophysiological activity to address these challenges. These contributions are based on the techniques of magnetoencephalography (MEG), intracerebral recordings, electrical-cortical stimulation (ECS), and the electrovideo analyses of seizures and their semiology. We highlight why no single modality alone is adequate to identify cortical language systems and suggest avenues for improving current practice.
Collapse
Affiliation(s)
- Agnès Trébuchon
- Aix-Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France
| | - Catherine Liégeois-Chauvel
- Aix-Marseille Univ, INSERM, INS, Inst Neurosci Syst, Marseille, France; Department of Neurological Surgery, School of Medicine, University of Pittsburgh (PA), USA
| | | | - F-Xavier Alario
- Department of Neurological Surgery, School of Medicine, University of Pittsburgh (PA), USA; Aix-Marseille Univ, CNRS, LPC, Marseille, France.
| |
Collapse
|
17
|
Saltzman DI, Myers EB. Neural Representation of Articulable and Inarticulable Novel Sound Contrasts: The Role of the Dorsal Stream. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2020; 1:339-364. [PMID: 35784619 PMCID: PMC9248853 DOI: 10.1162/nol_a_00016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2019] [Accepted: 05/23/2020] [Indexed: 06/15/2023]
Abstract
The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of debate for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the "dorsal stream") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After participants reached comparable levels of proficiency with the two sets of stimuli, activation was measured in fMRI as they passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left- and right-hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding articulable (speech) sounds than for the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation during novel sound learning.
Collapse
|
18
|
Dricu M, Frühholz S. A neurocognitive model of perceptual decision-making on emotional signals. Hum Brain Mapp 2020; 41:1532-1556. [PMID: 31868310 PMCID: PMC7267943 DOI: 10.1002/hbm.24893] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2019] [Revised: 11/18/2019] [Accepted: 11/29/2019] [Indexed: 01/09/2023] Open
Abstract
Humans make various kinds of decisions about which emotions they perceive from others. Although it might seem like a split-second phenomenon, deliberating over which emotions we perceive unfolds across several stages of decisional processing. Neurocognitive models of general perception postulate that our brain first extracts sensory information about the world, then integrates these data into a percept, and lastly interprets it. The aim of the present study was to build an evidence-based neurocognitive model of perceptual decision-making on others' emotions. We conducted a series of meta-analyses of neuroimaging data spanning 30 years on the explicit evaluations of others' emotional expressions. We find that emotion perception is rather an umbrella term for various perception paradigms, each with distinct neural structures that underlie task-related cognitive demands. Furthermore, the left amygdala was responsive across all classes of decisional paradigms, regardless of task-related demands. Based on these observations, we propose a neurocognitive model that outlines the information flow in the brain needed for a successful evaluation of and decisions on other individuals' emotions. HIGHLIGHTS: Emotion classification involves heterogeneous perception and decision-making tasks. Decision-making processes on emotions are rarely covered by existing emotion theories. We propose an evidence-based neurocognitive model of decision-making on emotions. Bilateral brain processes support nonverbal decisions, left-hemispheric processes verbal decisions. The left amygdala is involved in any kind of decision on emotions.
Collapse
Affiliation(s)
- Mihai Dricu
- Department of Psychology, University of Bern, Bern, Switzerland
| | - Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Center Zurich (ZNZ), University of Zurich and ETH Zurich, Zurich, Switzerland
- Center for Integrative Human Physiology (ZIHP), University of Zurich, Zurich, Switzerland
| |
Collapse
|
19
|
Primary motor cortex and phonological recoding: A TMS-EMG study. Neuropsychologia 2020; 139:107368. [PMID: 32014451 DOI: 10.1016/j.neuropsychologia.2020.107368] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2019] [Revised: 12/18/2019] [Accepted: 01/31/2020] [Indexed: 01/09/2023]
Abstract
Since the 1960s, evidence from healthy participants and brain-damaged patients, neuroimaging and non-invasive brain stimulation studies has specified the neurofunctional architecture of the short-term memory (STM) system, supporting the temporary retention of a limited amount of verbal material. Auditory-verbal, later termed Phonological (Ph) STM or Phonological Loop, comprises two sub-components: i) the main storage system, the Phonological Short-Term Store (PhSTS), to which auditory verbal stimuli have direct access and where phonologically coded information is retained for a few seconds; ii) a Rehearsal Process (REH), which actively maintains the trace held in the PhSTS, preventing its decay, and conveys visual verbal material to the PhSTS, after the process of Phonological Recoding (PhREC, or Grapheme-to-Phoneme Conversion) has taken place. PhREC converts visuo-verbal graphemic representations into phonological ones. The neural correlates of PhSTM include two discrete regions in the left hemisphere: the temporo-parietal junction (PhSTS) and the inferior frontal gyrus in the premotor cortex (REH). The neural basis of PhREC has been much less investigated. A few single case studies of patients made anarthric by focal or degenerative cortical damage, who show a pattern of impairment indicative of a deficit of PhREC, sparing the REH process, suggest that the primary motor cortex (M1) might be involved. To test this hypothesis in healthy participants with a neurophysiological approach, we measured the corticospinal excitability of M1, by means of Transcranial Magnetic Stimulation (TMS)-induced Motor Evoked Potentials (MEPs), during the execution of phonological judgements on auditorily vs. visually presented words (Experiment #1). Crucially, these phonological tasks involve REH, while PhREC is required only with visual presentation. Results show MEPs with larger amplitude when stimuli are presented visually. Task difficulty does not account for this difference, and the result is specific to linguistic stimuli; indeed, visual and auditory stimuli that cannot be verbalized lead to different behavioral and neurophysiological patterns (Experiment #2). The increase of corticospinal excitability when words are presented visually can then be interpreted as an indication of the involvement of M1 in PhREC. The present findings elucidate the neural correlates of PhREC, suggesting an involvement of the peripheral motor system in its activity.
Collapse
|
20
|
Grabski K, Sato M. Adaptive phonemic coding in the listening and speaking brain. Neuropsychologia 2020; 136:107267. [DOI: 10.1016/j.neuropsychologia.2019.107267] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2019] [Revised: 10/23/2019] [Accepted: 11/15/2019] [Indexed: 10/25/2022]
|
21
|
Kowialiewski B, Van Calster L, Attout L, Phillips C, Majerus S. Neural Patterns in Linguistic Cortices Discriminate the Content of Verbal Working Memory. Cereb Cortex 2019; 30:2997-3014. [PMID: 31813984 DOI: 10.1093/cercor/bhz290] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2019] [Revised: 09/16/2019] [Accepted: 06/17/2019] [Indexed: 01/11/2023] Open
Abstract
An influential theoretical account of working memory (WM) considers that WM is based on direct activation of long-term memory knowledge. While there is empirical support for this position in the visual WM domain, direct evidence is scarce in the verbal WM domain. This question is critical for models of verbal WM, as whether short-term maintenance of verbal information relies on direct activation within the long-term linguistic knowledge base is still debated. In this study, we examined the extent to which short-term maintenance of lexico-semantic knowledge relies on neural activation patterns in linguistic cortices, using a fast-encoding running span task for word and nonword stimuli that minimized strategic encoding mechanisms. Multivariate analyses showed specific neural patterns for the encoding and maintenance of word versus nonword stimuli. These patterns were no longer detectable when participants were instructed to stop maintaining the memoranda. The patterns involved specific regions within the dorsal and ventral pathways, which are considered to support phonological and semantic processing to various degrees. This study provides novel evidence for a role of linguistic cortices in the representation of long-term memory linguistic knowledge during WM processing.
Collapse
Affiliation(s)
- Benjamin Kowialiewski
- University of Liège, Liège, Belgium; Fund for Scientific Research-F.R.S.-FNRS, Brussels, Belgium
| | - Laurens Van Calster
- University of Liège, Liège, Belgium; University of Geneva, Geneva, Switzerland
| | | | - Christophe Phillips
- University of Liège, Liège, Belgium; Fund for Scientific Research-F.R.S.-FNRS, Brussels, Belgium
| | - Steve Majerus
- University of Liège, Liège, Belgium; Fund for Scientific Research-F.R.S.-FNRS, Brussels, Belgium
| |
Collapse
|
22
|
Longcamp M, Hupé JM, Ruiz M, Vayssière N, Sato M. Shared premotor activity in spoken and written communication. BRAIN AND LANGUAGE 2019; 199:104694. [PMID: 31586790 DOI: 10.1016/j.bandl.2019.104694] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/07/2018] [Revised: 09/12/2019] [Accepted: 09/15/2019] [Indexed: 06/10/2023]
Abstract
The aim of the present study was to uncover a possible common neural organizing principle in spoken and written communication, through the coupling of perceptual and motor representations. In order to identify possible shared neural substrates for processing the basic units of spoken and written language, a sparse sampling fMRI acquisition protocol was performed on the same subjects in two experimental sessions with similar sets of letters being read and written and of phonemes being heard and orally produced. We found evidence of common premotor regions activated in spoken and written language, both in perception and in production. The location of those brain regions was confined to the left lateral and medial frontal cortices, at locations corresponding to the premotor cortex, inferior frontal cortex and supplementary motor area. Interestingly, the speaking and writing tasks also appeared to be controlled by largely overlapping networks, possibly indicating some domain general cognitive processing. Finally, the spatial distribution of individual activation peaks further showed more dorsal and more left-lateralized premotor activations in written than in spoken language.
Collapse
Affiliation(s)
| | - Jean-Michel Hupé
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France
| | - Mathieu Ruiz
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France
| | - Nathalie Vayssière
- CNRS, Université de Toulouse Paul Sabatier, CerCo, Toulouse, France; Toulouse Mind and Brain Institute, France
| | - Marc Sato
- CNRS, Aix-Marseille Univ, LPL, Aix-en-Provence, France
| |
Collapse
|
23
|
Pflug A, Gompf F, Muthuraman M, Groppa S, Kell CA. Differential contributions of the two human cerebral hemispheres to action timing. eLife 2019; 8:e48404. [PMID: 31697640 PMCID: PMC6837842 DOI: 10.7554/elife.48404] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/13/2019] [Accepted: 10/08/2019] [Indexed: 01/22/2023] Open
Abstract
Rhythmic actions benefit from synchronization with external events. Auditory-paced finger tapping studies indicate that the two cerebral hemispheres preferentially control different rhythms. It is unclear whether left-lateralized processing of faster rhythms and right-lateralized processing of slower rhythms is based on hemispheric timing differences that arise in the motor or sensory system, or whether the asymmetry results from lateralized sensorimotor interactions. We measured fMRI and MEG during symmetric finger tapping, in which fast tapping was defined as auditory-motor synchronization at 2.5 Hz. Slow tapping corresponded to tapping to every fourth auditory beat (0.625 Hz). We demonstrate that the left auditory cortex preferentially represents the relatively fast rhythm in an amplitude modulation of low beta oscillations, while the right auditory cortex additionally represents the internally generated slower rhythm. We show that coupling of auditory-motor beta oscillations supports building a metric structure. Our findings reveal a strong contribution of sensory cortices to hemispheric specialization in action control.
Collapse
Affiliation(s)
- Anja Pflug
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
| | - Florian Gompf
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
| | - Muthuraman Muthuraman
- Movement Disorders and Neurostimulation, Biomedical Statistics and Multimodal Signal Processing Unit, Department of Neurology, Johannes Gutenberg University, Mainz, Germany
| | - Sergiu Groppa
- Movement Disorders and Neurostimulation, Biomedical Statistics and Multimodal Signal Processing Unit, Department of Neurology, Johannes Gutenberg University, Mainz, Germany
| | - Christian Alexander Kell
- Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
| |
Collapse
|
24
|
Gehrig J, Michalareas G, Forster MT, Lei J, Hok P, Laufs H, Senft C, Seifert V, Schoffelen JM, Hanslmayr S, Kell CA. Low-Frequency Oscillations Code Speech during Verbal Working Memory. J Neurosci 2019; 39:6498-6512. [PMID: 31196933 PMCID: PMC6697399 DOI: 10.1523/jneurosci.0018-19.2019] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2019] [Revised: 05/09/2019] [Accepted: 05/10/2019] [Indexed: 11/21/2022] Open
Abstract
The way the human brain represents speech in memory is still unknown. An obvious characteristic of speech is its evolvement over time. During speech processing, neural oscillations are modulated by the temporal properties of the acoustic speech signal, but acquired knowledge on the temporal structure of language also influences speech perception-related brain activity. This suggests that speech could be represented in the temporal domain, a form of representation that the brain also uses to encode autobiographic memories. Empirical evidence for such a memory code is lacking. We investigated the nature of speech memory representations using direct cortical recordings in the left perisylvian cortex during delayed sentence reproduction in female and male patients undergoing awake tumor surgery. Our results reveal that the brain endogenously represents speech in the temporal domain. Temporal pattern similarity analyses revealed that the phase of frontotemporal low-frequency oscillations, primarily in the beta range, represents sentence identity in working memory. The positive relationship between beta power during working memory and task performance suggests that working memory representations benefit from increased phase separation. SIGNIFICANCE STATEMENT Memory is an endogenous source of information based on experience. While neural oscillations encode autobiographic memories in the temporal domain, little is known about their contribution to memory representations of human speech. Our electrocortical recordings in participants who maintain sentences in memory identify the phase of left frontotemporal beta oscillations as the most prominent information carrier of sentence identity. These observations provide evidence for a theoretical model of speech memory representations and explain why interfering with beta oscillations in the left inferior frontal cortex diminishes verbal working memory capacity. The lack of sentence identity coding at the syllabic rate suggests that sentences are represented in memory in a more abstract form compared with speech coding during speech perception and production.
Collapse
Affiliation(s)
- Johannes Gehrig
- Department of Neurology, Goethe University, 60528 Frankfurt, Germany
| | | | | | - Juan Lei
- Department of Neurology, Goethe University, 60528 Frankfurt, Germany
- Institute for Cell Biology and Neuroscience, Goethe University, 60438 Frankfurt, Germany
| | - Pavel Hok
- Department of Neurology, Goethe University, 60528 Frankfurt, Germany
- Department of Neurology, Palacky University and University Hospital Olomouc, 77147 Olomouc, Czech Republic
| | - Helmut Laufs
- Department of Neurology, Goethe University, 60528 Frankfurt, Germany
- Department of Neurology, Christian-Albrechts-University, 24105 Kiel, Germany
| | - Christian Senft
- Department of Neurosurgery, Goethe University, 60528 Frankfurt, Germany
| | - Volker Seifert
- Department of Neurosurgery, Goethe University, 60528 Frankfurt, Germany
| | - Jan-Mathijs Schoffelen
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, 6525 HR Nijmegen, The Netherlands, and
| | - Simon Hanslmayr
- School of Psychology at University of Birmingham, B15 2TT Birmingham, United Kingdom
| | - Christian A Kell
- Department of Neurology, Goethe University, 60528 Frankfurt, Germany,
| |
Collapse
|
25
|
Kral A, Dorman MF, Wilson BS. Neuronal Development of Hearing and Language: Cochlear Implants and Critical Periods. Annu Rev Neurosci 2019; 42:47-65. [DOI: 10.1146/annurev-neuro-080317-061513] [Citation(s) in RCA: 71] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.
Collapse
Affiliation(s)
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical University, 30625 Hannover, Germany
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas 75080, USA
- School of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales 2109, Australia
| | - Michael F. Dorman
- Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona 85287, USA
| | - Blake S. Wilson
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas 75080, USA
- School of Medicine and Pratt School of Engineering, Duke University, Durham, North Carolina 27708, USA
| |
Collapse
|
26
|
Dual-site high-density 4Hz transcranial alternating current stimulation applied over auditory and motor cortical speech areas does not influence auditory-motor mapping. Brain Stimul 2019; 12:775-777. [DOI: 10.1016/j.brs.2019.01.007] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/07/2019] [Accepted: 01/14/2019] [Indexed: 11/20/2022] Open
|
27
|
Buchsbaum BR, D'Esposito M. A sensorimotor view of verbal working memory. Cortex 2019; 112:134-148. [DOI: 10.1016/j.cortex.2018.11.010] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2018] [Revised: 10/09/2018] [Accepted: 11/11/2018] [Indexed: 12/16/2022]
|
28
|
Ogawa R, Kagitani-Shimono K, Matsuzaki J, Tanigawa J, Hanaie R, Yamamoto T, Tominaga K, Hirata M, Mohri I, Taniike M. Abnormal cortical activation during silent reading in adolescents with autism spectrum disorder. Brain Dev 2019; 41:234-244. [PMID: 30448302 DOI: 10.1016/j.braindev.2018.10.013] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/27/2018] [Revised: 09/15/2018] [Accepted: 10/25/2018] [Indexed: 01/05/2023]
Abstract
OBJECTIVE Autism spectrum disorder (ASD) is a developmental disorder characterized by communication deficits and social difficulties, and individuals with ASD frequently exhibit varied levels of language abilities. However, the neurophysiological mechanisms underlying their language deficits remain unclear. To gain insight into the neurophysiological mechanisms of receptive language deficits, we assessed cortical activation patterns in adolescents with ASD during silent word-reading. METHODS We used magnetoencephalography to measure cortical activation during a silent word-reading task in 14 adolescent boys with high-functioning ASD and 17 adolescent boys with typical development (TD). RESULTS Compared with participants with TD, those with ASD exhibited significantly decreased cortical activation in the left middle temporal gyrus, left temporoparietal junction, bilateral superior temporal gyrus, left posterior insula, and right occipitotemporal gyrus, and increased activation in the right anterior insula. Participants with ASD also exhibited a lack of left-lateralization in the central sulcus and abnormal right-lateralization in the anterior insula area. Furthermore, in participants with ASD, we found that abnormal activation of the right central sulcus correlated significantly with lower visual word comprehension scores, and that decreased activation of the right anterior insula correlated significantly with the severity of social interaction difficulties. CONCLUSION Our findings suggest that atypical cortical activation and lateralization in the temporal-frontal area, which is associated with higher-order language processing functions, such as semantic analysis, may play a crucial role in visual word comprehension and social interaction difficulties in adolescents with ASD.
Collapse
Affiliation(s)
- Rei Ogawa
- United Graduate School of Child Development, Osaka University, Osaka, Japan
| | - Kuriko Kagitani-Shimono
- United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan.
| | - Junko Matsuzaki
- United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Junpei Tanigawa
- Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Ryuzo Hanaie
- United Graduate School of Child Development, Osaka University, Osaka, Japan
| | - Tomoka Yamamoto
- Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Koji Tominaga
- United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Masayuki Hirata
- Department of Neurosurgery, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Ikuko Mohri
- United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
| | - Masako Taniike
- United Graduate School of Child Development, Osaka University, Osaka, Japan; Molecular Research Center for Children's Mental Development, Osaka University Graduate School of Medicine, Osaka, Japan; Department of Pediatrics, Osaka University Graduate School of Medicine, Osaka, Japan
| |
Collapse
|
29
|
Power and phase coherence in sensorimotor mu and temporal lobe alpha components during covert and overt syllable production. Exp Brain Res 2018; 237:705-721. [DOI: 10.1007/s00221-018-5447-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Accepted: 11/30/2018] [Indexed: 10/27/2022]
|
30
|
Liebenthal E, Möttönen R. An interactive model of auditory-motor speech perception. BRAIN AND LANGUAGE 2018; 187:33-40. [PMID: 29268943 PMCID: PMC6005717 DOI: 10.1016/j.bandl.2017.12.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/05/2017] [Revised: 10/03/2017] [Accepted: 12/02/2017] [Indexed: 05/30/2023]
Abstract
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream, which connects temporal auditory and frontal-parietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.
Collapse
Affiliation(s)
- Einat Liebenthal
- Department of Psychiatry, Brigham & Women's Hospital, Harvard Medical School, Boston, USA.
| | - Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, UK; School of Psychology, University of Nottingham, Nottingham, UK
| |
Collapse
|
31
|
Thornton D, Harkrider AW, Jenson D, Saltuklaroglu T. Sensorimotor activity measured via oscillations of EEG mu rhythms in speech and non-speech discrimination tasks with and without segmentation demands. BRAIN AND LANGUAGE 2018; 187:62-73. [PMID: 28431691 DOI: 10.1016/j.bandl.2017.03.011] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/28/2016] [Revised: 01/24/2017] [Accepted: 03/31/2017] [Indexed: 06/07/2023]
Abstract
Better understanding of the role of sensorimotor processing in speech and non-speech segmentation can be achieved with more temporally precise measures. Twenty adults made same/different discriminations of speech and non-speech stimulus pairs, with and without segmentation demands. Independent component analysis of 64-channel EEG data revealed clear sensorimotor mu components, with characteristic alpha and beta peaks, localized to premotor regions in 70% of participants. Time-frequency analyses of mu components from accurate trials showed that (1) segmentation tasks elicited greater event-related synchronization immediately following offset of the first stimulus, suggestive of inhibitory activity; (2) strong late event-related desynchronization occurred in all conditions, suggesting that working memory/covert replay contributed substantially to sensorimotor activity in all conditions; and (3) beta desynchronization was stronger for speech than for non-speech stimuli during stimulus presentation, suggesting stronger auditory-motor transforms for speech. Findings support the continued use of oscillatory approaches to help understand segmentation and other cognitive tasks.
Collapse
Affiliation(s)
- David Thornton
- University of Tennessee Health Science Center, United States.
| | | | - David Jenson
- University of Tennessee Health Science Center, United States
| | | |
Collapse
|
32
|
Nuttall HE, Kennedy-Higgins D, Devlin JT, Adank P. Modulation of intra- and inter-hemispheric connectivity between primary and premotor cortex during speech perception. BRAIN AND LANGUAGE 2018; 187:74-82. [PMID: 29397191 DOI: 10.1016/j.bandl.2017.12.002] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/24/2017] [Revised: 10/28/2017] [Accepted: 12/02/2017] [Indexed: 06/07/2023]
Abstract
Primary motor (M1) areas for speech production activate during speech perception. It has been suggested that such activation may be dependent upon modulatory inputs from premotor cortex (PMv). If and how PMv differentially modulates M1 activity during perception of speech that is easy or challenging to understand, however, is unclear. This study aimed to test the link between PMv and M1 during challenging speech perception in two experiments. The first experiment investigated intra-hemispheric connectivity between left hemisphere PMv and left M1 lip area during comprehension of speech under clear and distorted listening conditions. Continuous theta burst stimulation (cTBS) was applied to left PMv in eighteen participants (aged 18-35). Post-cTBS, participants performed a sentence verification task on distorted (imprecisely articulated) and clear speech, whilst also undergoing stimulation of the lip representation in the left M1 to elicit motor evoked potentials (MEPs). In a second, separate experiment, we investigated the role of inter-hemispheric connectivity between right hemisphere PMv and left hemisphere M1 lip area. Dual-coil transcranial magnetic stimulation was applied to right PMv and left M1 lip in fifteen participants (aged 18-35). Results indicated that disruption of PMv during speech perception specifically affects comprehension of distorted speech. Furthermore, our data suggest that listening to distorted speech modulates the balance of intra- and inter-hemispheric interactions, with a larger sensorimotor network implicated during comprehension of distorted speech than when speech perception is optimal. The present results further understanding of PMv-M1 interactions during auditory-motor integration.
Collapse
Affiliation(s)
- Helen E Nuttall
- Department of Psychology, Fylde College, Lancaster University, Lancaster LA1 4YF, UK; Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK.
| | - Dan Kennedy-Higgins
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
| | - Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
| | - Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
| |
Collapse
|
33
|
The Motor Network Reduces Multisensory Illusory Perception. J Neurosci 2018; 38:9679-9688. [PMID: 30249803 DOI: 10.1523/jneurosci.3650-17.2018] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2017] [Revised: 08/28/2018] [Accepted: 09/12/2018] [Indexed: 11/21/2022] Open
Abstract
Observing mouth movements has striking effects on the perception of speech. Any mismatch between sound and mouth movements will result in listeners perceiving illusory consonants (McGurk effect), whereas matching mouth movements assist with the correct recognition of speech sounds. Recent neuroimaging studies have yielded evidence that the motor areas are involved in speech processing, yet their contributions to multisensory illusion remain unclear. Using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) in an event-related design, we aimed to identify the functional roles of the motor network in the occurrence of multisensory illusion in female and male brains. fMRI showed bilateral activation of the inferior frontal gyrus (IFG) in audiovisually incongruent trials. Activity in the left IFG was negatively correlated with occurrence of the McGurk effect. The effective connectivity between the left IFG and the bilateral precentral gyri was stronger in incongruent than in congruent trials. The McGurk effect was reduced in incongruent trials by applying single-pulse TMS to motor cortex (M1) lip areas, indicating that TMS facilitates the left IFG-precentral motor network to reduce the McGurk effect. TMS of the M1 lip areas was effective in reducing the McGurk effect within the specific temporal range from 100 ms before to 200 ms after the auditory onset, and TMS of the M1 foot area did not influence the McGurk effect, suggesting topographical specificity. These results provide direct evidence that the motor network makes specific temporal and topographical contributions to the processing of multisensory integration of speech to avoid illusion. SIGNIFICANCE STATEMENT The human motor network, including the inferior frontal gyrus and primary motor cortex lip area, appears to be involved in speech perception, but the functional contribution to the McGurk effect is unknown.
Functional magnetic resonance imaging revealed that activity in these areas of the motor network increased when the audiovisual stimuli were incongruent, and that the increased activity was negatively correlated with perception of the McGurk effect. Furthermore, applying transcranial magnetic stimulation to the motor areas reduced the McGurk effect. These two observations provide evidence that the motor network contributes to the avoidance of multisensory illusory perception.
Collapse
|
34
|
Panouillères MTN, Möttönen R. Decline of auditory-motor speech processing in older adults with hearing loss. Neurobiol Aging 2018; 72:89-97. [PMID: 30240945 DOI: 10.1016/j.neurobiolaging.2018.07.013] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2017] [Revised: 07/20/2018] [Accepted: 07/20/2018] [Indexed: 10/28/2022]
Abstract
Older adults often experience difficulties in understanding speech, partly because of age-related hearing loss (HL). In young adults, activity of the left articulatory motor cortex is enhanced and it interacts with the auditory cortex via the left-hemispheric dorsal stream during speech processing. Little is known about the effect of aging and age-related HL on this auditory-motor interaction and speech processing in the articulatory motor cortex. It has been proposed that upregulation of the motor system during speech processing could compensate for HL and auditory processing deficits in older adults. Alternatively, age-related auditory deficits could reduce and distort the input from the auditory cortex to the articulatory motor cortex, suppressing recruitment of the motor system during listening to speech. The aim of the present study was to investigate the effects of aging and age-related HL on the excitability of the tongue motor cortex during listening to spoken sentences using transcranial magnetic stimulation and electromyography. Our results show that the excitability of the tongue motor cortex was facilitated during listening to speech in young and older adults with normal hearing. This facilitation was significantly reduced in older adults with HL. These findings suggest a decline of auditory-motor processing of speech in adults with age-related HL.
Collapse
Affiliation(s)
- Muriel T N Panouillères
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Sports Sciences and Human Movement, CIAMS, Université Paris-Sud, Université Paris-Saclay, Orsay, France; UFR Collegium Sciences et Techniques, CIAMS, Université d'Orléans, Orléans, France.
| | - Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Psychology, University of Nottingham, Nottingham, UK
| |
Collapse
|
35
|
Tanigawa J, Kagitani-Shimono K, Matsuzaki J, Ogawa R, Hanaie R, Yamamoto T, Tominaga K, Nabatame S, Mohri I, Taniike M, Ozono K. Atypical auditory language processing in adolescents with autism spectrum disorder. Clin Neurophysiol 2018; 129:2029-2037. [PMID: 29934264 DOI: 10.1016/j.clinph.2018.05.014] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2017] [Revised: 05/01/2018] [Accepted: 05/08/2018] [Indexed: 12/24/2022]
Abstract
OBJECTIVE Individuals with autism spectrum disorder (ASD) often show characteristic differences in auditory processing. To clarify the mechanisms underlying communication impairment in ASD, we examined auditory language processing with both anatomical and functional methods. METHODS We assessed the language abilities of adolescents with ASD and typically developing (TD) adolescents, and analyzed the surface-based morphometric structure between the groups using magnetic resonance imaging. Furthermore, we measured cortical responses to an auditory word comprehension task with magnetoencephalography and performed network-based statistics using the phase locking values. RESULTS We observed no structural differences between the groups. However, the volume of the left ventral central sulcus (vCS) correlated significantly with linguistic scores in ASD. Adolescents with ASD also showed weaker cortical activation in the left vCS and superior temporal sulcus, and these regions showed differential correlations with linguistic scores between the groups. In addition, the ASD group had an atypical gamma band (25-40 Hz) network centered on the left vCS. CONCLUSIONS Adolescents with ASD showed atypical responses on the auditory word comprehension task and functional brain differences. SIGNIFICANCE Our results suggest that phonological processing and gamma band cortical activity play a critical role in auditory language processing-related pathophysiology in adolescents with ASD.
Collapse
Affiliation(s)
- Junpei Tanigawa
- Department of Pediatrics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Kuriko Kagitani-Shimono
- Department of Pediatrics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Developmental Neuroscience, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Junko Matsuzaki
- Molecular Research Center for Children's Mental Development, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Rei Ogawa
- Division of Developmental Neuroscience, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Ryuzo Hanaie
- Molecular Research Center for Children's Mental Development, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Tomoka Yamamoto
- Molecular Research Center for Children's Mental Development, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Koji Tominaga
- Department of Pediatrics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Developmental Neuroscience, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Shin Nabatame
- Department of Pediatrics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Ikuko Mohri
- Department of Pediatrics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Developmental Neuroscience, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Molecular Research Center for Children's Mental Development, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Masako Taniike
- Department of Pediatrics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Division of Developmental Neuroscience, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan; Molecular Research Center for Children's Mental Development, United Graduate School of Child Development, Osaka University, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| | - Keiichi Ozono
- Department of Pediatrics, Osaka University Graduate School of Medicine, 2-2 Yamadaoka, Suita, Osaka 565-0871, Japan.
| |
Collapse
|
36
|
Dietrich S, Hertrich I, Müller-Dahlhaus F, Ackermann H, Belardinelli P, Desideri D, Seibold VC, Ziemann U. Reduced Performance During a Sentence Repetition Task by Continuous Theta-Burst Magnetic Stimulation of the Pre-supplementary Motor Area. Front Neurosci 2018; 12:361. [PMID: 29896086 PMCID: PMC5987029 DOI: 10.3389/fnins.2018.00361] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2018] [Accepted: 05/09/2018] [Indexed: 11/23/2022] Open
Abstract
The pre-supplementary motor area (pre-SMA) is engaged in speech comprehension under difficult circumstances such as poor acoustic signal quality or time-critical conditions. Previous studies found that left pre-SMA is activated when subjects listen to accelerated speech. Here, the functional role of pre-SMA was tested for accelerated speech comprehension by inducing a transient “virtual lesion” using continuous theta-burst stimulation (cTBS). Participants were tested (1) prior to (pre-baseline), (2) 10 min after (test condition for the cTBS effect), and (3) 60 min after stimulation (post-baseline) using a sentence repetition task (formant-synthesized at rates of 8, 10, 12, 14, and 16 syllables/s). Speech comprehension was quantified by the percentage of correctly reproduced speech material. For high speech rates, subjects showed decreased performance after cTBS of pre-SMA. Regarding the error pattern, the number of incorrect words without any semantic or phonological similarity to the target context increased, while related words decreased. Thus, the transient impairment of pre-SMA seems to affect its inhibitory function that normally eliminates erroneous speech material prior to speaking or, in case of perception, prior to encoding into a semantically/pragmatically meaningful message.
Collapse
Affiliation(s)
- Susanne Dietrich
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, Evolutionary Cognition, University of Tübingen, Tübingen, Germany
| | - Ingo Hertrich
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Florian Müller-Dahlhaus
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychiatry and Psychotherapy, University Medical Center of the Johannes Gutenberg University, University of Mainz, Mainz, Germany
| | - Hermann Ackermann
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Paolo Belardinelli
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Debora Desideri
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| | - Verena C Seibold
- Department of Psychology, Evolutionary Cognition, University of Tübingen, Tübingen, Germany
| | - Ulf Ziemann
- Department of Neurology & Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
| |
Collapse
|
37
|
Moseley RL, Pulvermüller F. What can autism teach us about the role of sensorimotor systems in higher cognition? New clues from studies on language, action semantics, and abstract emotional concept processing. Cortex 2018; 100:149-190. [DOI: 10.1016/j.cortex.2017.11.019] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/09/2017] [Revised: 05/17/2017] [Accepted: 11/21/2017] [Indexed: 01/08/2023]
|
38
|
Rampinini AC, Handjaras G, Leo A, Cecchetti L, Ricciardi E, Marotta G, Pietrini P. Functional and spatial segregation within the inferior frontal and superior temporal cortices during listening, articulation imagery, and production of vowels. Sci Rep 2017; 7:17029. [PMID: 29208951 PMCID: PMC5717247 DOI: 10.1038/s41598-017-17314-0] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2017] [Accepted: 11/24/2017] [Indexed: 11/09/2022] Open
Abstract
Classical models of language localize speech perception in the left superior temporal and production in the inferior frontal cortex. Nonetheless, neuropsychological, structural and functional studies have questioned such subdivision, suggesting an interwoven organization of the speech function within these cortices. We tested whether sub-regions within frontal and temporal speech-related areas retain specific phonological representations during both perception and production. Using functional magnetic resonance imaging and multivoxel pattern analysis, we showed functional and spatial segregation across the left fronto-temporal cortex during listening, imagery and production of vowels. In accordance with classical models of language and evidence from functional studies, the inferior frontal and superior temporal cortices discriminated among perceived and produced vowels, respectively, while also engaging in the non-classical, alternative function, i.e. perception in the inferior frontal and production in the superior temporal cortex. Crucially, though, contiguous and non-overlapping sub-regions within these hubs performed either the classical or non-classical function, the latter also representing non-linguistic sounds (i.e., pure tones). Extending previous results and in line with integration theories, our findings not only demonstrate that sensitivity to speech listening exists in production-related regions and vice versa, but they also suggest that the nature of such interwoven organisation is built upon low-level perception.
Collapse
Affiliation(s)
| | | | - Andrea Leo
- IMT School for Advanced Studies, Lucca, 55100, Italy
| | | | | | - Giovanna Marotta
- Department of Philology, Literature and Linguistics, University of Pisa, Pisa, 56100, Italy
| | | |
Collapse
|
39
|
The cortical dynamics of speaking: Lexical and phonological knowledge simultaneously recruit the frontal and temporal cortex within 200 ms. Neuroimage 2017; 163:206-219. [PMID: 28943413 DOI: 10.1016/j.neuroimage.2017.09.041] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2017] [Revised: 08/20/2017] [Accepted: 09/20/2017] [Indexed: 11/27/2022] Open
Abstract
Language production models typically assume that retrieving a word for articulation is a sequential process with substantial functional delays between conceptual, lexical, phonological and motor processing, respectively. Nevertheless, explicit evidence contrasting the spatiotemporal dynamics between different word production components is scarce. Here, using anatomically constrained magnetoencephalography during overt meaningful speech production, we explore the speed with which lexico-semantic versus acoustic-articulatory information of a to-be-uttered word first becomes neurophysiologically manifest in the cerebral cortex. We demonstrate early modulations of brain activity by the lexical frequency of a word in the temporal cortex and the left inferior frontal gyrus, simultaneously with activity in the motor and the posterior superior temporal cortex reflecting articulatory-acoustic phonological features (+LABIAL vs. +CORONAL) of the word-initial speech sounds (e.g., Monkey vs. Donkey). The specific nature of the spatiotemporal pattern correlating with a word's frequency and initial phoneme demonstrates that, in the course of speech planning, lexico-semantic and phonological-articulatory processes emerge together rapidly, drawing in parallel on temporal and frontal cortex. This novel finding calls for revisions of current brain language theories of word production.
Collapse
|
40
|
Hallett M, Di Iorio R, Rossini PM, Park JE, Chen R, Celnik P, Strafella AP, Matsumoto H, Ugawa Y. Contribution of transcranial magnetic stimulation to assessment of brain connectivity and networks. Clin Neurophysiol 2017; 128:2125-2139. [PMID: 28938143 DOI: 10.1016/j.clinph.2017.08.007] [Citation(s) in RCA: 109] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2016] [Revised: 07/31/2017] [Accepted: 08/12/2017] [Indexed: 01/01/2023]
Abstract
The goal of this review is to show how transcranial magnetic stimulation (TMS) techniques can make a contribution to the study of brain networks. Brain networks are fundamental in understanding how the brain operates. Effects on remote areas can be directly observed or identified after a period of stimulation, and each section of this review will discuss one method. The EEG response evoked by TMS is called the TMS-evoked potential (TEP). A conditioning TMS can influence the effect of a test TMS given over the motor cortex. A disynaptic connection can also be tested by assessing the effect of a pre-conditioning stimulus on the conditioning-test pair. Basal ganglia-cortical relationships can be assessed using electrodes placed in the process of deep brain stimulation therapy. Cerebellar-cortical relationships can be determined using TMS over the cerebellum. Remote effects of TMS on the brain can be found as well using neuroimaging, including both positron emission tomography (PET) and functional magnetic resonance imaging (fMRI). The methods complement each other since they give different views of brain networks, and it is often valuable to use more than one technique to achieve converging evidence. The final product of this type of work is to show how information is processed and transmitted in the brain.
Collapse
Affiliation(s)
- Mark Hallett
- National Institute of Neurological Disorders and Stroke, NIH, Bethesda, MD, USA.
| | - Riccardo Di Iorio
- Department of Geriatrics, Institute of Neurology, Neuroscience and Orthopedics, Catholic University, Policlinic A. Gemelli Foundation, Rome, Italy
| | - Paolo Maria Rossini
- Department of Geriatrics, Institute of Neurology, Neuroscience and Orthopedics, Catholic University, Policlinic A. Gemelli Foundation, Rome, Italy; Brain Connectivity Laboratory, IRCCS San Raffaele Pisana, Rome, Italy
| | - Jung E Park
- National Institute of Neurological Disorders and Stroke, NIH, Bethesda, MD, USA; Department of Neurology, Dongguk University Ilsan Hospital, Goyang, Republic of Korea
| | - Robert Chen
- Krembil Research Institute, University of Toronto, Toronto, Canada; Department of Medicine (Neurology), University of Toronto, Toronto, Canada
| | - Pablo Celnik
- Department of Physical Medicine and Rehabilitation, Johns Hopkins School of Medicine, USA
| | - Antonio P Strafella
- Krembil Research Institute, University of Toronto, Toronto, Canada; Morton and Gloria Shulman Movement Disorder Unit & E.J. Safra Parkinson Disease Program, Toronto Western Hospital, UHN, Canada; Research Imaging Centre, Campbell Family Mental Health Research Institute, CAMH, University of Toronto, Ontario, Canada
| | | | - Yoshikazu Ugawa
- Department of Neurology, School of Medicine, Fukushima Medical University, Japan; Fukushima Global Medical Science Center, Advanced Clinical Research Center, Fukushima Medical University, Japan
| |
Collapse
|
41
|
Pulvermüller F. Neural reuse of action perception circuits for language, concepts and communication. Prog Neurobiol 2017; 160:1-44. [PMID: 28734837 DOI: 10.1016/j.pneurobio.2017.07.001] [Citation(s) in RCA: 124] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2017] [Revised: 05/12/2017] [Accepted: 07/13/2017] [Indexed: 10/19/2022]
Abstract
Neurocognitive and neurolinguistic theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity along with action- and perception-induced correlation of neuronal activity co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and connections between, APCs. The network models and, in particular, the concept of distributionally-specific circuits, can account for some previously not well understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically.
Collapse
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy & Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany; Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany.
| |
Collapse
|
42
|
Schomers MR, Garagnani M, Pulvermüller F. Neurocomputational Consequences of Evolutionary Connectivity Changes in Perisylvian Language Cortex. J Neurosci 2017; 37:3045-3055. [PMID: 28193685 PMCID: PMC5354338 DOI: 10.1523/jneurosci.2693-16.2017] [Citation(s) in RCA: 40] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2016] [Revised: 12/20/2016] [Accepted: 01/11/2017] [Indexed: 01/07/2023] Open
Abstract
The human brain sets itself apart from that of its primate relatives by specific neuroanatomical features, especially the strong linkage of left perisylvian language areas (frontal and temporal cortex) by way of the arcuate fasciculus (AF). AF connectivity has been shown to correlate with verbal working memory-a specifically human trait providing the foundation for language abilities-but a mechanistic explanation of any related causal link between anatomical structure and cognitive function is still missing. Here, we provide a possible explanation and link, by using neurocomputational simulations in neuroanatomically structured models of the perisylvian language cortex. We compare networks mimicking key features of cortical connectivity in monkeys and humans, specifically the presence of relatively stronger higher-order "jumping links" between nonadjacent perisylvian cortical areas in the latter, and demonstrate that the emergence of working memory for syllables and word forms is a functional consequence of this structural evolutionary change. We also show that a mere increase of learning time is not sufficient, but that this specific structural feature, which entails higher connectivity degree of relevant areas and shorter sensorimotor path length, is crucial. These results offer a better understanding of specifically human anatomical features underlying the language faculty and their evolutionary selection advantage. SIGNIFICANCE STATEMENT: Why do humans have superior language abilities compared to primates? Recently, a uniquely human neuroanatomical feature has been demonstrated in the strength of the arcuate fasciculus (AF), a fiber pathway interlinking the left-hemispheric language areas. Although AF anatomy has been related to linguistic skills, an explanation of how this fiber bundle may support language abilities is still missing.
We use neuroanatomically structured computational models to investigate the consequences of evolutionary changes in language area connectivity and demonstrate that the human-specific higher connectivity degree and comparatively shorter sensorimotor path length implicated by the AF entail emergence of verbal working memory, a prerequisite for language learning. These results offer a better understanding of specifically human anatomical features for language and their evolutionary selection advantage.
Collapse
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
| | - Max Garagnani
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany
- Centre for Robotics and Neural Systems, University of Plymouth, Plymouth PL4 8AA, United Kingdom
- Department of Computing, Goldsmiths, University of London, London SE14 6NW, United Kingdom
| | - Friedemann Pulvermüller
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
| |
Collapse
|
43
|
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. [PMID: 27821280 DOI: 10.1016/j.bandl.2016.10.004] [Citation(s) in RCA: 126] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2016] [Accepted: 10/24/2016] [Indexed: 06/06/2023]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.
Collapse
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom.
| | - Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom
| | - Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
| |
Collapse
|
44
|
Nuttall HE, Kennedy-Higgins D, Devlin JT, Adank P. The role of hearing ability and speech distortion in the facilitation of articulatory motor cortex. Neuropsychologia 2016; 94:13-22. [PMID: 27884757 DOI: 10.1016/j.neuropsychologia.2016.11.016] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/11/2016] [Revised: 11/18/2016] [Accepted: 11/20/2016] [Indexed: 11/15/2022]
Abstract
Excitability of articulatory motor cortex is facilitated when listening to speech in challenging conditions. Beyond this, however, we have little knowledge of what listener-specific and speech-specific factors engage articulatory facilitation during speech perception. For example, it is unknown whether speech motor activity is independent or dependent on the form of distortion in the speech signal. It is also unknown if speech motor facilitation is moderated by hearing ability. We investigated these questions in two experiments. We applied transcranial magnetic stimulation (TMS) to the lip area of primary motor cortex (M1) in young, normally hearing participants to test if lip M1 is sensitive to the quality (Experiment 1) or quantity (Experiment 2) of distortion in the speech signal, and if lip M1 facilitation relates to the hearing ability of the listener. Experiment 1 found that lip motor evoked potentials (MEPs) were larger during perception of motor-distorted speech that had been produced using a tongue depressor, and during perception of speech presented in background noise, relative to natural speech in quiet. Experiment 2 did not find evidence of motor system facilitation when speech was presented in noise at signal-to-noise ratios where speech intelligibility was at 50% or 75%, which were significantly less severe noise levels than used in Experiment 1. However, there was a significant interaction between noise condition and hearing ability, which indicated that when speech stimuli were correctly classified at 50%, speech motor facilitation was observed in individuals with better hearing, whereas individuals with relatively worse but still normal hearing showed more activation during perception of clear speech. These findings indicate that the motor system may be sensitive to the quantity, but not quality, of degradation in the speech signal. 
Data support the notion that motor cortex complements auditory cortex during speech perception, and point to a role for the motor cortex in compensating for differences in hearing ability.
Collapse
Affiliation(s)
- Helen E Nuttall
- Department of Psychology, Lancaster University, Lancaster LA1 4YW, UK; Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK.
| | - Daniel Kennedy-Higgins
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
| | - Joseph T Devlin
- Department of Experimental Psychology, University College London, 26 Bedford Way, London WC1H 0AP, UK
| | - Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, UK
| |
Collapse
|
45
|
Schomers MR, Pulvermüller F. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review. Front Hum Neurosci 2016; 10:435. [PMID: 27708566 PMCID: PMC5030253 DOI: 10.3389/fnhum.2016.00435] [Citation(s) in RCA: 74] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2016] [Accepted: 08/15/2016] [Indexed: 11/21/2022] Open
Abstract
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question of a causal role of sensorimotor cortex in speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
Collapse
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| | - Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
| |
Collapse
|
46
|
Kell CA, Darquea M, Behrens M, Cordani L, Keller C, Fuchs S. Phonetic detail and lateralization of reading-related inner speech and of auditory and somatosensory feedback processing during overt reading. Hum Brain Mapp 2016; 38:493-508. [PMID: 27622923 DOI: 10.1002/hbm.23398] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2016] [Revised: 08/29/2016] [Accepted: 08/30/2016] [Indexed: 11/07/2022] Open
Abstract
Phonetic detail and lateralization of inner speech during covert sentence reading as well as overt reading in 32 right-handed healthy participants undergoing 3T fMRI were investigated. The number of voiceless and voiced consonants in the processed sentences was systematically varied. Participants listened to sentences, read them covertly, silently mouthed them while reading, and read them overtly. Condition comparisons allowed for the study of effects of externally versus self-generated auditory input and of somatosensory feedback related to or independent of voicing. In every condition, increased voicing modulated bilateral voice-selective regions in the superior temporal sulcus without any lateralization. The enhanced temporal modulation and/or higher spectral frequencies of sentences rich in voiceless consonants induced left-lateralized activation of phonological regions in the posterior temporal lobe, regardless of condition. These results provide evidence that inner speech during reading codes detail as fine as consonant voicing. Our findings suggest that the fronto-temporal internal loops underlying inner speech target different temporal regions. These regions differ in their sensitivity to inner or overt acoustic speech features. More slowly varying acoustic parameters are represented more anteriorly and bilaterally in the temporal lobe while quickly changing acoustic features are processed in more posterior left temporal cortices. Furthermore, processing of external auditory feedback during overt sentence reading was sensitive to consonant voicing only in the left superior temporal cortex. Voicing did not modulate left-lateralized processing of somatosensory feedback during articulation or bilateral motor processing. This suggests voicing is primarily monitored in the auditory rather than in the somatosensory feedback channel. Hum Brain Mapp 38:493-508, 2017. © 2016 Wiley Periodicals, Inc.
Collapse
Affiliation(s)
- Christian A Kell
- Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
| | - Maritza Darquea
- Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
| | - Marion Behrens
- Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
| | - Lorenzo Cordani
- Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
| | - Christian Keller
- Brain Imaging Center, Frankfurt, 60598, Germany; Department of Neurology, Goethe University Frankfurt, Schleusenweg 2-16, Frankfurt, 60598, Germany
| | - Susanne Fuchs
- Center for General Linguistics, Schuetzenstrasse 18, Berlin, 10117, Germany
| |
Collapse
|
47
|
Hertrich I, Dietrich S, Ackermann H. The role of the supplementary motor area for speech and language processing. Neurosci Biobehav Rev 2016; 68:602-610. [PMID: 27343998 DOI: 10.1016/j.neubiorev.2016.06.030] [Citation(s) in RCA: 177] [Impact Index Per Article: 19.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2015] [Revised: 06/17/2016] [Accepted: 06/21/2016] [Indexed: 01/23/2023]
Abstract
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review paper is to summarize more recent work, suggesting that the SMA has various superordinate control functions during speech communication and language reception, which is particularly relevant in case of increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (SMA proper) whereas the anterior part (pre-SMA) is involved in higher-order cognitive control mechanisms. In analogy to motor triggering functions of the SMA proper, the pre-SMA seems to manage procedural aspects of cognitive processing. These latter functions, among others, comprise attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. Regarding language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context-tracking.
Collapse
Affiliation(s)
- Ingo Hertrich
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
| | - Susanne Dietrich
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
| | - Hermann Ackermann
- Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany
| |
Collapse
|
48
|
Keller C, Kell CA. Asymmetric intra- and interhemispheric interactions during covert and overt sentence reading. Neuropsychologia 2016; 93:448-465. [PMID: 27055948 DOI: 10.1016/j.neuropsychologia.2016.04.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2015] [Revised: 04/01/2016] [Accepted: 04/03/2016] [Indexed: 01/15/2023]
Abstract
Covert and overt sentence reading evoke lateralized activations in overall bihemispheric networks. We assumed that the study of functional connectivity may reveal underlying principles of functional lateralization. Left-lateralized activations could relate to stronger reading-related modulation of intrahemispheric functional connectivity in the left than the right hemisphere. Alternatively, left-lateralization could result from suppression of contralateral processing and thus reflect asymmetric interhemispheric interactions. To address this issue, this functional MRI study investigated the regional lateralization of covert and overt German sentence reading in 39 healthy participants. Further, it revealed the modulation of the lateralized brain regions' functional connectivity and their contralateral homotopes by covert and overt reading (psychophysiological interactions). Left-lateralization during covert reading was associated with stronger intrahemispheric coupling particularly in the left dorsal stream rather than with suppression of contralateral processing. Lateralization during overt sentence reading instead went along with additional recruitment of right perisylvian cortices involved in articulation by asymmetric positive heterotopic interhemispheric interactions. Given the paucity of interhemispheric anti-correlations with homotopic regions, functional lateralization is likely a consequence of a task-dependent interplay between asymmetric positive intra- and interhemispheric coupling.
Collapse
Affiliation(s)
- Christian Keller
- Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
| | - Christian A Kell
- Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany.
| |
Collapse
|
49
|
Nuttall HE, Kennedy-Higgins D, Hogan J, Devlin JT, Adank P. The effect of speech distortion on the excitability of articulatory motor cortex. Neuroimage 2016; 128:218-226. [DOI: 10.1016/j.neuroimage.2015.12.038] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2015] [Revised: 10/30/2015] [Accepted: 12/21/2015] [Indexed: 11/30/2022] Open
|
50
|
[Functional imaging of physiological and pathological speech production]. Nervenarzt 2015; 85:701-7. [PMID: 24832012 DOI: 10.1007/s00115-013-3996-0] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/11/2022]
Abstract
Numerous neurological patients suffer from speech and language disorders but the underlying pathomechanisms are not well understood. Imaging studies on speech production disorders lag behind aphasiological research on speech perception, probably due to worries concerning movement artifacts. Meanwhile, modern neuroimaging techniques allow investigation of these processes. This article summarizes the insights from neuroimaging on physiological speech production and also on the pathomechanisms underlying Parkinson's disease and developmental stuttering.
Collapse
|