1
Almudhi A, Gabr S. Green tea consumption and the management of adrenal stress hormones in adolescents who stutter. Biomed Rep 2022; 16:32. [PMID: 35251619 PMCID: PMC8889529 DOI: 10.3892/br.2022.1515]
Abstract
Green tea and its polyphenolic compounds have been shown to exert positive effects in individuals with psychological disorders. The protective role of green tea against stuttering and its related consequences (depression, anxiety and stress) was evaluated in adolescents with moderate stuttering (MS). A total of 60 adolescents aged 12-18 years were enrolled in this study. The Stuttering Severity Instrument, 4th Edition was used to estimate the severity of stuttering, and participants were classified into two groups: a normal healthy group (n=30) and a MS group (n=30). The Depression Anxiety Stress Scale and General Health Questionnaire were used to estimate the degree of depression, anxiety and stress, as well as general mental health. The physiological profile of stress hormones, as a measure of the response to green tea, was also assessed. The adrenal stress hormones cortisol, dehydroepiandrosterone (DHEA), adrenocorticotropic hormone (ACTH) and corticosterone, as well as the cortisol:DHEA ratio, were assayed. In addition, the constituent green tea polyphenols and their quantities were determined using liquid chromatography. Decaffeinated green tea was administered at six cups/day for 6 weeks, and this significantly improved the depression, anxiety, stress and mental health consequences associated with stuttering in adolescents. In addition, increased consumption of green tea significantly reduced elevated levels of the adrenal stress hormones cortisol, DHEA, ACTH and corticosterone, and increased the cortisol:DHEA ratio, in both the controls and the adolescents who stuttered. The data showed that drinking six cups of decaffeinated green tea, enriched in catechins (1,580 mg) and other related polyphenols, was sufficient to improve the mental health consequences associated with stuttering in younger individuals.
Affiliation(s)
- Abdulaziz Almudhi
- Department of Medical Rehabilitation Sciences, College of Applied Medical Sciences, King Khalid University, Abha 61481, Saudi Arabia
- Sami Gabr
- Department of Anatomy and Embryology, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
2
Tamura S, Hirose N, Mitsudo T, Hoaki N, Nakamura I, Onitsuka T, Hirano Y. Multi-modal imaging of the auditory-larynx motor network for voicing perception. Neuroimage 2022; 251:118981. [PMID: 35150835 DOI: 10.1016/j.neuroimage.2022.118981]
Abstract
Voicing is one of the most important characteristics of phonetic speech sounds. Despite its importance, the mechanisms of voicing perception remain largely unknown. To explore the auditory-motor networks associated with voicing perception, we first examined the brain regions that showed common activity during voicing production and perception using functional magnetic resonance imaging. The results indicated that the auditory and speech motor areas, together with the operculum parietale 4 (OP4), were activated during both voicing production and perception. Second, we used magnetoencephalography to examine the dynamic functional connectivity of the auditory-motor networks during a perceptual categorization task with /da/-/ta/ continuum stimuli varying in voice onset time (VOT) from 0 to 40 ms in 10 ms steps. Significant functional connectivity from the auditory cortical regions to the larynx motor area via OP4 was observed only when perceiving the stimulus with a VOT of 30 ms. In addition, regional activity analysis showed that the neural representation of VOT in the auditory cortical regions was mostly correlated with categorical perception of voicing but did not reflect perception of the stimulus with a VOT of 30 ms. We suggest that the larynx motor area, which is considered to play a crucial role in voicing production, contributes to categorical perception of voicing by complementing temporal processing in the auditory cortical regions.
Affiliation(s)
- Shunsuke Tamura
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
- Nobuyuki Hirose
- Faculty of Information Science and Electrical Engineering, Kyushu University, Fukuoka, Japan
- Takako Mitsudo
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
- Itta Nakamura
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
- Toshiaki Onitsuka
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan
- Yoji Hirano
- Department of Neuropsychiatry, Graduate School of Medical Sciences, Kyushu University, 3-1-1 Maidashi, Higashiku, Fukuoka 812-8582, Japan; Neural Dynamics Laboratory, Research Service, VA Boston Healthcare System, and Department of Psychiatry, Harvard Medical School, Boston, United States
3
Peelle JE, Spehar B, Jones MS, McConkey S, Myerson J, Hale S, Sommers MS, Tye-Murray N. Increased Connectivity among Sensory and Motor Regions during Visual and Audiovisual Speech Perception. J Neurosci 2022; 42:435-442. [PMID: 34815317 PMCID: PMC8802926 DOI: 10.1523/jneurosci.0114-21.2021]
Abstract
In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and at several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS. SIGNIFICANCE STATEMENT: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., background noise). Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism, relying on synchronized brain activity among sensory and motor regions, may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
- Brent Spehar
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
- Michael S Jones
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
- Sarah McConkey
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
- Joel Myerson
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130
- Sandra Hale
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130
- Mitchell S Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130
- Nancy Tye-Murray
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
4
Venezia JH, Richards VM, Hickok G. Speech-Driven Spectrotemporal Receptive Fields Beyond the Auditory Cortex. Hear Res 2021; 408:108307. [PMID: 34311190 PMCID: PMC8378265 DOI: 10.1016/j.heares.2021.108307]
Abstract
We recently developed a method to estimate speech-driven spectrotemporal receptive fields (STRFs) using fMRI. The method uses spectrotemporal modulation filtering, a form of acoustic distortion that renders speech sometimes intelligible and sometimes unintelligible. Using this method, we found significant STRF responses only in classic auditory regions throughout the superior temporal lobes. However, our analysis was not optimized to detect small clusters of STRFs as might be expected in non-auditory regions. Here, we re-analyze our data using a more sensitive multivariate statistical test for cross-subject alignment of STRFs, and we identify STRF responses in non-auditory regions including the left dorsal premotor cortex (dPM), left inferior frontal gyrus (IFG), and bilateral calcarine sulcus (calcS). All three regions responded more to intelligible than unintelligible speech, but left dPM and calcS responded significantly to vocal pitch and demonstrated strong functional connectivity with early auditory regions. Left dPM's STRF generated the best predictions of activation on trials rated as unintelligible by listeners, a hallmark auditory profile. IFG, on the other hand, responded almost exclusively to intelligible speech and was functionally connected with classic speech-language regions in the superior temporal sulcus and middle temporal gyrus. IFG's STRF was also (weakly) able to predict activation on unintelligible trials, suggesting the presence of a partial 'acoustic trace' in the region. We conclude that left dPM is part of the human dorsal laryngeal motor cortex, a region previously shown to be capable of operating in an 'auditory mode' to encode vocal pitch. Further, given previous observations that IFG is involved in syntactic working memory and/or processing of linear order, we conclude that IFG is part of a higher-order speech circuit that exerts a top-down influence on processing of speech acoustics. Finally, because calcS is modulated by emotion, we speculate that changes in the quality of vocal pitch may have contributed to its response.
Affiliation(s)
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Dept. of Otolaryngology, Loma Linda University School of Medicine, Loma Linda, CA, United States
- Virginia M Richards
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, United States
- Gregory Hickok
- Depts. of Cognitive Sciences and Language Science, University of California, Irvine, Irvine, CA, United States
5
Liu L, Yan X, Li H, Gao D, Ding G. Identifying a supramodal language network in human brain with individual fingerprint. Neuroimage 2020; 220:117131. [PMID: 32622983 DOI: 10.1016/j.neuroimage.2020.117131]
Abstract
Where is human language processed in the brain, independent of its form? We addressed this issue by analyzing the cortical responses to spoken, written and signed sentences at the level of individual subjects. By applying a novel fingerprinting method based on the distributed pattern of brain activity, we identified a left-lateralized network composed of the superior temporal gyrus/sulcus (STG/STS), inferior frontal gyrus (IFG), precentral gyrus/sulcus (PCG/PCS), and supplementary motor area (SMA). In these regions, the local distributed activity pattern induced by any of the three language modalities can predict the activity pattern induced by the other two modalities, and such cross-modal prediction is individual-specific. The prediction is successful for speech-sign bilinguals across all possible modality pairs, but fails for monolinguals across sign-involved pairs. In comparison, conventional group-mean-focused analysis detects shared cortical activations across modalities only in the STG, PCG/PCS and SMA, and these shared activations were found in both groups. This study reveals the core language system in the brain that is shared by spoken, written and signed language, and demonstrates that it is possible and desirable to use information about individual differences for functional brain mapping.
Affiliation(s)
- Lanfang Liu
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, 510006, China; State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China
- Xin Yan
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, 48823, United States; Mental Health Center, Wenhua College, Wuhan, 430000, China
- Hehui Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China
- Dingguo Gao
- Guangdong Provincial Key Laboratory of Social Cognitive Neuroscience and Mental Health, Department of Psychology, Sun Yat-sen University, Guangzhou, 510006, China
- Guosheng Ding
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University & IDG/McGovern Institute for Brain Research, Beijing, 100875, China
6
Weber S, Hausmann M, Kane P, Weis S. The relationship between language ability and brain activity across language processes and modalities. Neuropsychologia 2020; 146:107536. [PMID: 32590019 DOI: 10.1016/j.neuropsychologia.2020.107536]
Abstract
Existing neuroimaging studies on the relationship between language ability and brain activity have found contradictory evidence: on the one hand, increased activity with higher language ability has been interpreted as deeper or more adaptive language processing; on the other hand, decreased activity with higher language ability has been interpreted as more efficient language processing. In contrast to previous studies, the current study investigated the relationship between language ability and neural activity across different language processes and modalities while keeping non-linguistic cognitive task demands to a minimum. fMRI data were collected from 22 healthy adults performing a sentence listening task, a sentence reading task and a phonological production task. Outside the MRI scanner, language ability was assessed with the verbal scale of the Wechsler Abbreviated Scale of Intelligence (WASI-II) and a verbal fluency task. As expected, sentence comprehension activated the left anterior temporal lobe while phonological processing activated the left inferior frontal gyrus. Higher language ability was associated with increased activity in the left temporal lobe during auditory sentence processing and with increased activity in the left frontal lobe during phonological processing, reflected in both higher intensity and greater extent of activation. Evidence for decreased activity with higher language ability was less consistent and restricted to verbal fluency. Together, the results predominantly support the hypothesis of deeper language processing in individuals with higher language ability. The consistency of results across language processes, modalities and brain regions suggests a general positive link between language ability and brain activity within the core language network. However, a negative relationship seems to exist for non-linguistic cognitive functions located outside the language network.
Affiliation(s)
- Sarah Weber
- Department of Psychology, Durham University, UK; Department of Biological and Medical Psychology, University of Bergen, Norway
- Susanne Weis
- Institute of Systems Neuroscience, Medical Faculty, Heinrich Heine University Düsseldorf, Düsseldorf, Germany; Institute of Neuroscience and Medicine, Brain & Behaviour (INM-7), Research Centre Jülich, Jülich, Germany
7
Jenson D, Bowers AL, Hudock D, Saltuklaroglu T. The Application of EEG Mu Rhythm Measures to Neurophysiological Research in Stuttering. Front Hum Neurosci 2020; 13:458. [PMID: 31998103 PMCID: PMC6965028 DOI: 10.3389/fnhum.2019.00458]
Abstract
Deficits in basal ganglia-based inhibitory and timing circuits, along with sensorimotor internal modeling mechanisms, are thought to underlie stuttering. However, much remains to be learned regarding precisely how these deficits contribute to disrupting both speech and cognitive functions in those who stutter. Herein, we examine the suitability of electroencephalographic (EEG) mu rhythms for addressing these deficits. We review previous findings of mu rhythm activity differentiating stuttering from non-stuttering individuals and present new preliminary findings capturing stuttering-related deficits in working memory. Mu rhythms are characterized by spectral peaks in the alpha (8-13 Hz) and beta (14-25 Hz) frequency bands (mu-alpha and mu-beta). They emanate from premotor/motor regions and are influenced by basal ganglia and sensorimotor function. More specifically, alpha peaks (mu-alpha) are sensitive to basal ganglia-based inhibitory signals and sensory-to-motor feedback. Beta peaks (mu-beta) are sensitive to changes in timing and capture motor-to-sensory (i.e., forward model) projections. Observing simultaneous changes in mu-alpha and mu-beta across the time-course of specific events provides a rich window for observing neurophysiological deficits associated with stuttering in both speech and cognitive tasks, and can provide a better understanding of the functional relationship between these stuttering symptoms. We review how independent component analysis (ICA) can extract mu rhythms from raw EEG signals in speech production tasks, such that changes in alpha and beta power are distinguished from myogenic activity from the articulators. We review findings from speech production and auditory discrimination tasks demonstrating that mu-alpha and mu-beta are highly sensitive to capturing sensorimotor and basal ganglia deficits associated with stuttering with high temporal precision. Novel findings from a non-word repetition (working memory) task are also included; they show reduced mu-alpha suppression in a stuttering group compared to a typically fluent group. Finally, we review current limitations and directions for future research.
Affiliation(s)
- David Jenson
- Department of Speech and Hearing Sciences, Elson S. Floyd College of Medicine, Washington State University, Spokane, WA, United States
- Andrew L. Bowers
- Epley Center for Health Professions, Communication Sciences and Disorders, University of Arkansas, Fayetteville, AR, United States
- Daniel Hudock
- Department of Communication Sciences and Disorders, Idaho State University, Pocatello, ID, United States
- Tim Saltuklaroglu
- College of Health Professions, Department of Audiology and Speech-Pathology, University of Tennessee Health Science Center, Knoxville, TN, United States
8
Maegherman G, Nuttall HE, Devlin JT, Adank P. Motor Imagery of Speech: The Involvement of Primary Motor Cortex in Manual and Articulatory Motor Imagery. Front Hum Neurosci 2019; 13:195. [PMID: 31244631 PMCID: PMC6579859 DOI: 10.3389/fnhum.2019.00195]
Abstract
Motor imagery refers to the phenomenon of imagining performing an action without action execution. Motor imagery and motor execution are assumed to share a similar underlying neural system that involves primary motor cortex (M1). Previous studies have focused on motor imagery of manual actions, but articulatory motor imagery has not been investigated. In this study, transcranial magnetic stimulation (TMS) was used to elicit motor-evoked potentials (MEPs) from the articulatory muscles [orbicularis oris (OO)] as well as from hand muscles [first dorsal interosseous (FDI)]. Twenty participants were asked to execute or imagine performing a simple squeezing task involving a pair of tweezers, which was comparable across both effectors. MEPs were elicited at six time points (50, 150, 250, 350, 450, 550 ms post-stimulus) to track the time course of M1 involvement in both lip and hand tasks. The results showed increased MEP amplitudes for action execution compared to rest for both effectors at time points 350, 450 and 550 ms, but we found no evidence of increased cortical activation for motor imagery. The results indicate that motor imagery does not involve M1 for simple tasks for manual or articulatory muscles. The results have implications for models of mental imagery of simple articulatory gestures, in that no evidence is found for somatotopic activation of lip muscles in sub-phonemic contexts during motor imagery of such tasks, suggesting that motor simulation of relatively simple actions does not involve M1.
Affiliation(s)
- Gwijde Maegherman
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
- Helen E Nuttall
- Department of Psychology, Lancaster University, Bailrigg, United Kingdom
- Joseph T Devlin
- Department of Experimental Psychology, University College London, London, United Kingdom
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, United Kingdom
9
Thornton D, Harkrider AW, Jenson DE, Saltuklaroglu T. Sex differences in early sensorimotor processing for speech discrimination. Sci Rep 2019; 9:392. [PMID: 30674942 PMCID: PMC6344575 DOI: 10.1038/s41598-018-36775-5]
Abstract
Sensorimotor activity in speech perception tasks varies as a function of context, cognitive load, and cognitive ability. This study investigated listener sex as an additional variable. Raw EEG data were collected as 21 males and 21 females discriminated /ba/ and /da/ in quiet and noisy backgrounds. Independent component analyses of data from accurately discriminated trials identified sensorimotor mu components with characteristic alpha and beta peaks from 16 members of each sex. Time-frequency decompositions showed that in quiet discrimination, females displayed stronger early mu-alpha synchronization, whereas males showed stronger mu-beta desynchronization. Findings indicate that early attentional mechanisms for speech discrimination were characterized by sensorimotor inhibition in females and predictive sensorimotor activation in males. Both sexes showed stronger early sensorimotor inhibition in noisy discrimination conditions versus in quiet, suggesting sensory gating of the noise. However, the difference in neural activation between quiet and noisy conditions was greater in males than females. Though sex differences appear unrelated to behavioral accuracy, they suggest that males and females exhibit early sensorimotor processing for speech discrimination that is fundamentally different, yet similarly adaptable to adverse conditions. Findings have implications for understanding variability in neuroimaging data and the male prevalence in various neurodevelopmental disorders with inhibitory dysfunction.
Affiliation(s)
| | - Ashley W Harkrider
- University of Tennessee Health Science Center, Knoxville, TN, 37996, USA
| | - David E Jenson
- Elson S. Floyd College of Medicine, Washington State University, Spokane, WA, 99202, USA
| | - Tim Saltuklaroglu
- University of Tennessee Health Science Center, Knoxville, TN, 37996, USA
| |
Collapse
|
10
Saltuklaroglu T, Bowers A, Harkrider AW, Casenhiser D, Reilly KJ, Jenson DE, Thornton D. EEG mu rhythms: Rich sources of sensorimotor information in speech processing. Brain Lang 2018; 187:41-61. [PMID: 30509381 DOI: 10.1016/j.bandl.2018.09.005]
Affiliation(s)
- Tim Saltuklaroglu
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Andrew Bowers
- University of Arkansas, Epley Center for Health Professions, 606 N. Razorback Road, Fayetteville, AR 72701, USA
- Ashley W Harkrider
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Devin Casenhiser
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Kevin J Reilly
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- David E Jenson
- Department of Speech and Hearing Sciences, Elson S. Floyd College of Medicine, Spokane, WA 99210-1495, USA
- David Thornton
- Department of Hearing, Speech, and Language Sciences, Gallaudet University, 800 Florida Avenue NE, Washington, DC 20002, USA
11
Panouillères MTN, Möttönen R. Decline of auditory-motor speech processing in older adults with hearing loss. Neurobiol Aging 2018; 72:89-97. [PMID: 30240945 DOI: 10.1016/j.neurobiolaging.2018.07.013]
Abstract
Older adults often experience difficulties in understanding speech, partly because of age-related hearing loss (HL). In young adults, activity of the left articulatory motor cortex is enhanced and it interacts with the auditory cortex via the left-hemispheric dorsal stream during speech processing. Little is known about the effect of aging and age-related HL on this auditory-motor interaction and speech processing in the articulatory motor cortex. It has been proposed that upregulation of the motor system during speech processing could compensate for HL and auditory processing deficits in older adults. Alternatively, age-related auditory deficits could reduce and distort the input from the auditory cortex to the articulatory motor cortex, suppressing recruitment of the motor system during listening to speech. The aim of the present study was to investigate the effects of aging and age-related HL on the excitability of the tongue motor cortex during listening to spoken sentences using transcranial magnetic stimulation and electromyography. Our results show that the excitability of the tongue motor cortex was facilitated during listening to speech in young and older adults with normal hearing. This facilitation was significantly reduced in older adults with HL. These findings suggest a decline of auditory-motor processing of speech in adults with age-related HL.
Affiliation(s)
- Muriel T N Panouillères
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Sports Sciences and Human Movement, CIAMS, Université Paris-Sud, Université Paris-Saclay, Orsay, France; UFR Collegium Sciences et Techniques, CIAMS, Université d'Orléans, Orléans, France
- Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Psychology, University of Nottingham, Nottingham, UK
12
Glanz Iljina O, Derix J, Kaur R, Schulze-Bonhage A, Auer P, Aertsen A, Ball T. Real-life speech production and perception have a shared premotor-cortical substrate. Sci Rep 2018; 8:8898. [PMID: 29891885 PMCID: PMC5995900 DOI: 10.1038/s41598-018-26801-x]
Abstract
Motor-cognitive accounts assume that the articulatory cortex is involved in language comprehension, but previous studies may have observed such an involvement as an artefact of experimental procedures. Here, we employed electrocorticography (ECoG) during natural, non-experimental behavior, combined with electrocortical stimulation mapping, to study the neural basis of real-life human verbal communication. We took advantage of ECoG's ability to capture high-gamma activity (70–350 Hz) as a spatially and temporally precise index of cortical activation during unconstrained, naturalistic speech production and perception conditions. Our findings show that an electrostimulation-defined mouth motor region located in the superior ventral premotor cortex is consistently activated during both conditions. This region became active early relative to the onset of speech production and was recruited during speech perception regardless of acoustic background noise. Our study thus pinpoints a shared ventral premotor substrate for real-life speech production and perception and characterizes its basic properties.
Affiliation(s)
- Olga Glanz Iljina
- GRK 1624 'Frequency Effects in Language', University of Freiburg, Freiburg, Germany; Department of German Linguistics, University of Freiburg, Freiburg, Germany; Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany; Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany.
- Johanna Derix
- Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany.
- Rajbir Kaur
- Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Faculty of Medicine, University of Cologne, Cologne, Germany.
- Andreas Schulze-Bonhage
- BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Epilepsy Center, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany.
- Peter Auer
- GRK 1624 'Frequency Effects in Language', University of Freiburg, Freiburg, Germany; Department of German Linguistics, University of Freiburg, Freiburg, Germany; Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany.
- Ad Aertsen
- Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany.
- Tonio Ball
- Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany; BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany.
13
Panouillères MTN, Boyles R, Chesters J, Watkins KE, Möttönen R. Facilitation of motor excitability during listening to spoken sentences is not modulated by noise or semantic coherence. Cortex 2018; 103:44-54. [PMID: 29554541] [PMCID: PMC6002609] [DOI: 10.1016/j.cortex.2018.02.007]
Abstract
Comprehending speech can be particularly challenging in a noisy environment and in the absence of semantic context. It has been proposed that the articulatory motor system would be recruited especially in difficult listening conditions. However, it remains unknown how signal-to-noise ratio (SNR) and semantic context affect the recruitment of the articulatory motor system when listening to continuous speech. The aim of the present study was to address the hypothesis that involvement of the articulatory motor cortex increases when the intelligibility and clarity of the spoken sentences decreases, because of noise and the lack of semantic context. We applied Transcranial Magnetic Stimulation (TMS) to the lip and hand representations in the primary motor cortex and measured motor evoked potentials from the lip and hand muscles, respectively, to evaluate motor excitability when young adults listened to sentences. In Experiment 1, we found that the excitability of the lip motor cortex was facilitated during listening to both semantically anomalous and coherent sentences in noise relative to non-speech baselines, but neither SNR nor semantic context modulated the facilitation. In Experiment 2, we replicated these findings and found no difference in the excitability of the lip motor cortex between sentences in noise and clear sentences without noise. Thus, our results show that the articulatory motor cortex is involved in speech processing even in optimal and ecologically valid listening conditions and that its involvement is not modulated by the intelligibility and clarity of speech.
Affiliation(s)
- Rowan Boyles
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom.
- Jennifer Chesters
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom.
- Kate E Watkins
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom.
- Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Psychology, University of Nottingham, Nottingham, United Kingdom.
14
Pulvermüller F. Neural reuse of action perception circuits for language, concepts and communication. Prog Neurobiol 2017; 160:1-44. [PMID: 28734837] [DOI: 10.1016/j.pneurobio.2017.07.001]
Abstract
Neurocognitive and neurolinguistic theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity along with action- and perception-induced correlation of neuronal activity co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and connections between, APCs. The network models and, in particular, the concept of distributionally-specific circuits, can account for some previously poorly understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically.
Affiliation(s)
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy & Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany; Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany.
15
Schomers MR, Garagnani M, Pulvermüller F. Neurocomputational Consequences of Evolutionary Connectivity Changes in Perisylvian Language Cortex. J Neurosci 2017; 37:3045-3055. [PMID: 28193685] [PMCID: PMC5354338] [DOI: 10.1523/jneurosci.2693-16.2017]
Abstract
The human brain sets itself apart from that of its primate relatives by specific neuroanatomical features, especially the strong linkage of left perisylvian language areas (frontal and temporal cortex) by way of the arcuate fasciculus (AF). AF connectivity has been shown to correlate with verbal working memory, a specifically human trait providing the foundation for language abilities, but a mechanistic explanation of any related causal link between anatomical structure and cognitive function is still missing. Here, we provide a possible explanation and link, by using neurocomputational simulations in neuroanatomically structured models of the perisylvian language cortex. We compare networks mimicking key features of cortical connectivity in monkeys and humans, specifically the presence of relatively stronger higher-order "jumping links" between nonadjacent perisylvian cortical areas in the latter, and demonstrate that the emergence of working memory for syllables and word forms is a functional consequence of this structural evolutionary change. We also show that a mere increase of learning time is not sufficient, but that this specific structural feature, which entails higher connectivity degree of relevant areas and shorter sensorimotor path length, is crucial. These results offer a better understanding of specifically human anatomical features underlying the language faculty and their evolutionary selection advantage.

SIGNIFICANCE STATEMENT: Why do humans have superior language abilities compared to primates? Recently, a uniquely human neuroanatomical feature has been demonstrated in the strength of the arcuate fasciculus (AF), a fiber pathway interlinking the left-hemispheric language areas. Although AF anatomy has been related to linguistic skills, an explanation of how this fiber bundle may support language abilities is still missing. We use neuroanatomically structured computational models to investigate the consequences of evolutionary changes in language area connectivity and demonstrate that the human-specific higher connectivity degree and comparatively shorter sensorimotor path length implicated by the AF entail emergence of verbal working memory, a prerequisite for language learning. These results offer a better understanding of specifically human anatomical features for language and their evolutionary selection advantage.
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany.
- Max Garagnani
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany; Centre for Robotics and Neural Systems, University of Plymouth, Plymouth PL4 8AA, United Kingdom; Department of Computing, Goldsmiths, University of London, London SE14 6NW, United Kingdom.
- Friedemann Pulvermüller
- Brain Language Laboratory, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany.
16
Hobson HM, Bishop DVM. The interpretation of mu suppression as an index of mirror neuron activity: past, present and future. R Soc Open Sci 2017; 4:160662. [PMID: 28405354] [PMCID: PMC5383811] [DOI: 10.1098/rsos.160662]
Abstract
Mu suppression studies have been widely used to infer the activity of the human mirror neuron system (MNS) in a number of processes, including action understanding, language, empathy and the development of autism spectrum disorders (ASDs). Although mu suppression is enjoying a resurgence of interest, it has a long history. This review aimed to revisit mu's past and examine its recent use to investigate MNS involvement in language, social processes and ASDs. Mu suppression studies have largely failed to produce robust evidence for the role of the MNS in these domains. Several key potential shortcomings with the use and interpretation of mu suppression, documented in the older literature and highlighted by more recent reports, are explored here.
17
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. [PMID: 27821280] [DOI: 10.1016/j.bandl.2016.10.004]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom.
- Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom.
- Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom.
18
Keane A. A common neural mechanism for speech perception and movement initiation specialized for place of articulation. Cogent Psychology 2016. [DOI: 10.1080/23311908.2016.1233649]
Affiliation(s)
- A.M. Keane
- Psychology, National University of Ireland, Galway, Ireland
19
Ward CM, Rogers CS, Van Engen KJ, Peelle JE. Effects of Age, Acoustic Challenge, and Verbal Working Memory on Recall of Narrative Speech. Exp Aging Res 2016; 42:97-111. [PMID: 26683044] [DOI: 10.1080/0361073x.2016.1108785]
Abstract
BACKGROUND/STUDY CONTEXT: A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects.
METHODS: The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail.
RESULTS: Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better-ear pure tone average) did not.
CONCLUSION: The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
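The methods above degrade speech with a noise-vocoding algorithm: the signal is split into frequency bands, each band's amplitude envelope is extracted and used to modulate band-limited noise, and the bands are summed. A minimal sketch of this idea follows; it is a generic illustration, not the authors' implementation, and the band edges, filter order, and the `noise_vocode` name are our assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=16, f_lo=100.0, f_hi=7000.0):
    """Replace the fine structure of speech with noise while preserving
    each frequency band's amplitude envelope (noise vocoding)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)                    # seeded noise carrier
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        envelope = np.abs(hilbert(band))              # band amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += envelope * carrier                     # envelope-modulated noise
    # Match the output power to the input power
    out *= np.sqrt(np.mean(np.square(signal)) / np.mean(np.square(out)))
    return out
```

Fewer channels discard more spectral detail; the 24- and 16-channel conditions in the study kept the stories fully intelligible while still taxing processing.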
Affiliation(s)
- Caitlin M Ward
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
- Kristin J Van Engen
- Department of Psychology, Washington University in St. Louis, St. Louis, Missouri, USA
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
20
Schomers MR, Pulvermüller F. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review. Front Hum Neurosci 2016; 10:435. [PMID: 27708566] [PMCID: PMC5030253] [DOI: 10.3389/fnhum.2016.00435]
Abstract
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and 'motor noise' caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex in speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
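Representational similarity analysis (RSA), highlighted in this review, compares a neural representational dissimilarity matrix (RDM), computed from activity patterns, against a model RDM. A minimal sketch of the core computation follows; this is our own illustration, not any cited study's pipeline, and `rsa_correlation` and its inputs are hypothetical names.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rsa_correlation(patterns, model_rdm):
    """Rank-correlate a neural RDM with a model RDM.

    patterns: (n_conditions, n_features) array of activity patterns.
    model_rdm: condensed (upper-triangle) model dissimilarities.
    """
    neural_rdm = pdist(patterns, metric="correlation")  # 1 - Pearson r per pair
    rho, _ = spearmanr(neural_rdm, model_rdm)           # compare RDM rankings
    return rho
```

A neural RDM that perfectly mirrors the model yields rho = 1; chance-level correspondence yields rho near 0.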
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
21
Peelle JE, Wingfield A. The Neural Consequences of Age-Related Hearing Loss. Trends Neurosci 2016; 39:486-497. [PMID: 27262177] [DOI: 10.1016/j.tins.2016.05.001]
Abstract
During hearing, acoustic signals travel up the ascending auditory pathway from the cochlea to auditory cortex; efferent connections provide descending feedback. In human listeners, although auditory and cognitive processing have sometimes been viewed as separate domains, a growing body of work suggests they are intimately coupled. Here, we review the effects of hearing loss on neural systems supporting spoken language comprehension, beginning with age-related physiological decline. We suggest that listeners recruit domain general executive systems to maintain successful communication when the auditory signal is degraded, but that this compensatory processing has behavioral consequences: even relatively mild levels of hearing loss can lead to cascading cognitive effects that impact perception, comprehension, and memory, leading to increased listening effort during speech comprehension.
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St Louis, St Louis, MO, USA.
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA.
22
Wayne RV, Hamilton C, Jones Huyck J, Johnsrude IS. Working Memory Training and Speech in Noise Comprehension in Older Adults. Front Aging Neurosci 2016; 8:49. [PMID: 27047370] [PMCID: PMC4801856] [DOI: 10.3389/fnagi.2016.00049]
Abstract
Understanding speech in the presence of background sound can be challenging for older adults. Speech comprehension in noise appears to depend on working memory and executive-control processes (e.g., Heald and Nusbaum, 2014), and their augmentation through training may have rehabilitative potential for age-related hearing loss. We examined the efficacy of adaptive working-memory training (Cogmed; Klingberg et al., 2002) in 24 older adults, assessing generalization to other working-memory tasks (near transfer) and to other cognitive domains (far transfer) using a cognitive test battery, including the Reading Span test, sensitive to working memory (e.g., Daneman and Carpenter, 1980). We also assessed far transfer to speech-in-noise performance, including a closed-set sentence task (Kidd et al., 2008). To examine the effect of cognitive training on the benefit obtained from semantic context, we also assessed transfer to open-set sentences; half were semantically coherent (high context) and half were semantically anomalous (low context). Subjects completed 25 sessions (0.5–1 h each; 5 sessions/week) of both adaptive working-memory training and placebo training over 10 weeks in a crossover design. Subjects' scores on the adaptive working-memory training tasks improved as a result of training. However, training did not transfer to other working-memory tasks, nor to tasks recruiting other cognitive domains. We did not observe any training-related improvement in speech-in-noise performance. Measures of working memory correlated with the intelligibility of low-context, but not high-context, sentences, suggesting that sentence context may reduce the load on working memory. The Reading Span test significantly correlated only with a test of visual episodic memory, suggesting that it is not a pure test of working memory, as is commonly assumed.
Affiliation(s)
- Rachel V Wayne
- Department of Psychology, Queen's University, Kingston, ON, Canada
- Cheryl Hamilton
- Department of Psychology, Queen's University, Kingston, ON, Canada
- Ingrid S Johnsrude
- Department of Psychology, Queen's University, Kingston, ON, Canada; Department of Psychology, School of Communication Sciences and Disorders, The Brain and Mind Institute, University of Western Ontario, London, ON, Canada
23
Nuttall HE, Kennedy-Higgins D, Hogan J, Devlin JT, Adank P. The effect of speech distortion on the excitability of articulatory motor cortex. Neuroimage 2016; 128:218-226. [DOI: 10.1016/j.neuroimage.2015.12.038]
24
Jenson D, Harkrider AW, Thornton D, Bowers AL, Saltuklaroglu T. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm. Front Hum Neurosci 2015; 9:534. [PMID: 26500519] [PMCID: PMC4597480] [DOI: 10.3389/fnhum.2015.00534]
Abstract
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68 channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants localized to pSTG (left) and pMTG (right). ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Andrew L. Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
25
Turner BO, Marinsek N, Ryhal E, Miller MB. Hemispheric lateralization in reasoning. Ann N Y Acad Sci 2015; 1359:47-64. [PMID: 26426534] [DOI: 10.1111/nyas.12940]
Abstract
A growing body of evidence suggests that reasoning in humans relies on a number of related processes whose neural loci are largely lateralized to one hemisphere or the other. A recent review of this evidence concluded that the patterns of lateralization observed are organized according to two complementary tendencies. The left hemisphere attempts to reduce uncertainty by drawing inferences or creating explanations, even at the cost of ignoring conflicting evidence or generating implausible explanations. Conversely, the right hemisphere aims to reduce conflict by rejecting or refining explanations that are no longer tenable in the face of new evidence. In healthy adults, the hemispheres work together to achieve a balance between certainty and consistency, and a wealth of neuropsychological research supports the notion that upsetting this balance results in various failures in reasoning, including delusions. However, support for this model from the neuroimaging literature is mixed. Here, we examine the evidence for this framework from multiple research domains, including an activation likelihood estimation analysis of functional magnetic resonance imaging studies of reasoning. Our results suggest a need to either revise this model as it applies to healthy adults or to develop better tools for assessing lateralization in these individuals.
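The abstract above reports an activation likelihood estimation (ALE) analysis, which models each study's reported activation foci as 3-D Gaussians and combines the resulting per-study maps as a probabilistic union. A toy sketch of that core idea follows, under simplifying assumptions (peak-normalized Gaussians on an integer voxel grid, a single shared sigma); `ale_map` is our own name, not part of any ALE software.

```python
import numpy as np

def ale_map(foci_per_study, shape, sigma=2.0):
    """Toy activation likelihood estimate over a voxel grid.

    foci_per_study: list of studies, each a list of (x, y, z) foci.
    Each study contributes a modeled-activation (MA) map (voxelwise max
    across its foci); studies combine via the union 1 - prod(1 - MA).
    """
    grid = np.indices(shape).reshape(3, -1).T     # (n_voxels, 3) coordinates
    ale = np.zeros(len(grid))
    for foci in foci_per_study:
        ma = np.zeros(len(grid))
        for focus in foci:
            d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1)
            ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)      # probabilistic union
    return ale.reshape(shape)
```

In real ALE analyses the resulting map is then tested against a null distribution of randomly relocated foci to identify above-chance convergence.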
Affiliation(s)
- Benjamin O Turner
- Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, California
- Nicole Marinsek
- Dynamical Neuroscience, University of California Santa Barbara, Santa Barbara, California
- Emily Ryhal
- Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, California
- Michael B Miller
- Department of Psychological & Brain Sciences, University of California Santa Barbara, Santa Barbara, California
26
Abstract
Models propose an auditory-motor mapping via a left-hemispheric dorsal speech-processing stream, yet its detailed contributions to speech perception and production are unclear. Using fMRI-navigated repetitive transcranial magnetic stimulation (rTMS), we virtually lesioned left dorsal stream components in healthy human subjects and probed the consequences on speech-related facilitation of articulatory motor cortex (M1) excitability, as indexed by increases in motor-evoked potential (MEP) amplitude of a lip muscle, and on speech processing performance in phonological tests. Speech-related MEP facilitation was disrupted by rTMS of the posterior superior temporal sulcus (pSTS), the sylvian parieto-temporal region (SPT), and by double-knock-out but not individual lesioning of pars opercularis of the inferior frontal gyrus (pIFG) and the dorsal premotor cortex (dPMC), and not by rTMS of the ventral speech-processing stream or an occipital control site. RTMS of the dorsal stream but not of the ventral stream or the occipital control site caused deficits specifically in the processing of fast transients of the acoustic speech signal. Performance of syllable and pseudoword repetition correlated with speech-related MEP facilitation, and this relation was abolished with rTMS of pSTS, SPT, and pIFG. Findings provide direct evidence that auditory-motor mapping in the left dorsal stream causes reliable and specific speech-related MEP facilitation in left articulatory M1. The left dorsal stream targets the articulatory M1 through pSTS and SPT constituting essential posterior input regions and parallel via frontal pathways through pIFG and dPMC. Finally, engagement of the left dorsal stream is necessary for processing of fast transients in the auditory signal.
27
Abstract
A fundamental goal of the human auditory system is to map complex acoustic signals onto stable internal representations of the basic sound patterns of speech. Phonemes and the distinctive features that they comprise constitute the basic building blocks from which higher-level linguistic representations, such as words and sentences, are formed. Although the neural structures underlying phonemic representations have been well studied, there is considerable debate regarding frontal-motor cortical contributions to speech as well as the extent of lateralization of phonological representations within auditory cortex. Here we used functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis to investigate the distributed patterns of activation that are associated with the categorical and perceptual similarity structure of 16 consonant exemplars in the English language used in Miller and Nicely's (1955) classic study of acoustic confusability. Participants performed an incidental task while listening to phonemes in the MRI scanner. Neural activity in bilateral anterior superior temporal gyrus and supratemporal plane was correlated with the first two components derived from a multidimensional scaling analysis of a behaviorally derived confusability matrix. We further showed that neural representations corresponding to the categorical features of voicing, manner of articulation, and place of articulation were widely distributed throughout bilateral primary, secondary, and association areas of the superior temporal cortex, but not motor cortex. Although classification of phonological features was generally bilateral, we found that multivariate pattern information was moderately stronger in the left compared with the right hemisphere for place but not for voicing or manner of articulation.
28
Archila-Suerte P, Zevin J, Hernandez AE. The effect of age of acquisition, socioeducational status, and proficiency on the neural processing of second language speech sounds. Brain and Language 2015; 141:35-49. [PMID: 25528287; PMCID: PMC5956909; DOI: 10.1016/j.bandl.2014.11.005]
Abstract
This study investigates the roles of age of acquisition (AoA), socioeducational status (SES), and second language (L2) proficiency in the neural processing of L2 speech sounds. In a task of pre-attentive listening and passive viewing, Spanish-English bilinguals and a control group of English monolinguals listened to English syllables while watching a film of natural scenery. Eight regions of interest were selected from brain areas involved in speech perception and executive processes. The regions of interest were examined in two separate two-way ANOVAs (AoA × SES; AoA × L2 proficiency). The results showed that AoA was the main variable affecting the neural response in L2 speech processing. Direct comparisons between AoA groups of equivalent SES and proficiency level enhanced the intensity and magnitude of the results. These results suggest that AoA, more than SES and proficiency level, determines which brain regions are recruited for the processing of second language speech sounds.
Affiliation(s)
- Jason Zevin
- Sackler Institute for Developmental Psychobiology, Weill Medical College of Cornell University, 1300 York Ave., Box 140, NY, NY 10065, United States.

29
Suarez RO, Taimouri V, Boyer K, Vega C, Rotenberg A, Madsen JR, Loddenkemper T, Duffy FH, Prabhu SP, Warfield SK. Passive fMRI mapping of language function for pediatric epilepsy surgical planning: validation using Wada, ECS, and FMAER. Epilepsy Res 2014; 108:1874-88. [PMID: 25445239; DOI: 10.1016/j.eplepsyres.2014.09.016]
Abstract
In this study, we validate passive language fMRI protocols designed for clinical application in pediatric epilepsy surgical planning, as they do not require overt participation from patients. We introduced a set of quality checks that assess the reliability of noninvasive fMRI mappings used for clinical purposes. We initially compared two fMRI language mapping paradigms, one active (requiring participation from the patient) and the other passive (requiring no participation from the patient). Group-level analysis in a healthy control cohort demonstrated similar activation of the putative language centers of the brain in the inferior frontal (IFG) and temporoparietal (TPG) regions. Additionally, we showed that passive language fMRI produced more left-lateralized activation in TPG (LI = +0.45) compared to the active task, with similarly robust left-lateralized IFG (LI = +0.24) activation using the passive task. We validated our recommended fMRI mapping protocols in a cohort of 15 pediatric epilepsy patients by direct comparison against the invasive clinical gold standards. We found that language-specific TPG activation by fMRI agreed to within 9.2 mm with subdural localizations by invasive functional mapping in the same patients, and language dominance by fMRI agreed with Wada test results at 80% congruency in TPG and 73% congruency in IFG. Lastly, we tested the recommended passive language fMRI protocols in a cohort of very young patients and confirmed reliable language-specific activation patterns in that challenging cohort. We concluded that reliable language activation maps can be achieved using the proposed passive language fMRI protocols even in very young (average age 7.5 years) or sedated pediatric epilepsy patients.
Affiliation(s)
- Ralph O Suarez
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA.
- Vahid Taimouri
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Katrina Boyer
- Department of Psychology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA; Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Clemente Vega
- Department of Psychology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA; Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Alexander Rotenberg
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Joseph R Madsen
- Department of Neurosurgery, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Tobias Loddenkemper
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Frank H Duffy
- Department of Neurology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Sanjay P Prabhu
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA
- Simon K Warfield
- Department of Radiology, Boston Children's Hospital, Harvard Medical School, Boston, MA, USA

30
Oh A, Duerden EG, Pang EW. The role of the insula in speech and language processing. Brain and Language 2014; 135:96-103. [PMID: 25016092; PMCID: PMC4885738; DOI: 10.1016/j.bandl.2014.06.003]
Abstract
Lesion and neuroimaging studies indicate that the insula mediates motor aspects of speech production, specifically, articulatory control. Although it has direct connections to Broca's area, the canonical speech production region, the insula is also broadly connected with other speech and language centres, and may play a role in coordinating higher-order cognitive aspects of speech and language production. The extent of the insula's involvement in speech and language processing was assessed using the Activation Likelihood Estimation (ALE) method. Meta-analyses of 42 fMRI studies with healthy adults were performed, comparing insula activation during performance of language (expressive and receptive) and speech (production and perception) tasks. Both tasks activated bilateral anterior insulae. However, speech perception tasks preferentially activated the left dorsal mid-insula, whereas expressive language tasks activated left ventral mid-insula. Results suggest distinct regions of the mid-insula play different roles in speech and language processing.
Affiliation(s)
- Anna Oh
- Neurosciences and Mental Health, SickKids Research Institute, Toronto, Canada
- Emma G Duerden
- Neurosciences and Mental Health, SickKids Research Institute, Toronto, Canada; Diagnostic Imaging, Hospital for Sick Children, Toronto, Canada; Department of Paediatrics, University of Toronto, Toronto, Canada
- Elizabeth W Pang
- Neurosciences and Mental Health, SickKids Research Institute, Toronto, Canada; Neurology, Hospital for Sick Children, Toronto, Canada; Department of Paediatrics, University of Toronto, Toronto, Canada.

31
Jenson D, Bowers AL, Harkrider AW, Thornton D, Cuellar M, Saltuklaroglu T. Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data. Front Psychol 2014; 5:656. [PMID: 25071633; PMCID: PMC4091311; DOI: 10.3389/fpsyg.2014.00656]
Abstract
Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within the alpha and beta bands. Seventeen and 15 of the 20 participants produced left and right μ components, respectively, localized to the precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production.
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Andrew L. Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Megan Cuellar
- Speech-Language Pathology Program, College of Health Sciences, Midwestern University, Chicago, IL, USA
- Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA

32
Anderson JAE, Campbell KL, Amer T, Grady CL, Hasher L. Timing is everything: Age differences in the cognitive control network are modulated by time of day. Psychol Aging 2014; 29:648-657. [PMID: 24999661; DOI: 10.1037/a0037243]
Abstract
Behavioral evidence suggests that the attention-based ability to regulate distraction varies across the day in synchrony with a circadian arousal rhythm that changes across the life span. Using functional magnetic resonance imaging (fMRI), we assessed whether neural activity in an attention control network also varies across the day and with behavioral markers. We tested older adults in the morning or afternoon, and younger adults in the afternoon, using a 1-back task with superimposed distractors, followed by an implicit test for the distractors. Behavioral results replicated earlier findings, with older adults tested in the morning better able to ignore distraction than those tested in the afternoon. Imaging results showed that time of testing modulates task-related fMRI signals in older adults and that age differences were reduced when older adults were tested at peak times of day. In particular, older adults tested in the morning activated cognitive control regions similar to those activated by young adults (rostral prefrontal and superior parietal cortex), whereas older adults tested in the afternoon were reliably different; furthermore, the degree to which participants were able to activate these control regions correlated with the ability to suppress distracting information.
33
Bowers AL, Saltuklaroglu T, Harkrider A, Wilson M, Toner MA. Dynamic modulation of shared sensory and motor cortical rhythms mediates speech and non-speech discrimination performance. Front Psychol 2014; 5:366. [PMID: 24847290; PMCID: PMC4019855; DOI: 10.3389/fpsyg.2014.00366]
Abstract
Oscillatory models of speech processing have proposed that rhythmic cortical oscillations in sensory and motor regions modulate speech sound processing from the bottom up via phase reset at low frequencies (3-10 Hz) and from the top down via the disinhibition of alpha/beta rhythms (8-30 Hz). To investigate how the proposed rhythms mediate perceptual performance, electroencephalography (EEG) was recorded while participants passively listened to or actively identified speech and tone-sweeps in a two-forced-choice in-noise discrimination task presented at high and low signal-to-noise ratios. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB. Left and right hemisphere sensorimotor and posterior temporal lobe clusters were identified. Alpha and beta suppression was associated with active tasks only in sensorimotor and temporal clusters. In posterior temporal clusters, increases in phase reset at low frequencies were driven by the quality of bottom-up acoustic information for speech and non-speech stimuli, whereas phase reset in sensorimotor clusters was associated with top-down active task demands. A comparison of correct discrimination trials to those identified at chance showed an earlier performance-related effect for the left sensorimotor cluster relative to the left temporal lobe cluster during the syllable discrimination task only. The right sensorimotor cluster was associated with performance-related differences for tone-sweep stimuli only. Findings are consistent with internal model accounts suggesting that early efferent sensorimotor models transmitted along alpha and beta channels reflect a release from inhibition related to active attention to auditory discrimination.
Results are discussed in the broader context of dynamic, oscillatory models of cognition proposing that top-down internally generated states interact with bottom-up sensory processing to enhance task performance.
Affiliation(s)
- Andrew L Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Ashley Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Matt Wilson
- School of Allied Health, Northern Illinois University, DeKalb, IL, USA
- Mary A Toner
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA

34
Alho J, Lin FH, Sato M, Tiitinen H, Sams M, Jääskeläinen IP. Enhanced neural synchrony between left auditory and premotor cortex is associated with successful phonetic categorization. Front Psychol 2014; 5:394. [PMID: 24834062; PMCID: PMC4018533; DOI: 10.3389/fpsyg.2014.00394]
Abstract
The cortical dorsal auditory stream has been proposed to mediate mapping between auditory and articulatory-motor representations in speech processing. Whether this sensorimotor integration contributes to speech perception remains an open question. Here, magnetoencephalography was used to examine connectivity between auditory and motor areas while subjects were performing a sensorimotor task involving speech sound identification and overt repetition. Functional connectivity was estimated with inter-areal phase synchrony of electromagnetic oscillations. Structural equation modeling was applied to determine the direction of information flow. Compared to passive listening, engagement in the sensorimotor task enhanced connectivity within 200 ms after sound onset bilaterally between the temporoparietal junction (TPJ) and ventral premotor cortex (vPMC), with the left-hemisphere connection showing directionality from vPMC to TPJ. Passive listening to noisy speech elicited stronger connectivity than clear speech between left auditory cortex (AC) and vPMC at ~100 ms, and between left TPJ and dorsal premotor cortex (dPMC) at ~200 ms. Information flow was estimated from AC to vPMC and from dPMC to TPJ. Connectivity strength among the left AC, vPMC, and TPJ correlated positively with the identification of speech sounds within 150 ms after sound onset, with information flowing from AC to TPJ, from AC to vPMC, and from vPMC to TPJ. Taken together, these findings suggest that sensorimotor integration mediates the categorization of incoming speech sounds through reciprocal auditory-to-motor and motor-to-auditory projections.
Affiliation(s)
- Jussi Alho
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland
- Fa-Hsuan Lin
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland; Institute of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Marc Sato
- Gipsa-Lab, Department of Speech and Cognition, French National Center for Scientific Research and Grenoble University, Grenoble, France
- Hannu Tiitinen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland
- Mikko Sams
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland; MEG Core, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland; AMI Centre, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland

35
Guediche S, Blumstein SE, Fiez JA, Holt LL. Speech perception under adverse conditions: insights from behavioral, computational, and neuroscience research. Front Syst Neurosci 2014; 7:126. [PMID: 24427119; PMCID: PMC3879477; DOI: 10.3389/fnsys.2013.00126]
Abstract
Adult speech perception reflects the long-term regularities of the native language, but it is also flexible such that it accommodates and adapts to adverse listening conditions and short-term deviations from native-language norms. The purpose of this article is to examine how the broader neuroscience literature can inform and advance research efforts in understanding the neural basis of flexibility and adaptive plasticity in speech perception. Specifically, we highlight the potential role of learning algorithms that rely on prediction error signals and discuss specific neural structures that are likely to contribute to such learning. To this end, we review behavioral studies, computational accounts, and neuroimaging findings related to adaptive plasticity in speech perception. Already, a few studies have alluded to a potential role of these mechanisms in adaptive plasticity in speech perception. Furthermore, we consider research topics in neuroscience that offer insight into how perception can be adaptively tuned to short-term deviations while balancing the need to maintain stability in the perception of learned long-term regularities. Consideration of the application and limitations of these algorithms in characterizing flexible speech perception under adverse conditions promises to inform theoretical models of speech.
Affiliation(s)
- Sara Guediche
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Sheila E. Blumstein
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, USA
- Department of Cognitive, Linguistic, and Psychological Sciences, Brain Institute, Brown University, Providence, RI, USA
- Julie A. Fiez
- Department of Neuroscience, Center for Neuroscience at the University of Pittsburgh, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Psychology at Carnegie Mellon University and Department of Neuroscience at the University of Pittsburgh, Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Lori L. Holt
- Department of Neuroscience, Center for Neuroscience at the University of Pittsburgh, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Psychology at Carnegie Mellon University and Department of Neuroscience at the University of Pittsburgh, Center for the Neural Basis of Cognition, Pittsburgh, PA, USA
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA

36
Suppression of the µ rhythm during speech and non-speech discrimination revealed by independent component analysis: implications for sensorimotor integration in speech processing. PLoS One 2013; 8:e72024. [PMID: 23991030; PMCID: PMC3750026; DOI: 10.1371/journal.pone.0072024]
Abstract
Background: Constructivist theories propose that articulatory hypotheses about incoming phonetic targets may function to enhance perception by limiting the possibilities for sensory analysis. To provide evidence for this proposal, it is necessary to map ongoing, high-temporal-resolution changes in sensorimotor activity (i.e., the sensorimotor μ rhythm) to accurate speech and non-speech discrimination performance (i.e., correct trials).
Methods: Sixteen participants (15 female and 1 male) were asked to passively listen to or actively identify speech and tone-sweeps in a two-forced-choice discrimination task while the electroencephalogram (EEG) was recorded from 32 channels. The stimuli were presented at signal-to-noise ratios (SNRs) at which discrimination accuracy was high (i.e., 80-100%) and at low SNRs producing discrimination performance at chance. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB.
Results: ICA revealed left and right sensorimotor µ components for 14/16 and 13/16 participants, respectively, that were identified on the basis of scalp topography, spectral peaks, and localization to the precentral and postcentral gyri. Time-frequency analysis of left and right lateralized µ component clusters revealed significant (pFDR < .05) suppression in the traditional beta frequency range (13-30 Hz) prior to, during, and following syllable discrimination trials. No significant differences from baseline were found for passive tasks. Tone conditions produced right µ beta suppression following stimulus onset only. For the left µ, significant differences in the magnitude of beta suppression were found for correct speech discrimination trials relative to chance trials following stimulus offset.
Conclusions: Findings are consistent with constructivist, internal model theories proposing that early forward motor models generate predictions about likely phonemic units that are then synthesized with incoming sensory cues during active as opposed to passive processing. Future directions and possible translational value for clinical populations in which sensorimotor integration may play a functional role are discussed.
37
Abstract
Debates about motor theories of speech perception have recently been reignited by a burst of reports implicating premotor cortex (PMC) in speech perception. Often, however, these debates conflate perceptual and decision processes. Evidence that PMC activity correlates with task difficulty and subject performance suggests that PMC might be recruited, in certain cases, to facilitate category judgments about speech sounds (rather than speech perception, which involves decoding of sounds). However, it remains unclear whether PMC does, indeed, exhibit neural selectivity that is relevant for speech decisions. Further, it is unknown whether PMC activity in such cases reflects input via the dorsal or ventral auditory pathway, and whether PMC processing of speech is automatic or task-dependent. In a novel modified categorization paradigm, we presented human subjects with paired speech sounds from a phonetic continuum but diverted their attention from phoneme category using a challenging dichotic listening task. Using fMRI rapid adaptation to probe neural selectivity, we observed acoustic-phonetic selectivity in left anterior and left posterior auditory cortical regions. Conversely, we observed phoneme-category selectivity in left PMC that correlated with explicit phoneme-categorization performance measured after scanning, suggesting that PMC recruitment can account for performance on phoneme-categorization tasks. Structural equation modeling revealed connectivity from posterior, but not anterior, auditory cortex to PMC, suggesting a dorsal route for auditory input to PMC. Our results provide evidence for an account of speech processing in which the dorsal stream mediates automatic sensorimotor integration of speech and may be recruited to support speech decision tasks.
38
van Geemen K, Herbet G, Moritz-Gasser S, Duffau H. Limited plastic potential of the left ventral premotor cortex in speech articulation: evidence from intraoperative awake mapping in glioma patients. Hum Brain Mapp 2013; 35:1587-96. [PMID: 23616288; DOI: 10.1002/hbm.22275]
Abstract
OBJECTIVES: Despite previous lesional and functional neuroimaging studies, the actual role of the left ventral premotor cortex (vPMC), i.e., the lateral part of the precentral gyrus, is still poorly understood. EXPERIMENTAL DESIGN: We report a series of eight patients with a glioma involving the left vPMC who underwent awake surgery with intraoperative cortical and subcortical language mapping. The function of the vPMC, its subcortical connections, and its potential for reorganization are investigated in light of the surgical findings and language outcome after resection. PRINCIPAL OBSERVATIONS: Electrostimulation of both the vPMC and the subcortical white matter tract underneath it, that is, the anterior segment of the lateral part of the superior longitudinal fascicle (SLF), induced speech production disturbances with anarthria in all cases. Moreover, although some degree of redistribution of the vPMC was found in four patients, allowing its partial resection with no permanent speech disorders, this area was nonetheless still detected more medially in the precentral gyrus in all eight patients, despite its invasion by the glioma. In addition, a direct connection of the vPMC with the SLF was preserved in all cases. CONCLUSIONS: Our data suggest that the vPMC plays a crucial role in the speech production network and that its plastic potential is limited. We propose that this limitation is due to an anatomical constraint, namely the necessity for the left vPMC to remain connected to the lateral SLF. Beyond fundamental implications, such knowledge may have clinical applications, especially in surgery for tumors involving this cortico-subcortical circuit.
Affiliation(s)
- Kim van Geemen
- Department of Neurosurgery, Gui de Chauliac Hospital, Montpellier University Medical Centre, Montpellier, France

39
Tremblay P, Dick AS, Small SL. Functional and structural aging of the speech sensorimotor neural system: functional magnetic resonance imaging evidence. Neurobiol Aging 2013; 34:1935-51. [PMID: 23523270; DOI: 10.1016/j.neurobiolaging.2013.02.004]
Abstract
The ability to perceive and produce speech undergoes important changes in late adulthood. The goal of the present study was to characterize functional and structural age-related differences in the cortical network that support speech perception and production, using magnetic resonance imaging, as well as the relationship between functional and structural age-related changes occurring in this network. We asked young and older adults to observe videos of a speaker producing single words (perception), and to observe and repeat the words produced (production). Results show a widespread bilateral network of brain activation for Perception and Production that was not correlated with age. In addition, several regions did show age-related change (auditory cortex, planum temporale, superior temporal sulcus, premotor cortices, SMA-proper). Examination of the relationship between brain signal and regional and global gray matter volume and cortical thickness revealed a complex set of relationships between structure and function, with some regions showing a relationship between structure and function and some not. The present results provide novel findings about the neurobiology of aging and verbal communication.
Affiliation(s)
- Pascale Tremblay
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec, Department of Rehabilitation, Université Laval, Québec City, Québec, Canada.