1
Zhang W, Jiang M, Teo KAC, Bhuvanakantham R, Fong L, Sim WKJ, Guo Z, Foo CHV, Chua RHJ, Padmanabhan P, Leong V, Lu J, Gulyás B, Guan C. Revealing the spatiotemporal brain dynamics of covert speech compared with overt speech: A simultaneous EEG-fMRI study. Neuroimage 2024; 293:120629. PMID: 38697588. DOI: 10.1016/j.neuroimage.2024.120629.
Abstract
Covert speech (CS) refers to speaking internally to oneself without producing any sound or movement. CS is involved in multiple cognitive functions and disorders. Reconstructing CS content with a brain-computer interface (BCI) is also an emerging technique. However, it is still controversial whether CS is a truncated neural process of overt speech (OS) or involves independent patterns. Here, we performed a word-speaking experiment with simultaneous EEG-fMRI. It involved 32 participants, who generated words both overtly and covertly. By integrating spatial constraints from fMRI into EEG source localization, we precisely estimated the spatiotemporal dynamics of neural activity. During CS, EEG source activity was localized in three regions: the left precentral gyrus, the left supplementary motor area, and the left putamen. Although OS involved more brain regions with stronger activations, CS was characterized by an earlier event-locked activation in the left putamen (peak at 262 ms versus 1170 ms). The left putamen was also identified as the only hub node within the functional connectivity (FC) networks of both OS and CS, while showing weaker FC strength towards speech-related regions in the dominant hemisphere during CS. Path analysis revealed significant multivariate associations, indicating an indirect association between the earlier activation in the left putamen and CS, which was mediated by reduced FC towards speech-related regions. These findings reveal the specific spatiotemporal dynamics of CS, offering insights into CS mechanisms that are potentially relevant to the future treatment of self-regulation deficits and speech disorders, and to the development of BCI speech applications.
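The path-analysis logic in this abstract (an effect of early putamen activation on CS that is mediated by reduced functional connectivity) follows the standard product-of-coefficients approach to mediation. The following is a generic sketch on synthetic data, not the authors' model; all variable names, noise levels, and effect sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
latency = rng.normal(size=n)                        # hypothetical predictor (activation timing)
fc = 0.5 * latency + 0.5 * rng.normal(size=n)       # mediator: FC toward speech-related regions
outcome = 0.8 * fc + 0.1 * rng.normal(size=n)       # outcome: effect is purely mediated here

# Path a: mediator regressed on predictor
a = np.polyfit(latency, fc, 1)[0]

# Paths b (mediator) and c' (direct effect), from outcome ~ predictor + mediator
X = np.column_stack([np.ones(n), latency, fc])
_, c_direct, b = np.linalg.lstsq(X, outcome, rcond=None)[0]

indirect = a * b   # indirect (mediated) effect; by construction ~0.5 * 0.8 = 0.4
```

In a fully mediated model like this one, the direct path c' estimates to roughly zero while the product a*b carries the association, which is the pattern the abstract describes.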
Affiliation(s)
- Wei Zhang
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Muyun Jiang
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Kok Ann Colin Teo
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; Division of Neurosurgery, National University Health System, Singapore
- Raghavan Bhuvanakantham
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- LaiGuan Fong
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore
- Wei Khang Jeremy Sim
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
- Zhiwei Guo
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Parasuraman Padmanabhan
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Victoria Leong
- Division of Psychology, Nanyang Technological University, Singapore; Department of Pediatrics, University of Cambridge, United Kingdom
- Jia Lu
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; DSO National Laboratories, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Balázs Gulyás
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Cuntai Guan
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
2
Anastasopoulou I, Cheyne DO, van Lieshout P, Johnson BW. Decoding kinematic information from beta-band motor rhythms of speech motor cortex: a methodological/analytic approach using concurrent speech movement tracking and magnetoencephalography. Front Hum Neurosci 2024; 18:1305058. PMID: 38646159. PMCID: PMC11027130. DOI: 10.3389/fnhum.2024.1305058.
Abstract
Introduction: Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods: Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances /ipa/ and /api/, produced at normal and faster rates. Results: The results show that (1) speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8-12 Hz) and beta-band (13-30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern analytic techniques. Discussion: These results show that MASK can derive subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-registration with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system.
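The mu (8-12 Hz) and beta (13-30 Hz) rhythms in result (2) are conventionally isolated by band-pass filtering the recorded signal. A minimal illustration using a naive brick-wall FFT filter; this is a generic sketch, not the authors' pipeline, and the sampling rate and test signal are hypothetical.

```python
import numpy as np

def bandpass_fft(sig, fs, lo_hz, hi_hz):
    """Naive brick-wall band-pass: zero all FFT bins outside [lo_hz, hi_hz]."""
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
    spec = np.fft.rfft(sig)
    spec[(freqs < lo_hz) | (freqs > hi_hz)] = 0.0
    return np.fft.irfft(spec, n=len(sig))

fs = 250                                     # hypothetical sampling rate, Hz
t = np.arange(2 * fs) / fs                   # 2 s of signal
sig = np.sin(2 * np.pi * 20 * t) + np.sin(2 * np.pi * 3 * t)  # 20 Hz "beta" + 3 Hz drift
beta = bandpass_fft(sig, fs, 13, 30)         # recovers the 20 Hz component
mu = bandpass_fft(sig, fs, 8, 12)            # mu band: (near-)empty for this signal
```

Real MEG analyses would use a proper FIR/IIR filter with controlled roll-off rather than a brick-wall mask, but the band definitions carry over unchanged.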
Affiliation(s)
- Douglas Owen Cheyne
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Pascal van Lieshout
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
3
Sarmet M, Santos DB, Mangilli LD, Million JL, Maldaner V, Zeredo JL. Chronic respiratory failure negatively affects speech function in patients with bulbar and spinal onset amyotrophic lateral sclerosis: retrospective data from a tertiary referral center. Logoped Phoniatr Vocol 2024; 49:17-26. PMID: 35767076. DOI: 10.1080/14015439.2022.2092209.
Abstract
Background: Although dysarthria and respiratory failure are widely described in the literature as part of the natural history of amyotrophic lateral sclerosis (ALS), the specific interaction between them has been little explored. Aim: To investigate the relationship between chronic respiratory failure and the speech of ALS patients. Materials and methods: In this cross-sectional retrospective study we reviewed the medical records of all patients diagnosed with ALS who were followed at a tertiary referral center. To determine the presence and degree of speech impairment, the speech sub-scale of the Amyotrophic Lateral Sclerosis Functional Rating Scale-Revised (ALSFRS-R) was used. Respiratory function was assessed through spirometry and through venous blood gasometry obtained from a morning peripheral venous sample. To determine whether differences among groups classified by speech function were significant, maximum and mean spirometry values of participants were compared using multivariate analysis of variance (MANOVA) with Tukey's post hoc test. Results: Seventy-five cases were selected, of which 73.3% presented speech impairment and 70.7% respiratory impairment. Respiratory and speech functions were moderately correlated (seated FVC r = 0.64; supine FVC r = 0.60; seated FEV1 r = 0.59; supine FEV1 r = 0.54; p < .001). Multivariable logistic regression revealed that the following variables were significantly associated with the presence of speech impairment after adjusting for other risk factors: seated FVC (odds ratio [OR] = 0.862) and seated FEV1 (OR = 1.106). The final model was 81.1% predictive of speech impairment. The presence of daytime hypercapnia was not correlated with increasing speech impairment. Conclusion: The restrictive pattern developed by ALS patients negatively influences speech function. Speech is a complex and multifactorial process, and lung volume plays a pivotal role in its function. Lung volumes showed a significant correlation with speech function, especially in patients with bulbar onset and respiratory impairment. Neurobiological and physiological aspects of this relationship should be explored in further studies of the ALS population.
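The reported odds ratios translate directly into logistic-regression coefficients via the natural logarithm. A small worked example using the OR for seated FVC quoted in the abstract; the conversion step is generic arithmetic, not a reanalysis of the study data.

```python
import math

or_fvc = 0.862                  # reported OR for seated FVC
beta_fvc = math.log(or_fvc)     # implied logistic coefficient, ~ -0.149

# Each one-unit increase in seated FVC multiplies the odds of speech
# impairment by 0.862, i.e. reduces them by ~13.8%.
pct_change = (or_fvc - 1.0) * 100
```

An OR below 1 (seated FVC) is protective, while the OR above 1 for seated FEV1 (1.106) raises the odds by ~10.6% per unit once the other predictors are held fixed.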
Affiliation(s)
- Max Sarmet
- Graduate Department of Health Science and Technology, University of Brasília (UnB), Brasília, Brazil
- Hospital de Apoio de Brasília (HAB), Tertiary Referral Center of Neuromuscular Diseases, Brasília, Brazil
- Dante Brasil Santos
- Hospital de Apoio de Brasília (HAB), Tertiary Referral Center of Neuromuscular Diseases, Brasília, Brazil
- UniEvangélica, Graduate Program of Human Movement and Rehabilitation, Anápolis, Brazil
- Janae Lyon Million
- Department of Human Biology, University of California Santa Cruz, Santa Cruz, CA, United States of America
- Vinicius Maldaner
- Hospital de Apoio de Brasília (HAB), Tertiary Referral Center of Neuromuscular Diseases, Brasília, Brazil
- UniEvangélica, Graduate Program of Human Movement and Rehabilitation, Anápolis, Brazil
- Jorge L Zeredo
- Graduate Department of Health Science and Technology, University of Brasília (UnB), Brasília, Brazil
4
Mårup SH, Kleber BA, Møller C, Vuust P. When direction matters: Neural correlates of interlimb coordination of rhythm and beat. Cortex 2024; 172:86-108. PMID: 38241757. DOI: 10.1016/j.cortex.2023.11.019.
Abstract
In a previous experiment, we found evidence for a bodily hierarchy governing interlimb coordination of rhythm and beat, using five effectors: 1) left foot, 2) right foot, 3) left hand, 4) right hand, and 5) voice. The hierarchy implies that, during simultaneous rhythm and beat performance using combinations of two of these effectors, performing the rhythm with an effector that has a higher number than the beat effector is significantly easier than vice versa. To investigate the neural underpinnings of this proposed bodily hierarchy, we here scanned 46 professional musicians using fMRI as they performed a rhythmic pattern with one effector while keeping the beat with another. The conditions combined the voice and the right hand (V + RH), the right hand and the left hand (RH + LH), and the left hand and the right foot (LH + RF). Each effector combination was performed with and against the bodily hierarchy. Going against the bodily hierarchy increased tapping errors significantly and also increased activity in key brain areas functionally associated with top-down sensorimotor control and bottom-up feedback processing, such as the cerebellum and the supplementary motor area (SMA). Conversely, going with the bodily hierarchy engaged areas functionally associated with the default mode network and regions involved in emotion processing. Theories of general brain function that hold prediction as a key principle propose that action and perception are governed by the brain's attempt to minimise prediction error at different levels in the brain. Following this viewpoint, our results indicate that going against the hierarchy induces stronger prediction errors, while going with the hierarchy allows for a higher degree of automatization. Our results also support the notion of a bodily hierarchy in motor control that prioritizes certain conductive and supportive tapping roles in specific effector combinations.
Affiliation(s)
- Signe H Mårup
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
- Boris A Kleber
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
- Cecilie Møller
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Universitetsbyen 3, Aarhus C, Denmark
5
Vitória MA, Fernandes FG, van den Boom M, Ramsey N, Raemaekers M. Decoding Single and Paired Phonemes Using 7T Functional MRI. Brain Topogr 2024. PMID: 38261272. DOI: 10.1007/s10548-024-01034-6.
Abstract
Several studies have shown that mouth movements related to the pronunciation of individual phonemes are represented in the sensorimotor cortex. This would theoretically allow for brain-computer interfaces (BCIs) capable of decoding continuous speech by training classifiers on the activity in the sensorimotor cortex related to the production of individual phonemes. To address this, we investigated the decodability of trials with individual and paired phonemes (pronounced consecutively with a one-second interval) using activity in the sensorimotor cortex. Fifteen participants pronounced 3 different phonemes and 3 combinations of two of these phonemes in a 7T functional MRI experiment. We confirmed that support vector machine (SVM) classification of single and paired phonemes was possible. Importantly, by combining classifiers trained on single phonemes, we were able to classify paired phonemes with an accuracy of 53% (33% chance level), demonstrating that activity of isolated phonemes is present and distinguishable in combined phonemes. An SVM searchlight analysis showed that the phoneme representations are widely distributed in the ventral sensorimotor cortex. These findings provide insights into the neural representations of single and paired phonemes. Furthermore, they support the notion that speech BCIs may be feasible based on machine learning algorithms trained on individual phonemes using intracranial electrode grids.
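The key combination step, classifying pairs with classifiers trained only on single phonemes, can be illustrated with a toy nearest-centroid model in which a paired trial is approximated as the sum of the two single-phoneme activity patterns. Everything here (dimensions, noise levels, the additive assumption, and the centroid classifier standing in for the study's SVMs) is hypothetical and far simpler than the actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 50                                    # hypothetical voxel count in a sensorimotor ROI
protos = rng.normal(size=(3, d))          # hypothetical single-phoneme activity patterns

# "Train" on noisy single-phoneme trials: a nearest-centroid classifier
centroids = np.stack([
    (protos[k] + rng.normal(scale=0.3, size=(20, d))).mean(0) for k in range(3)
])

# Simulate paired trials as the sum of the two constituent patterns plus noise
pairs = [(0, 1), (1, 2), (0, 2)]          # the 3 pair classes
trials, labels = [], []
for li, (i, j) in enumerate(pairs):
    for _ in range(20):
        trials.append(protos[i] + protos[j] + rng.normal(scale=0.3, size=d))
        labels.append(li)

# Combine single-phoneme centroids additively to score each pair class
pair_templates = np.stack([centroids[i] + centroids[j] for i, j in pairs])
pred = [int(np.argmin(((t - pair_templates) ** 2).sum(1))) for t in trials]
acc = float(np.mean(np.array(pred) == np.array(labels)))
```

If single-phoneme activity were not preserved in the combined trials, the additively built templates would classify pairs at chance (1/3); above-chance accuracy is what licenses the abstract's conclusion.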
Affiliation(s)
- Maria Araújo Vitória
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Francisco Guerreiro Fernandes
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Max van den Boom
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
- Nick Ramsey
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Mathijs Raemaekers
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
6
Barbieri E, Lukic S, Rogalski E, Weintraub S, Mesulam MM, Thompson CK. Neural mechanisms of sentence production: a volumetric study of primary progressive aphasia. Cereb Cortex 2024; 34:bhad470. PMID: 38100360. PMCID: PMC10793577. DOI: 10.1093/cercor/bhad470.
Abstract
Studies on the neural bases of sentence production have yielded mixed results, partly due to differences in tasks and participant types. In this study, 101 individuals with primary progressive aphasia (PPA) were evaluated using a test that required spoken production following an auditory prime (Northwestern Assessment of Verbs and Sentences-Sentence Production Priming Test, NAVS-SPPT), and one that required building a sentence by ordering word cards (Northwestern Anagram Test, NAT). Voxel-based morphometry revealed that gray matter (GM) volume in the left inferior/middle frontal gyri (L IFG/MFG) was associated with sentence production accuracy on both tasks, more so for complex sentences, whereas GM volume in left posterior temporal regions was exclusively associated with NAVS-SPPT performance and predicted by performance on a Digit Span Forward (DSF) task. Verb retrieval deficits partly mediated the relationship between L IFG/MFG and performance on the NAVS-SPPT. These findings underscore the importance of L IFG/MFG for sentence production and suggest that this relationship is partly accounted for by verb retrieval deficits, but not phonological loop integrity. In contrast, it is possible that the posterior temporal cortex is associated with auditory short-term memory ability, to the extent that DSF performance is a valid measure of this in aphasia.
Affiliation(s)
- Elena Barbieri
- Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Department of Neurology, Northwestern University, 300 E Superior Street, Chicago, IL 60611, United States
- Sladjana Lukic
- Department of Communication Sciences and Disorders, Adelphi University, 158 Cambridge Avenue, Garden City, NY 11530, United States
- Emily Rogalski
- Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Department of Neurology, Northwestern University, 300 E Superior Street, Chicago, IL 60611, United States
- Sandra Weintraub
- Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Department of Neurology, Northwestern University, 300 E Superior Street, Chicago, IL 60611, United States
- Department of Psychiatry and Behavioral Sciences, Northwestern University, 676 N Saint Clair Street, Chicago, IL 60611, United States
- Marek-Marsel Mesulam
- Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Department of Neurology, Northwestern University, 300 E Superior Street, Chicago, IL 60611, United States
- Department of Neurology, Northwestern University, 300 E Superior Street, Chicago, IL 60611, United States
- Cynthia K Thompson
- Mesulam Center for Cognitive Neurology and Alzheimer’s Disease, Department of Neurology, Northwestern University, 300 E Superior Street, Chicago, IL 60611, United States
- Department of Neurology, Northwestern University, 300 E Superior Street, Chicago, IL 60611, United States
- Department of Communication Sciences and Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, United States
7
Ding H, Hamel AP, Karjadi C, Ang TFA, Lu S, Thomas RJ, Au R, Lin H. Association Between Acoustic Features and Brain Volumes: the Framingham Heart Study. Frontiers in Dementia 2023; 2:1214940. PMID: 38911669. PMCID: PMC11192548. DOI: 10.3389/frdem.2023.1214940.
Abstract
Introduction: Although brain magnetic resonance imaging (MRI) is a valuable tool for investigating structural changes in the brain associated with neurodegeneration, the development of non-invasive and cost-effective alternative methods for detecting early cognitive impairment is crucial. The human voice has been increasingly used as an indicator for effectively detecting cognitive disorders, but it remains unclear whether acoustic features are associated with structural neuroimaging. Methods: This study aimed to investigate the association between acoustic features and brain volume and to compare the predictive power of each for mild cognitive impairment (MCI) in a large community-based population. The study included participants from the Framingham Heart Study (FHS) who had at least one voice recording and an MRI scan. Sixty-five acoustic features were extracted with the OpenSMILE software (v2.1.3) from each voice recording. Nine MRI measures were derived according to the FHS MRI protocol. We examined the associations between acoustic features and MRI measures using linear regression models adjusted for age, sex, and education. Acoustic composite scores were generated by combining the acoustic features significantly associated with MRI measures. The MCI prediction ability of acoustic composite scores and MRI measures was compared by building random forest models and calculating the mean area under the receiver operating characteristic curve (AUC) over 10-fold cross-validation. Results: The study included 4,293 participants (age 57 ± 13 years, 53.9% women). During 9.3 ± 3.7 years of follow-up, 106 participants were diagnosed with MCI. Seven MRI measures were significantly associated with more than 20 acoustic features after adjusting for multiple testing. The acoustic composite scores improved the AUC for MCI prediction to 0.794, compared with 0.759 achieved by MRI measures. Discussion: We found that multiple acoustic features were associated with MRI measures, suggesting the potential of acoustic features as easily accessible digital biomarkers for the early diagnosis of MCI.
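The AUC comparison rests on the standard Mann-Whitney interpretation: the AUC is the probability that a randomly chosen case scores higher than a randomly chosen control. A self-contained sketch of that computation in pure Python; it is not the study's random-forest pipeline, and the scores and labels below are made up.

```python
def auc(scores, labels):
    """Mann-Whitney AUC: P(case score > control score), ties counted as 0.5."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

# Toy composite scores for 3 hypothetical MCI cases (label 1) and 2 controls (label 0)
scores = [0.9, 0.8, 0.2, 0.4, 0.3]
labels = [1, 1, 1, 0, 0]
result = auc(scores, labels)   # 4 of the 6 case-control pairs are ordered correctly
```

Under this reading, the reported improvement from 0.759 to 0.794 means the acoustic composite orders a random case above a random control about 3.5 percentage points more often than the MRI measures do.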
Affiliation(s)
- Huitong Ding
- Department of Anatomy and Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- The Framingham Heart Study, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Alexander P Hamel
- Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
- Cody Karjadi
- Department of Anatomy and Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- The Framingham Heart Study, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Ting F. A. Ang
- Department of Anatomy and Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- The Framingham Heart Study, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Slone Epidemiology Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Sophia Lu
- Slone Epidemiology Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Robert J. Thomas
- Department of Medicine, Division of Pulmonary, Critical Care & Sleep Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Rhoda Au
- Department of Anatomy and Neurobiology, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- The Framingham Heart Study, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Department of Epidemiology, Boston University School of Public Health, Boston, MA, USA
- Slone Epidemiology Center, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Departments of Neurology and Medicine, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Honghuang Lin
- The Framingham Heart Study, Boston University Chobanian & Avedisian School of Medicine, Boston, MA, USA
- Department of Medicine, University of Massachusetts Chan Medical School, Worcester, MA, USA
8
Kurteff GL, Lester-Smith RA, Martinez A, Currens N, Holder J, Villarreal C, Mercado VR, Truong C, Huber C, Pokharel P, Hamilton LS. Speaker-induced Suppression in EEG during a Naturalistic Reading and Listening Task. J Cogn Neurosci 2023; 35:1538-1556. PMID: 37584593. DOI: 10.1162/jocn_a_02037.
Abstract
Speaking elicits a suppressed neural response when compared with listening to others' speech, a phenomenon known as speaker-induced suppression (SIS). Previous research has focused on investigating SIS at constrained levels of linguistic representation, such as the individual phoneme and word level. Here, we present scalp EEG data from a dual speech perception and production task where participants read sentences aloud, then listened to playback of themselves reading those sentences. Playback was separated into immediate repetition of the previous trial and randomized repetition of a former trial to investigate whether forward modeling of responses during passive listening suppresses the neural response. Concurrent EMG was recorded to control for movement artifact during speech production. In line with previous research, ERP analyses at the sentence level demonstrated suppression of early auditory components of the EEG for production compared with perception. To evaluate whether linguistic abstractions (in the form of phonological feature tuning) are suppressed during speech production alongside lower-level acoustic information, we fit linear encoding models that predicted scalp EEG based on phonological features, EMG activity, and task condition. We found that phonological features were encoded similarly between production and perception. However, this similarity was only observed when controlling for movement by using the EMG response as an additional regressor. Our results suggest that SIS operates at a sensory representational level and is dissociated from higher-order cognitive and linguistic processing that takes place during speech perception and production. We also detail some important considerations when analyzing EEG during continuous speech production.
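The encoding-model step, predicting scalp EEG from phonological features plus an EMG regressor, reduces in its simplest form to regularized linear regression. A minimal ridge sketch on synthetic data; the regressor count, penalty, and dimensions are hypothetical, and real encoding models typically use time-lagged feature matrices rather than instantaneous ones.

```python
import numpy as np

rng = np.random.default_rng(2)
n_samples, n_feats = 2000, 8             # e.g. 7 phonological features + 1 EMG channel
X = rng.normal(size=(n_samples, n_feats))
w_true = rng.normal(size=n_feats)
eeg = X @ w_true + rng.normal(scale=0.5, size=n_samples)   # one simulated EEG channel

lam = 1.0                                # ridge penalty (hypothetical)
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_feats), X.T @ eeg)

pred = X @ w_hat
r = np.corrcoef(pred, eeg)[0, 1]         # encoding-model fit
```

Including EMG as a regressor, as the study does, lets the model absorb movement-related variance so that the phonological-feature weights are not contaminated by articulation artifact.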
9
Nallaluthan V, Tan GY, Murni MF, Saleh U, Abdul Halim S, Idris Z, Ghani ARI, Abdullah JM. Pain as a Guide in Glasgow Coma Scale Status for Neurological Assessment. Malays J Med Sci 2023; 30:221-235. PMID: 37928790. PMCID: PMC10624430. DOI: 10.21315/mjms2023.30.5.18.
Abstract
Assessing neurological status is essential, and often challenging, for neurosurgical residents and neurosurgeons when determining surgical management. Pain, as a component of the Glasgow Coma Scale (GCS), can be used as an assessment tool, especially in unconscious or comatose patients. Eliciting an adequate noxious stimulus requires a certain pressure-pain threshold, whether a central or peripheral technique is used. The scientific rationale behind each technique needs to be well understood to aid localisation of the deficit in the neurological system. This paper briefly reviews the use of pain as a neurological guide in GCS status assessment.
Affiliation(s)
- Vasu Nallaluthan
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Department of Neurosurgery, Hospital Sungai Buloh, Selangor, Malaysia
- Guan Yan Tan
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Department of Neurosurgery, Hospital Tengku Ampuan Afzan, Pahang, Malaysia
- Mohamed Fuad Murni
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Department of Neurosurgery, Hospital Sultanah Aminah, Johor, Malaysia
- Umaira Saleh
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Department of Neurosurgery, Hospital Pulau Pinang, Pulau Pinang, Malaysia
- Sanihah Abdul Halim
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Neurology Unit, Department of Internal Medicine, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Brain and Behaviour Cluster, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Zamzuri Idris
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Brain and Behaviour Cluster, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Abdul Rahman Izaini Ghani
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Brain and Behaviour Cluster, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Jafri Malin Abdullah
- Department of Neurosciences, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
- Hospital Universiti Sains Malaysia, Universiti Sains Malaysia, Kelantan, Malaysia
- Brain and Behaviour Cluster, School of Medical Sciences, Universiti Sains Malaysia, Kelantan, Malaysia
10
Herrera C, Whittle N, Leek MR, Brodbeck C, Lee G, Barcenas C, Barnes S, Holshouser B, Yi A, Venezia JH. Cortical networks for recognition of speech with simultaneous talkers. Hear Res 2023; 437:108856. PMID: 37531847. DOI: 10.1016/j.heares.2023.108856.
Abstract
The relative contributions of superior temporal vs. inferior frontal and parietal networks to recognition of speech in a background of competing speech remain unclear, although the contributions themselves are well established. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. Specifically, 31 listeners completed two versions of a three-alternative forced choice competing speech task: "Unison" and "Competing", in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering (i.e., acoustic distortion) was applied to the two-talker mixtures and ST-MTF models were generated to predict brain activation from differences in spectrotemporal-modulation distortion on each trial. Three cortical networks were identified based on differential patterns of ST-MTF predictions and the resultant ST-MTF weights across conditions (Unison, Competing): a bilateral superior temporal (S-T) network, a frontoparietal (F-P) network, and a network distributed across cortical midline regions and the angular gyrus (M-AG). The S-T network and the M-AG network responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, but the S-T network responded to a greater range of temporal modulations suggesting a more acoustically driven response. The F-P network responded to the absence of intelligibility-related cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. Task performance was best predicted by activation in the S-T and F-P networks, but in opposite directions (S-T: more activation = better performance; F-P: vice versa). 
Moreover, S-T network predictions were entirely ST-MTF mediated while F-P network predictions were ST-MTF mediated only in the Unison condition, suggesting an influence from non-acoustic sources (e.g., informational masking) in the Competing condition. Activation in the M-AG network was weakly positively correlated with performance and this relation was entirely superseded by those in the S-T and F-P networks. Regarding contributions to speech recognition, we conclude: (a) superior temporal regions play a bottom-up, perceptual role that is not qualitatively dependent on the presence of competing speech; (b) frontoparietal regions play a top-down role that is modulated by competing speech and scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with ancillary contributions from networks not involved in speech processing per se (e.g., the M-AG network).
Affiliation(s)
- Nicole Whittle
- VA Loma Linda Healthcare System, Loma Linda, CA, United States
- Marjorie R Leek
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Grace Lee
- Loma Linda University, Loma Linda, CA, United States
- Samuel Barnes
- Loma Linda University, Loma Linda, CA, United States
- Alex Yi
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
11
Littlejohn M, Maas E. How to cut the pie is no piece of cake: Toward a process-oriented approach to assessment and diagnosis of speech sound disorders. Int J Lang Commun Disord 2023. [PMID: 37483105 DOI: 10.1111/1460-6984.12934] [Received: 03/24/2023] [Accepted: 06/29/2023] [Indexed: 07/25/2023]
Abstract
BACKGROUND 'Speech sound disorder' is an umbrella term that encompasses dysarthria, articulation disorders, childhood apraxia of speech and phonological disorders. However, differential diagnosis between these disorders is a persistent challenge in speech pathology, as many diagnostic procedures use symptom clusters instead of identifying an origin of breakdown in the speech and language system. AIMS This article reviews typical and disordered speech through the lens of two well-developed models of production-one focused on phonological encoding and one focused on speech motor planning. We illustrate potential breakdown locations within these models that may relate to childhood apraxia of speech and phonological disorders. MAIN CONTRIBUTION This paper presents an overview of an approach to conceptualisation of speech sound disorders that is grounded in current models of speech production and emphasises consideration of underlying processes. The paper also sketches a research agenda for the development of valid, reliable and clinically feasible assessment protocols for children with speech sound disorders. CONCLUSION The process-oriented approach outlined here is in the early stages of development but holds promise for developing a more detailed and comprehensive understanding of, and assessment protocols for speech sound disorders that go beyond broad diagnostic labels based on error analysis. Directions for future research are discussed. WHAT THIS PAPER ADDS What is already known on the subject Speech sound disorders (SSD) are heterogeneous, and there is agreement that some children have a phonological impairment (phonological disorders, PD) whereas others have an impairment of speech motor planning (childhood apraxia of speech, CAS). There is also recognition that speech production involves multiple processes, and several approaches to the assessment and diagnosis of SSD have been proposed. 
What this paper adds to existing knowledge This paper provides a more detailed conceptualisation of potential impairments in children with SSD that is grounded in current models of speech production and encourages greater consideration of underlying processes. The paper illustrates this approach and provides guidance for further development. One consequence of this perspective is the notion that broad diagnostic category labels (PD, CAS) may each comprise different subtypes or profiles depending on the processes that are affected. What are the potential or actual clinical implications of this work? Although the approach is in the early stages of development and no comprehensive validated set of tasks and measures is available to assess all processes, clinicians may find the conceptualisation of different underlying processes and the notion of potential subtypes within PD and CAS informative when evaluating SSD. In addition, this perspective discourages either/or thinking (PD or CAS) and instead encourages consideration of the possibility that children may have different combinations of impairments at different processing stages.
Affiliation(s)
- Meghan Littlejohn
- Department of Communication Sciences and Disorders, Temple University, Philadelphia, Pennsylvania, USA
- Edwin Maas
- Department of Communication Sciences and Disorders, Temple University, Philadelphia, Pennsylvania, USA
12
Cuadros J, Z-Rivera L, Castro C, Whitaker G, Otero M, Weinstein A, Martínez-Montes E, Prado P, Zañartu M. DIVA Meets EEG: Model Validation Using Formant-Shift Reflex. Appl Sci (Basel) 2023; 13:7512. [PMID: 38435340 PMCID: PMC10906992 DOI: 10.3390/app13137512] [Indexed: 03/05/2024]
Abstract
The neurocomputational model 'Directions into Velocities of Articulators' (DIVA) was developed to account for various aspects of normal and disordered speech production and acquisition. The neural substrates of DIVA were established through functional magnetic resonance imaging (fMRI), providing physiological validation of the model. This study introduces DIVA_EEG, an extension of DIVA that utilizes electroencephalography (EEG) to leverage the high temporal resolution and broad availability of EEG over fMRI. For the development of DIVA_EEG, EEG-like signals were derived from original equations describing the activity of the different DIVA maps. Synthetic EEG associated with the utterance of syllables was generated when both unperturbed and perturbed auditory feedback (first formant perturbations) were simulated. The cortical activation maps derived from synthetic EEG closely resembled those of the original DIVA model. To validate DIVA_EEG, the EEG of individuals with typical voices (N = 30) was acquired during an altered auditory feedback paradigm. The resulting empirical brain activity maps significantly overlapped with those predicted by DIVA_EEG. In conjunction with other recent model extensions, DIVA_EEG lays the foundations for constructing a complete neurocomputational framework to tackle vocal and speech disorders, which can guide model-driven personalized interventions.
Affiliation(s)
- Jhosmary Cuadros
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Grupo de Bioingeniería, Decanato de Investigación, Universidad Nacional Experimental del Táchira, San Cristóbal 5001, Venezuela
- Lucía Z-Rivera
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Christian Castro
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Grace Whitaker
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Mónica Otero
- Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, Santiago 8420524, Chile
- Centro Basal Ciencia & Vida, Universidad San Sebastián, Santiago 8580000, Chile
- Alejandro Weinstein
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Pavel Prado
- Escuela de Fonoaudiología, Facultad de Odontología y Ciencias de la Rehabilitación, Universidad San Sebastián, Santiago 7510602, Chile
- Matías Zañartu
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
13
Marangolo P, Vasta S, Manfredini A, Caltagirone C. What Else Can Be Done by the Spinal Cord? A Review on the Effectiveness of Transpinal Direct Current Stimulation (tsDCS) in Stroke Recovery. Int J Mol Sci 2023; 24:10173. [PMID: 37373323 DOI: 10.3390/ijms241210173] [Received: 05/10/2023] [Revised: 06/08/2023] [Accepted: 06/13/2023] [Indexed: 06/29/2023]
Abstract
Since the spinal cord has traditionally been considered a bundle of long fibers connecting the brain to all parts of the body, the study of its role has long been limited to peripheral sensory and motor control. However, in recent years, new studies have challenged this view, pointing to the spinal cord's involvement not only in the acquisition and maintenance of new motor skills but also in the modulation of motor and cognitive functions dependent on cortical motor regions. Indeed, several reports to date, which have combined neurophysiological techniques with transpinal direct current stimulation (tsDCS), have shown that tsDCS is effective in promoting local and cortical neuroplasticity changes in animals and humans through the activation of ascending corticospinal pathways that modulate the sensorimotor cortical networks. The aim of this paper is first to report the most prominent tsDCS studies on neuroplasticity and its influence at the cortical level. Then, a comprehensive review of tsDCS literature on motor improvement in animals and healthy subjects and on motor and cognitive recovery in post-stroke populations is presented. We believe that these findings might have an important impact in the future, making tsDCS a potentially suitable adjunctive approach for post-stroke recovery.
Affiliation(s)
- Paola Marangolo
- Department of Humanities Studies, University Federico II, 80133 Naples, Italy
- Simona Vasta
- Department of Psychology, Sapienza University of Rome, 00185 Rome, Italy
- Alessio Manfredini
- Department of Humanities Studies, University Federico II, 80133 Naples, Italy
14
Terband H, van Brenk F. Modeling Responses to Auditory Feedback Perturbations in Adults, Children, and Children With Complex Speech Sound Disorders: Evidence for Impaired Auditory Self-Monitoring? J Speech Lang Hear Res 2023; 66:1563-1587. [PMID: 37071803 DOI: 10.1044/2023_jslhr-22-00379] [Indexed: 05/03/2023]
Abstract
PURPOSE Previous studies have found that typically developing (TD) children were able to compensate and adapt to auditory feedback perturbations to a similar or larger degree compared to young adults, while children with speech sound disorder (SSD) were found to produce predominantly following responses. However, large individual differences lie underneath the group-level results. This study investigates possible mechanisms in responses to formant shifts by modeling parameters of feedback and feedforward control of speech production based on behavioral data. METHOD SimpleDIVA was used to model an existing dataset of compensation/adaptation behavior to auditory feedback perturbations collected from three groups of Dutch speakers: 50 young adults, twenty-three 4- to 8-year-old children with TD speech, and seven 4- to 8-year-old children with SSD. Between-groups and individual within-group differences in model outcome measures representing auditory and somatosensory feedback control gain and feedforward learning rate were assessed. RESULTS Notable between-groups and within-group variation was found for all outcome measures. Data modeled for individual speakers yielded model fits with varying reliability. Auditory feedback control gain was negative in children with SSD and positive in both other groups. Somatosensory feedback control gain was negative for both groups of children and marginally negative for adults. Feedforward learning rate measures were highest in the children with TD speech followed by children with SSD, compared to adults. CONCLUSIONS The SimpleDIVA model was able to account for responses to the perturbation of auditory feedback other than corrective, as negative auditory feedback control gains were associated with following responses to vowel shifts. These preliminary findings are suggestive of impaired auditory self-monitoring in children with complex SSD. Possible mechanisms underlying the nature of following responses are discussed.
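The three fitted parameters described in this abstract (auditory feedback control gain, somatosensory feedback control gain, and feedforward learning rate) can be pictured with a toy trial-by-trial loop. The sketch below is an illustrative simplification, not the published SimpleDIVA implementation; the function name, update rule, and default parameter values are assumptions made for demonstration only.

```python
def simulate_adaptation(shift, n_trials=40, gain_aud=0.3, gain_som=0.1, rate_ff=0.2):
    """Toy sketch of feedback/feedforward interplay under a constant
    auditory perturbation of size `shift` (illustrative, not SimpleDIVA).
    gain_aud: auditory feedback control gain
    gain_som: somatosensory feedback control gain
    rate_ff:  feedforward learning rate
    Returns the produced deviation from baseline on each trial."""
    ff = 0.0                               # feedforward command deviation
    produced = []
    for _ in range(n_trials):
        aud_err = -(ff + shift)            # heard output vs. intended target
        som_err = -ff                      # felt output vs. intended target
        correction = gain_aud * aud_err + gain_som * som_err
        produced.append(ff + correction)   # within-trial feedback correction
        ff += rate_ff * correction         # slow feedforward update across trials
    return produced

resp = simulate_adaptation(shift=1.0)
```

With positive gains, production drifts opposite the shift and settles near a partial compensation of gain_aud / (gain_aud + gain_som) of the perturbation; a negative auditory gain instead makes the initial response follow the shift, the pattern reported above for the children with SSD.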
Affiliation(s)
- Hayo Terband
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Frits van Brenk
- Faculty of Humanities, Department of Languages, Literature and Communication & Institute for Language Sciences, Utrecht University, the Netherlands
- Department of Communicative Disorders and Sciences, University at Buffalo, NY
15
Teghipco A, Okada K, Murphy E, Hickok G. Predictive Coding and Internal Error Correction in Speech Production. Neurobiol Lang (Camb) 2023; 4:81-119. [PMID: 37229143 PMCID: PMC10205072 DOI: 10.1162/nol_a_00088] [Received: 03/16/2022] [Accepted: 11/02/2022] [Indexed: 05/27/2023]
Abstract
Speech production involves the careful orchestration of sophisticated systems, yet overt speech errors rarely occur under naturalistic conditions. The present functional magnetic resonance imaging study sought neural evidence for internal error detection and correction by leveraging a tongue twister paradigm that induces the potential for speech errors while excluding any overt errors from analysis. Previous work using the same paradigm in the context of silently articulated and imagined speech production tasks has demonstrated forward predictive signals in auditory cortex during speech and presented suggestive evidence of internal error correction in left posterior middle temporal gyrus (pMTG) on the basis that this area tended toward showing a stronger response when potential speech errors are biased toward nonwords compared to words (Okada et al., 2018). The present study built on this prior work by attempting to replicate the forward prediction and lexicality effects in nearly twice as many participants but introduced novel stimuli designed to further tax internal error correction and detection mechanisms by biasing speech errors toward taboo words. The forward prediction effect was replicated. While no evidence was found for a significant difference in brain response as a function of lexical status of the potential speech error, biasing potential errors toward taboo words elicited significantly greater response in left pMTG than biasing errors toward (neutral) words. Other brain areas showed preferential response for taboo words as well but responded below baseline and were less likely to reflect language processing as indicated by a decoding analysis, implicating left pMTG in internal error correction.
Affiliation(s)
- Alex Teghipco
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
- Kayoko Okada
- Department of Psychology, Loyola Marymount University, Los Angeles, CA, USA
- Emma Murphy
- Department of Psychology, Loyola Marymount University, Los Angeles, CA, USA
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, CA, USA
16
Yap SM, Davenport L, Cogley C, Craddock F, Kennedy A, Gaughan M, Kearney H, Tubridy N, De Looze C, O'Keeffe F, Reilly RB, McGuigan C. Word finding, prosody and social cognition in multiple sclerosis. J Neuropsychol 2023; 17:32-62. [PMID: 35822290 DOI: 10.1111/jnp.12285] [Received: 11/23/2021] [Accepted: 03/29/2022] [Indexed: 10/17/2022]
Abstract
BACKGROUND Impairments in speech and social cognition have been reported in people with multiple sclerosis (pwMS), although their relationships with neuropsychological outcomes and their clinical utility in MS are unclear. OBJECTIVES To evaluate word finding, prosody and social cognition in pwMS relative to healthy controls (HC). METHODS We recruited people with relapsing MS (RMS, n = 21), progressive MS (PMS, n = 24) and HC (n = 25) from an outpatient MS clinic. Participants completed a battery of word-finding, social cognitive, neuropsychological and clinical assessments and performed a speech task for prosodic analysis. RESULTS Of 45 pwMS, mean (SD) age was 49.4 (9.4) years, and median (range) Expanded Disability Severity Scale score was 3.5 (1.0-6.5). Compared with HC, pwMS were older and had slower information processing speed (measured with the Symbol Digit Modalities Test, SDMT) and higher depression scores. Most speech and social cognitive measures were associated with information processing speed but not with depression. Unlike speech, social cognition consistently correlated with intelligence and memory. Visual naming test mean response time (VNT-MRT) demonstrated worse outcomes in MS versus HC (p = .034, Nagelkerke's R2 = 65.0%), and in PMS versus RMS (p = .009, Nagelkerke's R2 = 50.2%). Rapid automatised object naming demonstrated worse outcomes in MS versus HC (p = .014, Nagelkerke's R2 = 49.1%). These word-finding measures showed larger effect sizes than that of the SDMT (MS vs. HC, p = .010, Nagelkerke's R2 = 40.6%; PMS vs. RMS, p = .023, Nagelkerke's R2 = 43.5%). Prosody and social cognition did not differ between MS and HC. CONCLUSIONS Word finding, prosody and social cognition in MS are associated with information processing speed and largely independent of mood. 
Impairment in visual object meaning perception is potentially a unique MS disease-related deficit that could be further explored and cautiously considered as an adjunct disability metric for MS.
Affiliation(s)
- Siew Mei Yap
- Department of Neurology, St. Vincent's University Hospital, Dublin 4, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
- Laura Davenport
- Neuropsychology Service, Department of Psychology, St. Vincent's University Hospital, Dublin 4, Ireland
- Clodagh Cogley
- Neuropsychology Service, Department of Psychology, St. Vincent's University Hospital, Dublin 4, Ireland; School of Psychology, University College Dublin, Dublin, Ireland
- Fiona Craddock
- Neuropsychology Service, Department of Psychology, St. Vincent's University Hospital, Dublin 4, Ireland
- Alex Kennedy
- Trinity Centre for Biomedical Engineering, Trinity College, The University of Dublin, Dublin 2, Ireland
- Maria Gaughan
- Department of Neurology, St. Vincent's University Hospital, Dublin 4, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
- Hugh Kearney
- Department of Neurology, St. Vincent's University Hospital, Dublin 4, Ireland
- Niall Tubridy
- Department of Neurology, St. Vincent's University Hospital, Dublin 4, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
- Céline De Looze
- Trinity Centre for Biomedical Engineering, Trinity College, The University of Dublin, Dublin 2, Ireland
- Fiadhnait O'Keeffe
- Neuropsychology Service, Department of Psychology, St. Vincent's University Hospital, Dublin 4, Ireland; School of Psychology, University College Dublin, Dublin, Ireland
- Richard B Reilly
- Trinity Centre for Biomedical Engineering, Trinity College, The University of Dublin, Dublin 2, Ireland; School of Medicine, Trinity College, The University of Dublin, Dublin 2, Ireland; School of Engineering, Trinity College, The University of Dublin, Dublin 2, Ireland
- Christopher McGuigan
- Department of Neurology, St. Vincent's University Hospital, Dublin 4, Ireland; School of Medicine, University College Dublin, Dublin, Ireland
17
Phoneme Representation and Articulatory Impairment: Insights from Adults with Comorbid Motor Coordination Disorder and Dyslexia. Brain Sci 2023; 13:210. [PMID: 36831753 PMCID: PMC9954044 DOI: 10.3390/brainsci13020210] [Received: 12/16/2022] [Revised: 01/19/2023] [Accepted: 01/25/2023] [Indexed: 01/28/2023]
Abstract
Phonemic processing skills are impaired both in children and adults with dyslexia. Since phoneme representation development is based on articulatory gestures, it is likely that these gestures influence oral reading-related skills as assessed through phonemic awareness tasks. In our study, fifty-two young dyslexic adults, with and without motor impairment, and fifty-nine skilled readers performed reading, phonemic awareness, and articulatory tasks. The two dyslexic groups exhibited slower articulatory rates than skilled readers, with the comorbid dyslexic group presenting an additional difficulty in respiratory control (reduced speech proportion and increased pause duration). Two versions of the phoneme awareness task (PAT) with pseudoword strings were administered: a classical version under time pressure and a delayed version in which access to phonemic representations and articulatory programs was facilitated. The two groups with dyslexia were outperformed by the control group in both versions. Although the two groups with dyslexia performed equally well on the classical PAT, the comorbid group performed significantly less efficiently on the delayed PAT, suggesting an additional contribution of articulatory impairment in the task for this group. Overall, our results suggest that impaired phoneme representations in dyslexia may be explained, at least partially, by articulatory deficits affecting access to them.
18
Volfart A, McMahon KL, Howard D, de Zubicaray GI. Neural Correlates of Naturally Occurring Speech Errors during Picture Naming in Healthy Participants. J Cogn Neurosci 2022; 35:111-127. [PMID: 36306259 DOI: 10.1162/jocn_a_01927] [Indexed: 11/04/2022]
Abstract
Most of our knowledge about the neuroanatomy of speech errors comes from lesion-symptom mapping studies in people with aphasia and laboratory paradigms designed to elicit primarily phonological errors in healthy adults, with comparatively little evidence from naturally occurring speech errors. In this study, we analyzed perfusion fMRI data from 24 healthy participants during a picture naming task, classifying their responses into correct and different speech error types (e.g., semantic, phonological, omission errors). Total speech errors engaged a wide set of left-lateralized frontal, parietal, and temporal regions that were almost identical to those involved during the production of correct responses. We observed significant perfusion signal decreases in the left posterior middle temporal gyrus and inferior parietal lobule (angular gyrus) for semantic errors compared to correct trials matched on various psycholinguistic variables. In addition, the left dorsal caudate nucleus showed a significant perfusion signal decrease for omission (i.e., anomic) errors compared with matched correct trials. Surprisingly, we did not observe any significant perfusion signal changes in brain regions proposed to be associated with monitoring mechanisms during speech production (e.g., ACC, superior temporal gyrus). Overall, our findings provide evidence for distinct neural correlates of semantic and omission error types, with anomic speech errors likely resulting from failures to initiate articulatory-motor processes rather than semantic knowledge impairments as often reported for people with aphasia.
Affiliation(s)
- Katie L McMahon
- Queensland University of Technology; Royal Brisbane & Women's Hospital
19
Kearney E, Nieto-Castañón A, Falsini R, Daliri A, Heller Murray ES, Smith DJ, Guenther FH. Quantitatively characterizing reflexive responses to pitch perturbations. Front Hum Neurosci 2022; 16:929687. [PMID: 36405080 PMCID: PMC9666385 DOI: 10.3389/fnhum.2022.929687] [Received: 04/27/2022] [Accepted: 10/04/2022] [Indexed: 11/06/2022]
Abstract
Background Reflexive pitch perturbation experiments are commonly used to investigate the neural mechanisms underlying vocal motor control. In these experiments, the fundamental frequency–the acoustic correlate of pitch–of a speech signal is shifted unexpectedly and played back to the speaker via headphones in near real-time. In response to the shift, speakers increase or decrease their fundamental frequency in the direction opposing the shift so that their perceived pitch is closer to what they intended. The goal of the current work is to develop a quantitative model of responses to reflexive perturbations that can be interpreted in terms of the physiological mechanisms underlying the response and that captures both group-mean data and individual subject responses. Methods A model framework was established that allowed the specification of several models based on Proportional-Integral-Derivative and State-Space/Directions Into Velocities of Articulators (DIVA) model classes. The performance of 19 models was compared in fitting experimental data from two published studies. The models were evaluated in terms of their ability to capture both population-level responses and individual differences in sensorimotor control processes. Results A three-parameter DIVA model performed best when fitting group-mean data from both studies; this model is equivalent to a single-rate state-space model and a first-order low pass filter model. The same model also provided stable estimates of parameters across samples from individual subject data and performed among the best models to differentiate between subjects. The three parameters correspond to gains in the auditory feedback controller’s response to a perceived error, the delay of this response, and the gain of the somatosensory feedback controller’s “resistance” to this correction. 
Excellent fits were also obtained from a four-parameter model with an additional auditory velocity error term; this model was better able to capture multi-component reflexive responses seen in some individual subjects. Conclusion Our results demonstrate the stereotyped nature of an individual’s responses to pitch perturbations. Further, we identified a model that captures population responses to pitch perturbations and characterizes individual differences in a stable manner with parameters that relate to underlying motor control capabilities. Future work will evaluate the model in characterizing responses from individuals with communication disorders.
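The abstract's best-fitting three-parameter model is described as equivalent to a delayed first-order low-pass filter. A minimal sketch of what such a response curve looks like is below; the parameter values are invented for illustration rather than fits from the study, and the mapping is loose: in the paper's parameterization, the steady-state magnitude reflects the balance between the auditory correction gain and the somatosensory "resistance" rather than a single free gain.

```python
import math

def pitch_response(t, gain=0.6, delay=0.15, tau=0.12):
    """Opposing f0 response (in units of the applied shift) at time t
    seconds after an unexpected upward pitch shift: flat until the
    response delay, then a first-order (low-pass) rise toward -gain."""
    if t < delay:
        return 0.0
    return -gain * (1.0 - math.exp(-(t - delay) / tau))

# Response sampled over the first second after a unit upward shift
trace = [pitch_response(0.01 * k) for k in range(101)]
```

Here `gain` sets how much of the shift is ultimately opposed, `delay` the reflex latency, and `tau` how quickly the correction builds.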
Affiliation(s)
- Elaine Kearney
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States
- Correspondence: Elaine Kearney
- Alfonso Nieto-Castañón
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States
- The McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, United States
- Riccardo Falsini
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States
- Ayoub Daliri
- College of Health Solutions, Arizona State University, Tempe, AZ, United States
- Dante J. Smith
- Graduate Program for Neuroscience, Boston University, Boston, MA, United States
- Frank H. Guenther
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA, United States
- Department of Biomedical Engineering, Boston University, Boston, MA, United States
- The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, United States
20
Ivkovic N, Martinovic D, Kozina S, Lupi-Ferandin S, Tokic D, Usljebrka M, Kumric M, Bozic J. Quality of Life and Aesthetic Satisfaction in Patients Who Underwent the “Commando Operation” with Pectoralis Major Myocutaneus Flap Reconstruction—A Case Series Study. Healthcare (Basel) 2022; 10:1737. [PMID: 36141349 PMCID: PMC9498799 DOI: 10.3390/healthcare10091737] [Received: 07/08/2022] [Revised: 09/01/2022] [Accepted: 09/09/2022] [Indexed: 11/16/2022]
Abstract
The “commando operation” is an extensive surgical procedure used to treat patients with oral squamous carcinoma and metastasis in the cervical lymph nodes. While the procedure can be curative, it is also very mutilating, which consequently has a major impact on the patient’s quality of life. Several studies showed that the procedure is associated with loss of certain functions, such as impairments in speech, chewing, swallowing, and loss of taste and appetite. Furthermore, some of these impairments and their degree depend on the reconstruction method. However, the data regarding the functional impairments and aesthetic results in patients who underwent the “commando operation” along with the pectoralis major myocutaneus flap reconstruction are still inconclusive. This study included 34 patients who underwent partial glossectomy, ipsilateral modified radical neck dissection, pectoralis major myocutaneus flap reconstruction, and adjuvant radiotherapy. A structured questionnaire was used to evaluate aesthetic results and functional impairments as well as to grade the level of satisfaction with the functional and aesthetic outcomes both by the patients and by the operator. Most of the patients stated that their speech (N = 33; 97%) and salivation (N = 32; 94.2%) severely changed after the operation and that they cannot chew (N = 33; 97%) and swallow (N = 33; 97%) the same as before the operation. Moreover, almost half of the patients (N = 16; 47%) reported that they have severe sleep impairments. However, only a few of the included patients stated that they sought professional help regarding the speech (N = 4; 11.7%), eating (N = 5; 14.7%), and sleeping (N = 4; 11.7%) disturbances. Additionally, there was a statistically significant difference between the operator and the patients in the subjective assessment of the aesthetic results (p = 0.047), as operators gave significantly better grades.
Our results imply that this procedure and reconstructive method possibly cause impairments that have an impact on the patients’ wellbeing. Moreover, our outcomes also suggest that patients should be educated and rehabilitated after the “commando operation” since most of them were reluctant to seek professional help regarding their impairments. Lastly, sleep deficiency, which was observed after the procedure, should be further explored.
Affiliation(s)
- Natalija Ivkovic
- Department of Otorhinolaryngology, University Hospital of Split, 21000 Split, Croatia
- Sleep Medicine Center, University of Split School of Medicine, 21000 Split, Croatia
- Dinko Martinovic
- Department of Maxillofacial Surgery, University Hospital of Split, 21000 Split, Croatia
- Slavica Kozina
- Department of Psychological Medicine, University of Split School of Medicine, 21000 Split, Croatia
- Slaven Lupi-Ferandin
- Department of Maxillofacial Surgery, University Hospital of Split, 21000 Split, Croatia
- Daria Tokic
- Department of Anesthesiology and Intensive Care, University Hospital of Split, 21000 Split, Croatia
- Mislav Usljebrka
- Department of Maxillofacial Surgery, University Hospital of Split, 21000 Split, Croatia
- Marko Kumric
- Department of Pathophysiology, University of Split School of Medicine, 21000 Split, Croatia
- Josko Bozic
- Department of Pathophysiology, University of Split School of Medicine, 21000 Split, Croatia
- Correspondence: ; Tel.: +385-21-557-871

21
Zhou Y, Zhao Z, Zhang J, Hameed NUF, Zhu F, Feng R, Zhang X, Lu J, Wu J. Electrical stimulation-induced speech-related negative motor responses in the lateral frontal cortex. J Neurosurg 2022; 137:496-504. [PMID: 34952509 DOI: 10.3171/2021.9.jns211069] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 09/30/2021] [Indexed: 11/06/2022]
Abstract
OBJECTIVE Speech arrest is a common but crucial negative motor response (NMR) recorded during intraoperative brain mapping. However, recent studies have reported nonspeech-specific NMR sites in the ventral precentral gyrus (vPrCG), where stimulation halts both speech and ongoing hand movement. The aim of this study was to investigate the spatial relationship between speech-specific NMR sites and nonspeech-specific NMR sites in the lateral frontal cortex. METHODS In this prospective cohort study, an intraoperative mapping strategy was designed to identify positive motor response (PMR) sites and NMR sites in 33 consecutive patients undergoing awake craniotomy for the treatment of left-sided gliomas. Patients were asked to count, flex their hands, and simultaneously perform these two tasks to map NMRs. Each site was plotted onto a standard atlas and further analyzed. The speech and hand motor arrest sites in the supplementary motor area of 2 patients were resected. The 1- and 3-month postoperative language and motor functions of all patients were assessed. RESULTS A total of 91 PMR sites and 72 NMR sites were identified. NMR and PMR sites were anteroinferiorly and posterosuperiorly distributed in the precentral gyrus, respectively. Three distinct NMR sites were identified: 24 pure speech arrest (speech-specific NMR) sites (33.33%), 7 pure hand motor arrest sites (9.72%), and 41 speech and hand motor arrest (nonspeech-specific NMR) sites (56.94%). Nonspeech-specific NMR sites and speech-specific NMR sites were dorsoventrally distributed in the vPrCG. For language function, 1 of 2 patients in the negative motor area (NMA) resection group had language dysfunction at the 1-month follow-up but had recovered by the 3-month follow-up. All patients in the NMA resection group had fine motor dysfunction at the 1- and 3-month follow-ups.
CONCLUSIONS The study results demonstrated a functional segmentation of speech-related NMRs in the lateral frontal cortex and that most of the stimulation-induced speech arrest sites are not specific to speech. A better understanding of the spatial distribution of speech-related NMR sites will be helpful in surgical planning and intraoperative mapping and provide in-depth insight into the motor control of speech production.
Affiliation(s)
- Yuyao Zhou
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Brain Function Laboratory, Neurosurgical Institute of Fudan University
- Zehao Zhao
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Brain Function Laboratory, Neurosurgical Institute of Fudan University
- Jie Zhang
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Brain Function Laboratory, Neurosurgical Institute of Fudan University
- N U Farrukh Hameed
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Brain Function Laboratory, Neurosurgical Institute of Fudan University
- Fengping Zhu
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Rui Feng
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Xiaoluo Zhang
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Junfeng Lu
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Brain Function Laboratory, Neurosurgical Institute of Fudan University
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China
- Jinsong Wu
- Neurologic Surgery Department, Huashan Hospital, Fudan University
- Brain Function Laboratory, Neurosurgical Institute of Fudan University
- Shanghai Key Laboratory of Brain Function Restoration and Neural Regeneration, Shanghai, China

22
Garnett EO, Chow HM, Limb S, Liu Y, Chang SE. Neural activity during solo and choral reading: A functional magnetic resonance imaging study of overt continuous speech production in adults who stutter. Front Hum Neurosci 2022; 16:894676. [PMID: 35937674 PMCID: PMC9353050 DOI: 10.3389/fnhum.2022.894676] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2022] [Accepted: 06/27/2022] [Indexed: 01/22/2023] Open
Abstract
Previous neuroimaging investigations of overt speech production in adults who stutter (AWS) found increased motor and decreased auditory activity compared to controls. Activity in the auditory cortex is heightened, however, under fluency-inducing conditions in which AWS temporarily become fluent while synchronizing their speech with an external rhythm, such as a metronome or another speaker. These findings suggest that stuttering is associated with disrupted auditory motor integration. Technical challenges in acquiring neuroimaging data during continuous overt speech production have limited experimental paradigms to short or covert speech tasks. Such paradigms are not ideal, as stuttering primarily occurs during longer speaking tasks. To address this gap, we used a validated spatial ICA technique designed to address speech movement artifacts during functional magnetic resonance imaging (fMRI) scanning. We compared brain activity and functional connectivity of the left auditory cortex during continuous speech production in two conditions: solo (stutter-prone) and choral (fluency-inducing) reading tasks. Overall, brain activity differences in AWS relative to controls in the two conditions were similar, showing expected patterns of hyperactivity in premotor/motor regions but underactivity in auditory regions. Functional connectivity of the left auditory cortex (STG) showed that within the AWS group there was increased correlated activity with the right insula and inferior frontal area during choral speech. The AWS also exhibited heightened connectivity between left STG and key regions of the default mode network (DMN) during solo speech. These findings indicate possible interference by the DMN during natural, stuttering-prone speech in AWS, and that enhanced coordination between auditory and motor regions may support fluent speech.
Affiliation(s)
- Emily O. Garnett
- Michigan Medicine, Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States
- *Correspondence: Emily O. Garnett
- Ho Ming Chow
- Michigan Medicine, Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States
- Department of Communication Sciences and Disorders, University of Delaware, Newark, DE, United States
- Sarah Limb
- Michigan Medicine, Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States
- Yanni Liu
- Michigan Medicine, Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States
- Soo-Eun Chang
- Michigan Medicine, Department of Psychiatry, University of Michigan, Ann Arbor, MI, United States

23
Chenausky KV, Tager-Flusberg H. The importance of deep speech phenotyping for neurodevelopmental and genetic disorders: a conceptual review. J Neurodev Disord 2022; 14:36. [PMID: 35690736 PMCID: PMC9188130 DOI: 10.1186/s11689-022-09443-z] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/30/2021] [Accepted: 05/06/2022] [Indexed: 01/08/2023] Open
Abstract
Background Speech is the most common modality through which language is communicated, and delayed, disordered, or absent speech production is a hallmark of many neurodevelopmental and genetic disorders. Yet, speech is not often carefully phenotyped in neurodevelopmental disorders. In this paper, we argue that such deep phenotyping, defined as phenotyping that is specific to speech production and not conflated with language or cognitive ability, is vital if we are to understand how genetic variations affect the brain regions that are associated with spoken language. Speech is distinct from language, though the two are related behaviorally and share neural substrates. We present a brief taxonomy of developmental speech production disorders, with particular emphasis on the motor speech disorders childhood apraxia of speech (a disorder of motor planning) and childhood dysarthria (a set of disorders of motor execution). We review the history of discoveries concerning the KE family, in whom a hereditary form of communication impairment was identified as childhood apraxia of speech and linked to dysfunction in the FOXP2 gene. The story demonstrates how instrumental deep phenotyping of speech production was in this seminal discovery in the genetics of speech and language. There is considerable overlap between the neural substrates associated with speech production and with FOXP2 expression, suggesting that further genes associated with speech dysfunction will also be expressed in similar brain regions. We then show how a biologically accurate computational model of speech production, in combination with detailed information about speech production in children with developmental disorders, can generate testable hypotheses about the nature, genetics, and neurology of speech disorders. 
Conclusions Though speech and language are distinct, specific types of developmental speech disorder are associated with far-reaching effects on verbal communication in children with neurodevelopmental disorders. Therefore, detailed speech phenotyping, in collaboration with experts on pediatric speech development and disorders, can lead us to a new generation of discoveries about how speech development is affected in genetic disorders.
Affiliation(s)
- Karen V Chenausky
- Speech in Autism and Neurodevelopmental Disorders Lab, Massachusetts General Hospital Institute of Health Professions, 36 1st Avenue, Boston, MA, 02129, USA
- Department of Neurology, Harvard Medical School, Boston, USA
- Department of Psychological and Brain Sciences, Boston University, Boston, USA

24
Gulyaev SA. Neurophysiological Solution of the Inverse Problem of EEG Research at Rest and under Conditions of Auditory-Speech Load. J EVOL BIOCHEM PHYS+ 2022. [DOI: 10.1134/s0022093022020259] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
25
Kröger BJ, Bekolay T, Cao M. On the Emergence of Phonological Knowledge and on Motor Planning and Motor Programming in a Developmental Model of Speech Production. Front Hum Neurosci 2022; 16:844529. [PMID: 35634209 PMCID: PMC9133537 DOI: 10.3389/fnhum.2022.844529] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Accepted: 04/12/2022] [Indexed: 11/13/2022] Open
Abstract
A broad sketch for a model of speech production is outlined which describes developmental aspects of its cognitive-linguistic and sensorimotor components. A description of the emergence of phonological knowledge is a central point in our model sketch. It will be shown that the phonological form level emerges during speech acquisition and becomes an important representation at the interface between cognitive-linguistic and sensorimotor processes. Motor planning as well as motor programming are defined as separate processes in our model sketch and it will be shown that both processes draw on the phonological information. Two computational simulation experiments based on quantitative implementations (simulation models) are undertaken to show proof of principle of key ideas of the model sketch: (i) the emergence of phonological information over developmental stages, (ii) the adaptation process for generating new motor programs, and (iii) the importance of various forms of phonological representation in that process. Based on the ideas developed within our sketch of a production model and its quantitative spell-out within the simulation models, motor planning can be defined here as the process of identifying a succession of executable chunks from a currently activated phoneme sequence and of coding them as raw gesture scores. Motor programming can be defined as the process of building up the complete set of motor commands by specifying all gestures in detail (fully specified gesture score including temporal relations). This full specification of gesture scores is achieved in our model by adapting motor information from phonologically similar syllables (adapting approach) or by assembling motor programs from sub-syllabic units (assembling approach).
Affiliation(s)
- Bernd J. Kröger
- Department of Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, Aachen, Germany
- *Correspondence: Bernd J. Kröger
- Mengxue Cao
- School of Chinese Language and Literature, Beijing Normal University, Beijing, China

26
Beyond the Language Module: Musicality as a Stepping Stone Towards Language Acquisition. EVOLUTIONARY PSYCHOLOGY 2022. [DOI: 10.1007/978-3-030-76000-7_12] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
27
Treutler M, Sörös P. Functional MRI of Native and Non-native Speech Sound Production in Sequential German-English Bilinguals. Front Hum Neurosci 2021; 15:683277. [PMID: 34349632 PMCID: PMC8326338 DOI: 10.3389/fnhum.2021.683277] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/20/2021] [Accepted: 06/22/2021] [Indexed: 11/13/2022] Open
Abstract
Bilingualism and multilingualism are highly prevalent. Non-invasive brain imaging has been used to study the neural correlates of native and non-native speech and language production, mainly on the lexical and syntactic level. Here, we acquired continuous fast event-related fMRI during visually cued overt production of exclusively German and English vowels and syllables. We analyzed data from 13 university students, native speakers of German and sequential English bilinguals. The production of non-native English sounds was associated with increased activity of the left primary sensorimotor cortex, bilateral cerebellar hemispheres (lobule VI), left inferior frontal gyrus, and left anterior insula compared to native German sounds. The contrast German > English sounds was not statistically significant. Our results emphasize that the production of non-native speech requires additional neural resources already on a basic phonological level in sequential bilinguals.
Affiliation(s)
- Miriam Treutler
- European Medical School Oldenburg-Groningen, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Peter Sörös
- Department of Neurology, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany

28
Wiltshire CEE, Chiew M, Chesters J, Healy MP, Watkins KE. Speech Movement Variability in People Who Stutter: A Vocal Tract Magnetic Resonance Imaging Study. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:2438-2452. [PMID: 34157239 PMCID: PMC8323486 DOI: 10.1044/2021_jslhr-20-00507] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/27/2020] [Revised: 01/29/2021] [Accepted: 03/01/2021] [Indexed: 06/01/2023]
Abstract
Purpose People who stutter (PWS) have more unstable speech motor systems than people who are typically fluent (PWTF). Here, we used real-time magnetic resonance imaging (MRI) of the vocal tract to assess variability and duration of movements of different articulators in PWS and PWTF during fluent speech production. Method The vocal tracts of 28 adults with moderate to severe stuttering and 20 PWTF were scanned using MRI while repeating simple and complex pseudowords. Midsagittal images of the vocal tract from lips to larynx were reconstructed at 33.3 frames per second. For each participant, we measured the variability and duration of movements across multiple repetitions of the pseudowords in three selected articulators: the lips, tongue body, and velum. Results PWS showed significantly greater speech movement variability than PWTF during fluent repetitions of pseudowords. The group difference was most evident for measurements of lip aperture using these stimuli, as reported previously, but here, we report that movements of the tongue body and velum were also affected during the same utterances. Variability was not affected by phonological complexity. Speech movement variability was unrelated to stuttering severity within the PWS group. PWS also showed longer speech movement durations relative to PWTF for fluent repetitions of multisyllabic pseudowords, and this group difference was even more evident as complexity increased. Conclusions Using real-time MRI of the vocal tract, we found that PWS produced more variable movements than PWTF even during fluent productions of simple pseudowords. PWS also took longer to produce multisyllabic words relative to PWTF, particularly when words were more complex. This indicates general, trait-level differences in the control of the articulators between PWS and PWTF. Supplemental Material https://doi.org/10.23641/asha.14782092.
Affiliation(s)
- Charlotte E. E. Wiltshire
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom
- Mark Chiew
- Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, United Kingdom
- Jennifer Chesters
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom
- Máiréad P. Healy
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom
- Kate E. Watkins
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, Radcliffe Observatory Quarter, University of Oxford, United Kingdom

29
Johnson JF, Belyk M, Schwartze M, Pinheiro AP, Kotz SA. Expectancy changes the self-monitoring of voice identity. Eur J Neurosci 2021; 53:2681-2695. [PMID: 33638190 PMCID: PMC8252045 DOI: 10.1111/ejn.15162] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 01/18/2021] [Accepted: 02/20/2021] [Indexed: 12/02/2022]
Abstract
Self-voice attribution can become difficult when voice characteristics are ambiguous, but functional magnetic resonance imaging (fMRI) investigations of such ambiguity are sparse. We utilized voice-morphing (self-other) to manipulate (un-)certainty in self-voice attribution in a button-press paradigm. This allowed investigating how levels of self-voice certainty alter brain activation in brain regions monitoring voice identity and unexpected changes in voice playback quality. fMRI results confirmed a self-voice suppression effect in the right anterior superior temporal gyrus (aSTG) when self-voice attribution was unambiguous. Although the right inferior frontal gyrus (IFG) was more active during a self-generated compared to a passively heard voice, the putative role of this region in detecting unexpected self-voice changes during the action was demonstrated only when hearing the voice of another speaker and not when attribution was uncertain. Further research on the link between right aSTG and IFG is required and may establish a threshold for monitoring voice identity in action. The current results have implications for a better understanding of the altered experience of self-voice feedback in auditory verbal hallucinations.
Affiliation(s)
- Joseph F Johnson
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Michel Belyk
- Division of Psychology and Language Sciences, University College London, London, UK
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Department of Neuropsychology, Max Planck Institute for Human and Cognitive Sciences, Leipzig, Germany

30
Pisano F, Caltagirone C, Incoccia C, Marangolo P. Spinal or cortical direct current stimulation: Which is the best? Evidence from apraxia of speech in post-stroke aphasia. Behav Brain Res 2020; 399:113019. [PMID: 33207242 DOI: 10.1016/j.bbr.2020.113019] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2019] [Revised: 09/04/2020] [Accepted: 11/11/2020] [Indexed: 10/23/2022]
Abstract
To date, new advances in technology have already shown the effectiveness of non-invasive brain stimulation and, in particular, of transcranial direct current stimulation (tDCS), in enhancing language recovery in post-stroke aphasia. More recently, it has been suggested that stimulation over the spinal cord improves the production of words associated with sensorimotor schemata, such as action verbs. Here, for the first time, we present evidence that transpinal direct current stimulation (tsDCS) combined with language training is efficacious for recovery from speech apraxia, a motor speech disorder which might co-occur with aphasia. In a randomized double-blind experiment, ten individuals with aphasia underwent five days of tsDCS with concomitant treatment for their articulatory deficits in two different conditions: anodal and sham. In all patients, language measures were collected before (T0), at the end (T5), and one week after the end of treatment (F/U). Results showed that only after anodal tsDCS did patients exhibit better accuracy in repeating the treated items. Moreover, these effects persisted at F/U and generalized to other oral language tasks (i.e., picture description, noun and verb naming, word repetition, and reading). A further analysis, which compared the tsDCS results with those collected in a matched group of patients who underwent the same language treatment combined with tDCS, revealed no differences between the two groups. Given the persistence and severity of articulatory deficits in aphasia and the ease of use of tsDCS, we believe that spinal stimulation might prove an innovative new approach for language rehabilitation.
Affiliation(s)
- Francesca Pisano
- Department of Humanities Studies, University Federico II, Naples, Italy
- Paola Marangolo
- Department of Humanities Studies, University Federico II, Naples, Italy
- IRCCS Santa Lucia Foundation, Rome, Italy

31
Kröger BJ, Stille CM, Blouw P, Bekolay T, Stewart TC. Hierarchical Sequencing and Feedforward and Feedback Control Mechanisms in Speech Production: A Preliminary Approach for Modeling Normal and Disordered Speech. Front Comput Neurosci 2020; 14:573554. [PMID: 33262697 PMCID: PMC7686541 DOI: 10.3389/fncom.2020.573554] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Accepted: 10/05/2020] [Indexed: 12/02/2022] Open
Abstract
Our understanding of the neurofunctional mechanisms of speech production and their pathologies is still incomplete. In this paper, a comprehensive model of speech production based on the Neural Engineering Framework (NEF) is presented. This model is able to activate sensorimotor plans based on cognitive-functional processes (i.e., generation of the intention of an utterance, selection of words and syntactic frames, generation of the phonological form and motor plan; feedforward mechanism). Since the generation of different states of the utterance is tied to different levels in the speech production hierarchy, it is shown that different forms of speech errors as well as speech disorders can arise at different levels in the production hierarchy or are linked to different levels and different modules in the speech production model. In addition, the influence of the inner feedback mechanisms on normal as well as on disordered speech is examined in terms of the model. The model uses a small number of core concepts provided by the NEF, and we show that these are sufficient to create this neurobiologically detailed model of the complex process of speech production in a manner that is, we believe, clear, efficient, and understandable.
Affiliation(s)
- Bernd J. Kröger
- Department for Phoniatrics, Pedaudiology and Communication Disorders, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Catharina Marie Stille
- Department for Phoniatrics, Pedaudiology and Communication Disorders, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Peter Blouw
- Applied Brain Research, Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Trevor Bekolay
- Applied Brain Research, Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Terrence C. Stewart
- National Research Council of Canada, University of Waterloo Collaboration Centre, Waterloo, ON, Canada

32
Brain activation during non-habitual speech production: Revisiting the effects of simulated disfluencies in fluent speakers. PLoS One 2020; 15:e0228452. [PMID: 32004353 PMCID: PMC6993970 DOI: 10.1371/journal.pone.0228452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2019] [Accepted: 01/15/2020] [Indexed: 11/19/2022] Open
Abstract
Over the past decades, brain imaging studies in fluently speaking participants have greatly advanced our knowledge of the brain areas involved in speech production. In addition, complementary information has been provided by investigations of brain activation patterns associated with disordered speech. In the present study we specifically aimed to revisit and expand an earlier study by De Nil and colleagues, by investigating the effects of simulating disfluencies on the brain activation patterns of fluent speakers during overt and covert speech production. In contrast to the De Nil et al. study, the current findings show that the production of voluntary, self-generated disfluencies by fluent speakers resulted in increased recruitment and activation of brain areas involved in speech production. These areas show substantial overlap with the neural networks involved in motor sequence learning in general, and learning of speech production, in particular. The implications of these findings for the interpretation of brain imaging studies on disordered and non-habitual speech production are discussed.
33
de Lima Xavier L, Hanekamp S, Simonyan K. Sexual Dimorphism Within Brain Regions Controlling Speech Production. Front Neurosci 2019; 13:795. [PMID: 31417351 PMCID: PMC6682624 DOI: 10.3389/fnins.2019.00795] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2019] [Accepted: 07/16/2019] [Indexed: 11/25/2022] Open
Abstract
Neural processing of speech production has been traditionally attributed to the left hemisphere. However, it remains unclear if there are structural bases for speech functional lateralization and if these may be partially explained by sexual dimorphism of cortical morphology. We used a combination of high-resolution MRI and speech-production functional MRI to examine cortical thickness of brain regions involved in speech control in healthy males and females. We identified greater cortical thickness of the left Heschl's gyrus in females compared to males. Additionally, rightward asymmetry of the supramarginal gyrus and leftward asymmetry of the precentral gyrus were found within both male and female groups. Sexual dimorphism of the Heschl's gyrus may underlie known differences in auditory processing for speech production between males and females, whereas findings of asymmetries within cortical areas involved in speech motor execution and planning may contribute to the hemispheric localization of functional activity and connectivity of these regions within the speech production network. Our findings highlight the importance of consideration of sex as a biological variable in studies on neural correlates of speech control.
Affiliation(s)
- Laura de Lima Xavier: Department of Otolaryngology Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, United States; Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- Sandra Hanekamp: Department of Otolaryngology Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, United States; Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- Kristina Simonyan: Department of Otolaryngology Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, United States; Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
|