1
Vitória MA, Fernandes FG, van den Boom M, Ramsey N, Raemaekers M. Decoding Single and Paired Phonemes Using 7T Functional MRI. Brain Topogr 2024; 37:731-747. PMID: 38261272; PMCID: PMC11393141; DOI: 10.1007/s10548-024-01034-6.
Abstract
Several studies have shown that mouth movements related to the pronunciation of individual phonemes are represented in the sensorimotor cortex. In principle, this would allow brain-computer interfaces (BCIs) to decode continuous speech by training classifiers on sensorimotor cortex activity related to the production of individual phonemes. To address this, we investigated the decodability of trials with individual and paired phonemes (pronounced consecutively with a one-second interval) using activity in the sensorimotor cortex. Fifteen participants pronounced 3 different phonemes and 3 combinations of two of the same phonemes in a 7T functional MRI experiment. We confirmed that support vector machine (SVM) classification of single and paired phonemes was possible. Importantly, by combining classifiers trained on single phonemes, we were able to classify paired phonemes with an accuracy of 53% (33% chance level), demonstrating that the activity of isolated phonemes is present and distinguishable in combined phonemes. An SVM searchlight analysis showed that the phoneme representations are widely distributed across the ventral sensorimotor cortex. These findings provide insights into the neural representations of single and paired phonemes, and support the notion that a speech BCI based on machine learning algorithms trained on individual phonemes may be feasible using intracranial electrode grids.
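The central trick in this abstract, training a classifier on single-phoneme activity patterns and reusing it to label paired-phoneme trials, can be sketched with synthetic data. Everything below (phoneme labels, voxel counts, noise levels, the additive model of paired responses) is illustrative and not the authors' pipeline:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_vox = 50
phonemes = ["pa", "ta", "ka"]  # hypothetical phoneme labels

# hypothetical voxel response templates for each phoneme
templates = {p: rng.normal(size=n_vox) for p in phonemes}

def single_trial(p):
    """Noisy activity pattern for one pronounced phoneme."""
    return templates[p] + rng.normal(scale=0.5, size=n_vox)

# train a linear SVM on single-phoneme trials only
X = np.array([single_trial(p) for p in phonemes for _ in range(40)])
y = np.array([p for p in phonemes for _ in range(40)])
clf = SVC(kernel="linear").fit(X, y)

# paired trials: two consecutive pronunciations of the same phoneme,
# modeled here (simplistically) as overlapping summed responses
def paired_trial(p):
    return single_trial(p) + single_trial(p)

pair_X = np.array([paired_trial(p) for p in phonemes for _ in range(20)])
pair_y = np.array([p for p in phonemes for _ in range(20)])
accuracy = (clf.predict(pair_X) == pair_y).mean()  # chance = 1/3
```

With clean synthetic templates the transfer accuracy is near ceiling; the study's 53% reflects the far noisier BOLD setting.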
Affiliation(s)
- Maria Araújo Vitória
  - Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Francisco Guerreiro Fernandes
  - Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Max van den Boom
  - Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
  - Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
- Nick Ramsey
  - Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Mathijs Raemaekers
  - Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
2
Wyse-Sookoo K, Luo S, Candrea D, Schippers A, Tippett DC, Wester B, Fifer M, Vansteensel MJ, Ramsey NF, Crone NE. Stability of ECoG high gamma signals during speech and implications for a speech BCI system in an individual with ALS: a year-long longitudinal study. J Neural Eng 2024; 21. PMID: 38925110; PMCID: PMC11245360; DOI: 10.1088/1741-2552/ad5c02.
Abstract
Objective. Speech brain-computer interfaces (BCIs) have the potential to augment communication in individuals with impaired speech due to muscle weakness, for example in amyotrophic lateral sclerosis (ALS) and other neurological disorders. However, to achieve long-term, reliable use of a speech BCI, it is essential that speech-related neural signal changes remain stable over long periods of time. Here we study, for the first time, the stability of speech-related electrocorticographic (ECoG) signals recorded from a chronically implanted ECoG BCI over a 12-month period. Approach. ECoG signals were recorded by an ECoG array implanted over the ventral sensorimotor cortex in a clinical trial participant with ALS. Because ECoG-based speech decoding has most often relied on broadband high gamma (HG) signal changes relative to baseline (non-speech) conditions, we studied longitudinal changes of HG band power at baseline and during speech, and compared these with residual high-frequency noise levels at baseline. Stability was further assessed by longitudinal measurements of signal-to-noise ratio, activation ratio, and peak speech-related HG response magnitude (HG response peaks). Lastly, we analyzed the stability of the event-related HG power changes (HG responses) for individual syllables at each electrode. Main Results. We found that speech-related ECoG signal responses were stable over a range of syllables activating different articulators for the first year after implantation. Significance. Together, our results indicate that ECoG can be a stable recording modality for long-term speech BCI systems for those living with severe paralysis. Clinical Trial Information. ClinicalTrials.gov, registration number NCT03567213.
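The quantity tracked throughout this study, broadband high-gamma band power during speech relative to a non-speech baseline, can be computed with a standard band-pass filter. The sketch below uses a synthetic one-channel trace; the sampling rate, band edges (70-170 Hz), and signal amplitudes are illustrative choices, not the paper's exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000  # hypothetical sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

# synthetic ECoG trace: broadband noise everywhere, plus extra
# high-gamma-range power during a "speech" window (second half)
x = rng.normal(scale=1.0, size=t.size)
x = x + 2.0 * np.sin(2 * np.pi * 120 * t) * (t >= 1.0)

# broadband high gamma (70-170 Hz) via a zero-phase Butterworth filter
b, a = butter(4, [70, 170], btype="bandpass", fs=fs)
hg = filtfilt(b, a, x)
power = hg ** 2

baseline = power[t < 1.0].mean()   # non-speech band power
speech = power[t >= 1.0].mean()    # speech-window band power
snr = speech / baseline            # one of the stability metrics tracked
```

Tracking `snr` (and the baseline itself) session by session over a year is, in essence, the longitudinal stability analysis the abstract describes.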
Affiliation(s)
- Kimberley Wyse-Sookoo
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
- Shiyu Luo
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
- Daniel Candrea
  - Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
- Anouck Schippers
  - Department of Neurology and Neurosurgery, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Donna C Tippett
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
  - Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
  - Department of Physical Medicine and Rehabilitation, Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
- Brock Wester
  - Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, United States of America
- Matthew Fifer
  - Research and Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory, Laurel, MD, United States of America
- Mariska J Vansteensel
  - Department of Neurology and Neurosurgery, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Nick F Ramsey
  - Department of Neurology and Neurosurgery, Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands
- Nathan E Crone
  - Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, United States of America
3
Kent RD. The Feel of Speech: Multisystem and Polymodal Somatosensation in Speech Production. J Speech Lang Hear Res 2024; 67:1424-1460. PMID: 38593006; DOI: 10.1044/2024_jslhr-23-00575.
Abstract
PURPOSE The oral structures such as the tongue and lips have remarkable somatosensory capacities, but understanding the roles of somatosensation in speech production requires a more comprehensive knowledge of somatosensation in the speech production system in its entirety, including the respiratory, laryngeal, and supralaryngeal subsystems. This review was conducted to summarize the system-wide somatosensory information available for speech production. METHOD The search was conducted with PubMed/Medline and Google Scholar for articles published until November 2023. Numerous search terms were used in conducting the review, which covered the topics of psychophysics, basic and clinical behavioral research, neuroanatomy, and neuroscience. RESULTS AND CONCLUSIONS The current understanding of speech somatosensation rests primarily on the two pillars of psychophysics and neuroscience. The confluence of polymodal afferent streams supports the development, maintenance, and refinement of speech production. Receptors are both canonical and noncanonical, with the latter occurring especially in the muscles innervated by the facial nerve. Somatosensory representation in the cortex is disproportionately large and provides for sensory interactions. Speech somatosensory function is robust over the lifespan, with possible declines in advanced aging. The understanding of somatosensation in speech disorders is largely disconnected from research and theory on speech production. A speech somatoscape is proposed as the generalized, system-wide sensation of speech production, with implications for speech development, speech motor control, and speech disorders.
4
Guerreiro Fernandes F, Raemaekers M, Freudenburg Z, Ramsey N. Considerations for implanting speech brain computer interfaces based on functional magnetic resonance imaging. J Neural Eng 2024; 21:036005. PMID: 38648782; DOI: 10.1088/1741-2552/ad4178.
Abstract
Objective. Brain-computer interfaces (BCIs) have the potential to reinstate lost communication faculties. Results from speech decoding studies indicate that a usable speech BCI based on activity in the sensorimotor cortex (SMC) can be achieved using subdurally implanted electrodes. However, the optimal characteristics for a successful speech implant are largely unknown. We address this topic in a high-field blood oxygenation level dependent functional magnetic resonance imaging (fMRI) study, by assessing the decodability of spoken words as a function of hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Approach. Twelve subjects conducted a 7T fMRI experiment in which they pronounced 6 different pseudo-words over 6 runs. We divided the SMC by hemisphere, gyrus, sulcal depth, and position along the ventral/dorsal axis. Classification was performed in these SMC areas using a multiclass support vector machine (SVM). Main results. Significant classification was possible from the SMC, but no preference for the left or right hemisphere, nor for the precentral or postcentral gyrus, was detected for optimal word classification. Classification using information from the cortical surface was slightly better than when using information from deep in the central sulcus, and was highest within the ventral 50% of the SMC. Confusion matrices were highly similar across the entire SMC. An SVM-searchlight analysis revealed significant classification in the superior temporal gyrus and left planum temporale in addition to the SMC. Significance. The current results support a unilateral implant using surface electrodes covering the ventral 50% of the SMC. The added value of depth electrodes is unclear. We did not observe evidence for variations in the qualitative nature of information across the SMC. The current results need to be confirmed in paralyzed patients performing attempted speech.
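The SVM-searchlight analysis mentioned here slides a small neighborhood across the cortex and cross-validates a classifier within each neighborhood, so that above-chance accuracy localizes information. A minimal one-dimensional sketch with synthetic data (voxel counts, signal placement, and effect size are all invented for illustration):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_vox = 120, 12              # trials, and a 1-D "strip" of voxels
y = np.repeat(np.arange(6), 20)        # 6 pseudo-words, 20 trials each
X = rng.normal(size=(n_trials, n_vox))
X[:, 3:9] += 2.0 * np.eye(6)[y]        # word information lives in voxels 3-8 only

# searchlight: cross-validated SVM accuracy in a window around each voxel
radius = 1
scores = []
for center in range(n_vox):
    sl = slice(max(0, center - radius), min(n_vox, center + radius + 1))
    acc = cross_val_score(SVC(kernel="linear"), X[:, sl], y, cv=5).mean()
    scores.append(acc)   # chance level = 1/6
```

Voxels whose windows overlap the informative region score above the 1/6 chance level; in the real analysis the window is a 3-D sphere and significance is assessed against a null distribution.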
Affiliation(s)
- F Guerreiro Fernandes
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- M Raemaekers
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- Z Freudenburg
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
- N Ramsey
  - Department of Neurology and Neurosurgery, University Medical Center Utrecht Brain Center, Utrecht University, Utrecht, The Netherlands
5
Cordeau M, Bichoutar I, Meunier D, Loh KK, Michaud I, Coulon O, Auzias G, Belin P. Anatomo-functional correspondence in the voice-selective regions of human prefrontal cortex. Neuroimage 2023; 279:120336. PMID: 37597590; DOI: 10.1016/j.neuroimage.2023.120336.
Abstract
Group-level analyses of functional regions involved in voice perception show evidence of 3 sets of bilateral voice-sensitive activations in the human prefrontal cortex, named the anterior, middle, and posterior Frontal Voice Areas (FVAs). However, their relationship with the underlying sulcal anatomy, which is highly variable in this region, is still unknown. We examined the inter-individual variability of the FVAs in conjunction with the sulcal anatomy. To do so, anatomical and functional MRI scans from 74 subjects were analyzed to generate individual contrast maps of the FVAs and relate them to each subject's manually labeled prefrontal sulci. We report two major results. First, the frontal activations for voice are significantly associated with the sulcal anatomy. Second, this correspondence with the sulcal anatomy at the individual level predicts the functional activations better than coordinates in MNI space. These findings offer new perspectives for the understanding of anatomo-functional correspondences in this complex cortical region. They also highlight the importance of considering individual-specific variations in anatomy.
Affiliation(s)
- Mélina Cordeau
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, Marseille 13005, France
- Ihsane Bichoutar
  - Institute of Neuroscience and Medicine (INM-1), Forschungszentrum Jülich, Jülich, Germany
- David Meunier
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, Marseille 13005, France
- Kep-Kee Loh
  - Montreal Neurological Institute, McGill University, Montreal, Quebec, Canada
  - Department of Psychology, National University of Singapore, Singapore
- Isaure Michaud
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, Marseille 13005, France
- Olivier Coulon
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, Marseille 13005, France
  - Institute of Language Communication and the Brain, ILCB, Aix-en-Provence, France
- Guillaume Auzias
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, Marseille 13005, France
- Pascal Belin
  - Institut de Neurosciences de la Timone, Aix Marseille Université, UMR 7289 CNRS, Marseille 13005, France
  - Psychology Department, Montreal University, C.P. 6128, succ. Centre-ville, Montreal, Quebec H3C 3J7, Canada
  - Institute of Language Communication and the Brain, ILCB, Aix-en-Provence, France
6
Thomas TM, Singh A, Bullock LP, Liang D, Morse CW, Scherschligt X, Seymour JP, Tandon N. Decoding articulatory and phonetic components of naturalistic continuous speech from the distributed language network. J Neural Eng 2023; 20:046030. PMID: 37487487; DOI: 10.1088/1741-2552/ace9fb.
Abstract
Objective. Speech production relies on a widely distributed brain network. However, research and development of speech brain-computer interfaces (speech-BCIs) has typically focused on decoding speech only from superficial subregions readily accessible by subdural grid arrays, typically placed over the sensorimotor cortex. Alternatively, the technique of stereo-electroencephalography (sEEG) enables access to distributed brain regions using multiple depth electrodes with lower surgical risks, especially in patients with brain injuries resulting in aphasia and other speech disorders. Approach. To investigate the decoding potential of widespread electrode coverage in multiple cortical sites, we used a naturalistic continuous speech production task. We obtained neural recordings using sEEG from eight participants while they read sentences aloud. We trained linear classifiers to decode distinct speech components (articulatory components and phonemes) solely from broadband gamma activity and evaluated decoding performance using nested five-fold cross-validation. Main Results. We achieved an average classification accuracy of 18.7% across 9 places of articulation (e.g. bilabials, palatals), 26.5% across 5 manner-of-articulation (MOA) labels (e.g. affricates, fricatives), and 4.81% across 38 phonemes. The highest classification accuracies achieved with a single large dataset were 26.3% for place of articulation, 35.7% for MOA, and 9.88% for phonemes. Electrodes that contributed high decoding power were distributed across multiple sulcal and gyral sites in both dominant and non-dominant hemispheres, including ventral sensorimotor, inferior frontal, superior temporal, and fusiform cortices. Rather than finding a distinct cortical locus for each speech component, we observed neural correlates of both articulatory and phonetic components in multiple hubs of a widespread language production network. Significance. These results reveal distributed cortical representations whose activity can enable decoding of speech components during continuous speech using this minimally invasive recording method, elucidating language neurobiology and neural targets for future speech-BCIs.
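Nested cross-validation, as used here to evaluate the linear classifiers, tunes hyperparameters in an inner loop so that the outer-loop accuracy stays unbiased. A compact sketch with synthetic "broadband gamma" features (the label set, feature counts, and effect sizes are invented; the authors' exact classifier and grid are not specified here):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

rng = np.random.default_rng(3)
n_trials, n_feat = 100, 20
y = rng.permutation(np.repeat(np.arange(5), 20))  # e.g. 5 MOA labels
X = rng.normal(size=(n_trials, n_feat))           # stand-in gamma features
X[:, :5] += 2.0 * np.eye(5)[y]                    # class signal in 5 features

# nested CV: inner loop tunes regularization, outer loop estimates accuracy
inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.01, 0.1, 1.0]}, cv=inner)
acc = cross_val_score(search, X, y, cv=outer).mean()  # chance = 0.2
```

Because the outer test folds never touch hyperparameter selection, `acc` is an honest estimate, which matters when reported accuracies (18.7%, 26.5%) sit only modestly above chance.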
Affiliation(s)
- Tessy M Thomas
  - Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Aditya Singh
  - Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Latané P Bullock
  - Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Daniel Liang
  - Department of Computer Science, Rice University, Houston, TX 77005, United States of America
- Cale W Morse
  - Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Xavier Scherschligt
  - Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- John P Seymour
  - Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Department of Electrical & Computer Engineering, Rice University, Houston, TX 77005, United States of America
- Nitin Tandon
  - Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
  - Memorial Hermann Hospital, Texas Medical Center, Houston, TX 77030, United States of America
7
McCall JD, DeMarco AT, Mandal AS, Fama ME, van der Stelt CM, Lacey EH, Laks AB, Snider SF, Friedman RB, Turkeltaub PE. Listening to Yourself and Watching Your Tongue: Distinct Abilities and Brain Regions for Monitoring Semantic and Phonological Speech Errors. J Cogn Neurosci 2023; 35:1169-1194. PMID: 37159232; PMCID: PMC10273223; DOI: 10.1162/jocn_a_02000.
Abstract
Despite the many mistakes people make while speaking, they communicate effectively because they monitor their speech errors. However, the cognitive abilities and brain structures that support speech error monitoring are unclear. Different abilities and brain regions may support monitoring of phonological versus semantic speech errors. We investigated the speech, language, and cognitive control abilities that relate to detecting phonological and semantic speech errors in 41 individuals with aphasia who underwent detailed cognitive testing. We then used support vector regression lesion-symptom mapping to identify brain regions supporting detection of phonological versus semantic errors in a group of 76 individuals with aphasia. The results revealed that motor speech deficits, as well as lesions to the ventral motor cortex, were related to reduced detection of phonological errors relative to semantic errors. Detection of semantic errors selectively related to auditory word comprehension deficits. Across all error types, poor cognitive control related to reduced detection. We conclude that monitoring of phonological and semantic errors relies on distinct cognitive abilities and brain regions. Furthermore, we identified cognitive control as a shared cognitive basis for monitoring all types of speech errors. These findings refine and expand our understanding of the neurocognitive basis of speech error monitoring.
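Support vector regression lesion-symptom mapping fits a multivariate regression from binary lesion maps to a behavioral score, then reads the per-voxel weights as evidence of which regions matter. A toy sketch (patient count, voxel count, the "critical" voxel range 50-59, and all effect sizes are fabricated for illustration; real SVR-LSM adds permutation testing):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
n_patients, n_vox = 100, 120
# binary lesion maps; damage to voxels 50-59 lowers error detection
lesions = rng.integers(0, 2, size=(n_patients, n_vox)).astype(float)
detection = 2.0 - 0.2 * lesions[:, 50:60].sum(axis=1)
detection += rng.normal(scale=0.05, size=n_patients)

# linear SVR: one weight per voxel relating lesion status to behavior
svr = SVR(kernel="linear", C=1.0, epsilon=0.01).fit(lesions, detection)
w = svr.coef_.ravel()
implicated = set(np.argsort(w)[:10])  # most negative weights = implicated voxels
```

Unlike voxel-wise t-tests, the multivariate fit considers all voxels jointly, which helps when lesions of neighboring voxels co-occur.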
Affiliation(s)
- Joshua D McCall
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
- Andrew T DeMarco
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
  - Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC
- Ayan S Mandal
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
  - Brain-Gene Development Lab, Psychiatry Department, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA
- Mackenzie E Fama
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
  - Department of Speech, Language, and Hearing Sciences, The George Washington University, Washington, DC
- Candace M van der Stelt
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
  - Research Division, MedStar National Rehabilitation Hospital, Washington, DC
- Elizabeth H Lacey
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
  - Research Division, MedStar National Rehabilitation Hospital, Washington, DC
- Alycia B Laks
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
- Sarah F Snider
  - Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center, Washington, DC
- Rhonda B Friedman
  - Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center, Washington, DC
- Peter E Turkeltaub
  - Center for Brain Plasticity and Recovery, Neurology Department, Georgetown University Medical Center, Washington, DC
  - Rehabilitation Medicine Department, Georgetown University Medical Center, Washington, DC
  - Research Division, MedStar National Rehabilitation Hospital, Washington, DC
  - Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center, Washington, DC
8
Andrews JP, Cahn N, Speidel BA, Chung JE, Levy DF, Wilson SM, Berger MS, Chang EF. Dissociation of Broca's area from Broca's aphasia in patients undergoing neurosurgical resections. J Neurosurg 2023; 138:847-857. PMID: 35932264; PMCID: PMC9899289; DOI: 10.3171/2022.6.jns2297.
Abstract
OBJECTIVE Broca's aphasia is a syndrome of impaired fluency with retained comprehension. The authors used an unbiased algorithm to examine which neuroanatomical areas are most likely to result in Broca's aphasia following surgical lesions. METHODS Patients were prospectively evaluated with standardized language batteries before and after surgery. Broca's area was defined anatomically as the pars opercularis and triangularis of the inferior frontal gyrus. Broca's aphasia was defined by the Western Aphasia Battery language assessment. Resections were outlined from MRI scans to construct 3D volumes of interest. These were aligned using a nonlinear transformation to Montreal Neurological Institute brain space. A voxel-based lesion-symptom mapping (VLSM) algorithm was used to test for areas statistically associated with Broca's aphasia when incorporated into a resection, as well as areas associated with deficits in fluency independent of Western Aphasia Battery classification. Postoperative MRI scans were reviewed in blinded fashion to estimate the percentage resection of Broca's area compared to areas identified using the VLSM algorithm. RESULTS A total of 289 patients had early language evaluations, of whom 19 had postoperative Broca's aphasia. VLSM analysis revealed an area that was highly correlated (p < 0.001) with Broca's aphasia, spanning ventral sensorimotor cortex and supramarginal gyri, as well as extending into subcortical white matter tracts. Reduced fluency scores were significantly associated with an overlapping region of interest. The fluency score was negatively correlated with fraction of resected precentral, postcentral, and supramarginal components of the VLSM area. CONCLUSIONS Broca's aphasia does not typically arise from neurosurgical resections in Broca's area. When Broca's aphasia does occur after surgery, it is typically in the early postoperative period, improves by 1 month, and is associated with resections of ventral sensorimotor cortex and supramarginal gyri.
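At its core, VLSM asks, voxel by voxel, whether patients whose lesion (here, resection) includes that voxel score worse than patients whose lesion spares it. A mass-univariate sketch with synthetic data (the lesion probability, the single "critical" voxel, and the effect size are invented; real VLSM uses permutation-based correction rather than plain Bonferroni):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_patients, n_vox = 289, 100
lesioned = rng.random((n_patients, n_vox)) < 0.3   # binary resection maps
fluency = 10.0 - 3.0 * lesioned[:, 40]             # voxel 40 drives the deficit
fluency = fluency + rng.normal(scale=1.0, size=n_patients)

# voxel-wise test: fluency in lesioned vs spared patients, per voxel
p_vals = np.ones(n_vox)
for v in range(n_vox):
    hit, spared = fluency[lesioned[:, v]], fluency[~lesioned[:, v]]
    if len(hit) > 1 and len(spared) > 1:
        p_vals[v] = stats.ttest_ind(hit, spared).pvalue

significant = p_vals < (0.05 / n_vox)  # Bonferroni-corrected map
```

The resulting `significant` map plays the role of the VLSM region of interest that the authors compared against the anatomical boundaries of Broca's area.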
Affiliation(s)
- John P. Andrews
  - Department of Neurological Surgery, University of California, San Francisco, School of Medicine, San Francisco, California
- Nathan Cahn
  - Department of Neurological Surgery, University of California, San Francisco, School of Medicine, San Francisco, California
- Benjamin A. Speidel
  - Department of Neurological Surgery, University of California, San Francisco, School of Medicine, San Francisco, California
- Jason E. Chung
  - Department of Neurological Surgery, University of California, San Francisco, School of Medicine, San Francisco, California
- Deborah F. Levy
  - Department of Neurological Surgery, University of California, San Francisco, School of Medicine, San Francisco, California
- Stephen M. Wilson
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Mitchel S. Berger
  - Department of Neurological Surgery, University of California, San Francisco, School of Medicine, San Francisco, California
- Edward F. Chang
  - Department of Neurological Surgery, University of California, San Francisco, School of Medicine, San Francisco, California
9
Weiss AR, Korzeniewska A, Chrabaszcz A, Bush A, Fiez JA, Crone NE, Richardson RM. Lexicality-Modulated Influence of Auditory Cortex on Subthalamic Nucleus During Motor Planning for Speech. Neurobiol Lang (Camb) 2023; 4:53-80. PMID: 37229140; PMCID: PMC10205077; DOI: 10.1162/nol_a_00086.
Abstract
Speech requires successful information transfer within cortical-basal ganglia loop circuits to produce the desired acoustic output. For this reason, up to 90% of Parkinson's disease patients experience impairments of speech articulation. Deep brain stimulation (DBS) is highly effective in controlling the symptoms of Parkinson's disease, sometimes alongside speech improvement, but subthalamic nucleus (STN) DBS can also lead to decreases in semantic and phonological fluency. This paradox demands better understanding of the interactions between the cortical speech network and the STN, which can be investigated with intracranial EEG recordings collected during DBS implantation surgery. We analyzed the propagation of high-gamma activity between STN, superior temporal gyrus (STG), and ventral sensorimotor cortices during reading aloud via event-related causality, a method that estimates strengths and directionalities of neural activity propagation. We employed a newly developed bivariate smoothing model based on a two-dimensional moving average, which is optimal for reducing random noise while retaining a sharp step response, to ensure precise embedding of statistical significance in the time-frequency space. Sustained and reciprocal neural interactions between STN and ventral sensorimotor cortex were observed. Moreover, high-gamma activity propagated from the STG to the STN prior to speech onset. The strength of this influence was affected by the lexical status of the utterance, with increased activity propagation during word versus pseudoword reading. These unique data suggest a potential role for the STN in the feedforward control of speech.
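The smoothing step described here, a two-dimensional moving average over the time-frequency plane, reduces random noise while keeping step-like onsets sharp to within the window size. A minimal sketch on a synthetic time-frequency map (array sizes, the window, and the step location are illustrative, not the paper's parameters):

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(6)
# synthetic time-frequency map: noise plus a sharp "step" increase in
# propagation strength beginning at time bin 50
tf = rng.normal(scale=1.0, size=(40, 100))
tf[:, 50:] += 2.0

# two-dimensional moving average (boxcar) over frequency x time
smoothed = uniform_filter(tf, size=(5, 5), mode="nearest")

# noise shrinks by roughly the square root of the window area,
# while the step edge is blurred only over +/- 2 bins
noise_before = tf[:, :45].std()
noise_after = smoothed[5:-5, 5:45].std()
step = smoothed[:, 60:].mean() - smoothed[:, :40].mean()
```

This is why a boxcar is a reasonable choice before thresholding for significance: denoised values, but onset timing preserved to within the window.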
Affiliation(s)
- Alexander R. Weiss
- JHU Cognitive Neurophysiology and BMI Lab, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Anna Korzeniewska
- JHU Cognitive Neurophysiology and BMI Lab, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Anna Chrabaszcz
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Alan Bush
- Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
- Julie A. Fiez
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- University of Pittsburgh Brain Institute, Pittsburgh, PA, USA
- Nathan E. Crone
- JHU Cognitive Neurophysiology and BMI Lab, Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Robert M. Richardson
- Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, USA
- Harvard Medical School, Boston, MA, USA
10
Ziegler W, Schölderle T, Brendel B, Risch V, Felber S, Ott K, Goldenberg G, Vogel M, Bötzel K, Zettl L, Lorenzl S, Lampe R, Strecker K, Synofzik M, Lindig T, Ackermann H, Staiger A. Speech and Nonspeech Parameters in the Clinical Assessment of Dysarthria: A Dimensional Analysis. Brain Sci 2023; 13:113. PMID: 36672094; PMCID: PMC9856358; DOI: 10.3390/brainsci13010113.
Abstract
Nonspeech (or paraspeech) parameters are widely used in the clinical assessment of speech impairment in persons with dysarthria (PWD). Virtually every standard clinical instrument used in dysarthria diagnostics includes nonspeech parameters, often in considerable numbers. While theoretical considerations have challenged the validity of these measures as markers of speech impairment, only a few studies have directly examined their relationship to speech parameters on a broader scale. This study investigated how nonspeech parameters commonly used in clinical dysarthria assessment relate to the speech characteristics of dysarthria in individuals with movement disorders. Maximum syllable repetition rates; the accuracies and rates of isolated and repetitive nonspeech oral-facial movements; and maximum phonation times were compared with auditory-perceptual and acoustic speech parameters. Overall, 23 diagnostic parameters were assessed in a sample of 130 patients with movement disorders of six etiologies. Each variable was standardized for its distribution and for age and sex effects in 130 neurotypical speakers. Exploratory Graph Analysis (EGA) and Confirmatory Factor Analysis (CFA) were used to examine the factor structure underlying the diagnostic parameters. In the first analysis, we tested the hypothesis that nonspeech parameters combine with speech parameters within diagnostic dimensions representing domain-general motor control principles. In a second analysis, we tested the more specific hypothesis that diagnostic parameters split along effector (lip vs. tongue) or functional (speed vs. accuracy) rather than task boundaries. Our findings contradict the view that nonspeech parameters currently used in dysarthria diagnostics are congruent with diagnostic measures of speech characteristics in PWD.
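The normalization step mentioned above, standardizing each diagnostic variable against a neurotypical sample while removing age and sex effects, can be sketched as a regression-based z-score. This is a generic illustration with names of our choosing; the study's exact procedure may differ:

```python
import numpy as np

def normative_z(x, age, sex, control_x, control_age, control_sex):
    """Z-score a patient's diagnostic value against a control sample,
    removing linear age and sex effects estimated in the controls.

    sex is coded 0/1; the control_* arguments are 1-D arrays over the
    neurotypical speakers. Illustrative only.
    """
    # design matrix: intercept, age, sex
    X = np.column_stack([np.ones(len(control_age)),
                         np.asarray(control_age, dtype=float),
                         np.asarray(control_sex, dtype=float)])
    beta, *_ = np.linalg.lstsq(X, np.asarray(control_x, dtype=float),
                               rcond=None)
    resid = control_x - X @ beta
    sd = resid.std(ddof=1)          # residual spread in the controls
    expected = beta[0] + beta[1] * age + beta[2] * sex
    return (x - expected) / sd
```

A patient scoring exactly at the value predicted for their age and sex gets z near 0; deviations are expressed in control-residual standard deviations, making the 23 parameters comparable.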
Affiliation(s)
- Wolfram Ziegler
- Clinical Neuropsychology Research Group (EKN), Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, 80799 Munich, Germany
- Theresa Schölderle
- Clinical Neuropsychology Research Group (EKN), Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, 80799 Munich, Germany
- Bettina Brendel
- Clinic for Psychiatry and Psychotherapy, Neurophysiology & Interventional Neuropsychiatry, University of Tübingen, 72076 Tübingen, Germany
- Verena Risch
- Clinical Neuropsychology Research Group (EKN), Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, 80799 Munich, Germany
- Stefanie Felber
- Clinical Neuropsychology Research Group (EKN), Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, 80799 Munich, Germany
- Katharina Ott
- Department of Neurology, Klinikum Großhadern, Ludwig-Maximilians-University, 81377 Munich, Germany
- Georg Goldenberg
- Clinic for Neuropsychology, City Hospital Munich Bogenhausen, 81925 Munich, Germany
- Mathias Vogel
- Clinic for Neuropsychology, City Hospital Munich Bogenhausen, 81925 Munich, Germany
- Kai Bötzel
- Department of Neurology, Klinikum Großhadern, Ludwig-Maximilians-University, 81377 Munich, Germany
- Lena Zettl
- Medical Clinic and Outpatient Clinic IV, Ludwig-Maximilians-University, 81377 Munich, Germany
- Stefan Lorenzl
- Clinic for Neurology, Hospital Agatharied, 83734 Hausham, Germany
- Renée Lampe
- School of Medicine, Klinikum Rechts der Isar, Orthopedic Department, Research Unit for Pediatric Neuroorthopedics and Cerebral Palsy of the Buhl-Strohmaier Foundation, Technical University of Munich, 81675 Munich, Germany
- Katrin Strecker
- Department of Logopedics, Stiftung ICP Munich, Center for Cerebral Palsy, 81377 Munich, Germany
- Matthis Synofzik
- Department of Neurodegenerative Disease, Hertie-Institute for Clinical Brain Research, German Center for Neurodegenerative Diseases (DZNE), and Center for Neurology, University of Tübingen, 72076 Tübingen, Germany
- Tobias Lindig
- Department of Diagnostic and Interventional Neuroradiology, University Hospital Tübingen, 72076 Tübingen, Germany
- Hermann Ackermann
- Department of General Neurology, Hertie-Institute for Clinical Brain Research, University of Tübingen, 72076 Tübingen, Germany
- Anja Staiger
- Clinical Neuropsychology Research Group (EKN), Institute of Phonetics and Speech Processing, Ludwig-Maximilians-University, 80799 Munich, Germany
11
Ganesh A, Cervantes AJ, Kennedy PR. Slow Firing Single Units Are Essential for Optimal Decoding of Silent Speech. Front Hum Neurosci 2022; 16:874199. PMID: 35992944; PMCID: PMC9382878; DOI: 10.3389/fnhum.2022.874199.
Abstract
The motivation of someone who is locked-in, that is, paralyzed and mute, is to find relief for their loss of function. The data presented in this report are part of an attempt to restore one of those lost functions, namely, speech. An essential feature of the development of a speech prosthesis is optimal decoding of patterns of recorded neural signals during silent or covert speech, that is, speaking “inside the head” with output that is inaudible due to the paralysis of the articulators. The aim of this paper is to illustrate the importance of both fast and slow single unit firings recorded from an individual with locked-in syndrome and from an intact participant speaking silently. Long duration electrodes were implanted in the motor speech cortex for up to 13 years in the locked-in participant. The data herein provide evidence that slow firing single units are essential for optimal decoding accuracy. Additional evidence indicates that slow firing single units can be conditioned in the locked-in participant 5 years after implantation, further supporting their role in decoding.
Affiliation(s)
- Ananya Ganesh
- Neural Signals Inc., Neural Prostheses Laboratory, Duluth, GA, United States
- Philip R. Kennedy
- Neural Signals Inc., Neural Prostheses Laboratory, Duluth, GA, United States
- Correspondence: Philip R. Kennedy
12
Cao Y, Oostenveld R, Alday PM, Piai V. Are alpha and beta oscillations spatially dissociated over the cortex in context-driven spoken-word production? Psychophysiology 2022; 59:e13999. PMID: 35066874; PMCID: PMC9285923; DOI: 10.1111/psyp.13999.
Abstract
Decreases in oscillatory alpha- and beta-band power have been consistently found in spoken-word production. These have been linked to both motor preparation and conceptual-lexical retrieval processes. However, the observed power decreases have a broad frequency range that spans two “classic” (sensorimotor) bands: alpha and beta. It remains unclear whether alpha- and beta-band power decreases contribute independently when a spoken word is planned. Using a re-analysis of existing magnetoencephalography data, we probed whether the effects in the alpha and beta bands are spatially distinct. Participants read a sentence that was either constraining or non-constraining toward the final word, which was presented as a picture. In separate blocks, participants had to name the picture or score its predictability via button press. Irregular-resampling auto-spectral analysis (IRASA) was used to isolate the oscillatory activity in the alpha and beta bands from the background 1-over-f spectrum. The sources of alpha- and beta-band oscillations were localized based on the participants' individualized peak frequencies. For both tasks, alpha- and beta-power decreases overlapped in left posterior temporal and inferior parietal cortex, regions that have previously been associated with conceptual and lexical processes. The spatial distributions of the alpha and beta power effects were similar in these regions to the extent we could assess it. By contrast, for left frontal regions, the spatial distributions differed between alpha and beta effects. Our results suggest that for conceptual-lexical retrieval, alpha and beta oscillations do not dissociate spatially and, thus, are distinct from the classical sensorimotor alpha and beta oscillations.
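The IRASA step used above can be illustrated compactly. The idea: resampling a signal by a factor h and by 1/h shifts narrow-band oscillatory peaks in opposite directions along the frequency axis, while the scale-free 1/f background keeps its shape, so the median across several resampling factors estimates the background alone; subtracting it isolates the oscillatory component. The following is a rough sketch with linear-interpolation resampling and arbitrary parameters, not the implementation used in the study:

```python
import numpy as np

def irasa(signal, fs, hset=(1.1, 1.3, 1.5, 1.7, 1.9), nfft=1024):
    """Split a power spectrum into a 1/f (fractal) part and an
    oscillatory part, following the IRASA idea. Minimal sketch; real
    pipelines use proper anti-aliased resampling and windowed PSDs.
    """
    def psd(x):
        return np.abs(np.fft.rfft(x, nfft)) ** 2 / len(x)

    def resample(x, factor):
        # stretch/compress the waveform to round(len(x) / factor) samples
        old = np.linspace(0.0, 1.0, len(x))
        new = np.linspace(0.0, 1.0, int(round(len(x) / factor)))
        return np.interp(new, old, x)

    freqs = np.fft.rfftfreq(nfft, 1.0 / fs)
    geo_means = []
    for h in hset:
        up, down = resample(signal, 1.0 / h), resample(signal, h)
        # pairing h with 1/h and taking the geometric mean cancels the
        # power scaling that resampling imposes on the 1/f background
        geo_means.append(np.sqrt(psd(up) * psd(down)))
    fractal = np.median(geo_means, axis=0)
    oscillatory = psd(signal) - fractal
    return freqs, fractal, oscillatory
```

Applied to a signal containing a 1/f background plus a narrow-band oscillation, the oscillatory component shows a clear peak at the oscillation frequency while the fractal component does not.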
Affiliation(s)
- Yang Cao
- Donders Centre for Cognition, Radboud University, Nijmegen, The Netherlands
- Robert Oostenveld
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- NatMEG, Karolinska Institutet, Stockholm, Sweden
- Phillip M Alday
- Max-Planck-Institute for Psycholinguistics, Nijmegen, The Netherlands
- Vitória Piai
- Donders Centre for Cognition, Radboud University, Nijmegen, The Netherlands
- Donders Centre for Medical Neuroscience, Department of Medical Psychology, Radboud University Medical Center, Nijmegen, The Netherlands
13
Dastolfo-Hromack C, Bush A, Chrabaszcz A, Alhourani A, Lipski W, Wang D, Crammond DJ, Shaiman S, Dickey MW, Holt LL, Turner RS, Fiez JA, Richardson RM. Articulatory Gain Predicts Motor Cortex and Subthalamic Nucleus Activity During Speech. Cereb Cortex 2021; 32:1337-1349. PMID: 34470045; DOI: 10.1093/cercor/bhab251.
Abstract
Speaking precisely is important for effective verbal communication, and articulatory gain is one component of speech motor control that contributes to achieving this goal. Given that the basal ganglia have been proposed to regulate the speed and size of limb movement, that is, movement gain, we explored the basal ganglia contribution to articulatory gain through local field potentials (LFP) recorded simultaneously from the subthalamic nucleus (STN), precentral gyrus, and postcentral gyrus. During STN deep brain stimulation implantation for Parkinson's disease, participants read aloud consonant-vowel-consonant syllables. Articulatory gain was indirectly assessed using the F2 Ratio, an acoustic measure obtained by dividing the second formant frequency of /i/ vowels by that of /u/ vowels. Mixed effects models demonstrated that the F2 Ratio correlated with alpha and theta activity in the precentral gyrus and STN. No correlations were observed for the postcentral gyrus. Functional connectivity analysis revealed that higher phase locking values for beta activity between the STN and precentral gyrus were correlated with lower F2 Ratios, suggesting that higher beta synchrony impairs articulatory precision. Effects were not related to disease severity. These data suggest that articulatory gain is encoded within the basal ganglia-cortical loop.
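Both measures named above reduce to short formulas: the F2 Ratio is a ratio of mean second-formant frequencies, and the phase-locking value is the magnitude of the mean phase-difference vector. A minimal sketch (variable names are ours; the study's preprocessing, such as formant tracking and band-pass filtering, is omitted):

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """Phase-locking value between two instantaneous-phase series
    (e.g., beta-band phases of STN and precentral LFPs): the length of
    the mean phase-difference vector; 0 = no synchrony, 1 = perfect
    locking.
    """
    dphi = np.asarray(phase_a) - np.asarray(phase_b)
    return float(np.abs(np.mean(np.exp(1j * dphi))))

def f2_ratio(f2_i_hz, f2_u_hz):
    """F2 Ratio as described in the abstract: mean second-formant
    frequency of /i/ vowels divided by that of /u/ vowels; larger
    values indicate larger articulatory excursions (higher gain).
    """
    return float(np.mean(f2_i_hz) / np.mean(f2_u_hz))
```

A constant phase lag yields a PLV of 1, while uniformly random phase differences yield a PLV near 0.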
Affiliation(s)
- C Dastolfo-Hromack
- Department of Communication Science and Disorders, University of Pittsburgh School of Health and Rehabilitation Sciences, Pittsburgh, PA 15260, USA
- A Bush
- Department of Neurological Surgery, Massachusetts General Hospital, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
- A Chrabaszcz
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260, USA
- A Alhourani
- Department of Neurosurgery, University of Louisville, Louisville, KY 40292, USA
- W Lipski
- Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- D Wang
- School of Medicine, Tsinghua University, Beijing 100084, China
- D J Crammond
- Department of Neurological Surgery, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- S Shaiman
- Department of Communication Science and Disorders, University of Pittsburgh School of Health and Rehabilitation Sciences, Pittsburgh, PA 15260, USA
- M W Dickey
- Department of Communication Science and Disorders, University of Pittsburgh School of Health and Rehabilitation Sciences, Pittsburgh, PA 15260, USA
- L L Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213, USA
- R S Turner
- Department of Neurobiology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213, USA
- J A Fiez
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260, USA
- R M Richardson
- Department of Neurological Surgery, Massachusetts General Hospital, MA 02114, USA
- Harvard Medical School, Boston, MA 02115, USA
14
Choe HN, Jarvis ED. The role of sex chromosomes and sex hormones in vocal learning systems. Horm Behav 2021; 132:104978. PMID: 33895570; DOI: 10.1016/j.yhbeh.2021.104978.
Abstract
Vocal learning is the ability to imitate and modify sounds through auditory experience, a rare trait found in only a few lineages of mammals and birds. It is a critical component of human spoken language, allowing us to verbally transmit speech repertoires and knowledge across generations. In many vocal learning species, the vocal learning trait is sexually dimorphic, where it is either limited to males or present in both sexes to different degrees. In humans, recent findings have revealed subtle sexual dimorphism in vocal learning/spoken language brain regions and some associated disorders. For songbirds, where the neural mechanisms of vocal learning have been well studied, vocal learning appears to have been present in both sexes at the origin of the lineage and was then independently lost in females of some subsequent lineages. This loss is associated with an interplay between sex chromosomes and sex steroid hormones. Even in species with little dimorphism, like humans, sex chromosomes and hormones still have some influence on learned vocalizations. Here we present a brief synthesis of these studies, in the context of sex determination broadly, and identify areas of needed investigation to further understand how sex chromosomes and sex steroid hormones help establish sexually dimorphic neural structures for vocal learning.
Affiliation(s)
- Ha Na Choe
- Duke University Medical Center, The Rockefeller University, Howard Hughes Medical Institute, United States of America
- Erich D Jarvis
- Duke University Medical Center, The Rockefeller University, Howard Hughes Medical Institute, United States of America
15
Ogino Y, Kawamichi H, Takizawa D, Sugawara SK, Hamano YH, Fukunaga M, Toyoda K, Watanabe Y, Abe O, Sadato N, Saito S, Furui S. Enhanced structural connectivity within the motor loop in professional boxers prior to a match. Sci Rep 2021; 11:9015. PMID: 33907206; PMCID: PMC8079439; DOI: 10.1038/s41598-021-88368-4.
Abstract
Professional boxers train to reduce their body mass before a match to refine their body movements. To test the hypothesis that the well-defined movements of boxers are represented within the motor loop (cortico-striatal circuit), we first elucidated the brain structure and functional connectivity specific to boxers and then investigated plasticity in relation to boxing matches. We recruited 21 male boxers 1 month before a match (Time1) and compared them to 22 age-, sex-, and body mass index (BMI)-matched controls. Boxers were longitudinally followed up within 1 week prior to the match (Time2) and 1 month after the match (Time3). The BMIs of boxers significantly decreased at Time2 compared with those at Time1 and Time3. Compared to controls, boxers presented significantly higher gray matter volume in the left putamen, a critical region representing motor skill training. Boxers presented significantly higher functional connectivity than controls between the left primary motor cortex (M1) and left putamen, which is an essential region for establishing well-defined movements. Boxers also showed significantly higher structural connectivity in the same region within the motor loop from Time1 to Time2 than during other periods, which may represent the refined movements of their body induced by training for the match.
Affiliation(s)
- Yuichi Ogino
- Department of Anesthesiology, Gunma University Graduate School of Medicine, 3-39-15 Maebashi, Gunma, 371-8510, Japan
- Hiroaki Kawamichi
- Department of Anesthesiology, Gunma University Graduate School of Medicine, 3-39-15 Maebashi, Gunma, 371-8510, Japan
- Daisuke Takizawa
- Department of Anesthesiology, Japanese Red Cross Medical Center, 1-22 Hiroo, Shibuya-ku, Tokyo, 150-8935, Japan
- Sho K Sugawara
- Neural Prosthesis Project, Tokyo Metropolitan Institute of Medical Science, 2-1-6 Kamikitazawa, Setagaya-ku, Tokyo, 156-8506, Japan
- Yuki H Hamano
- Division of Cerebral Integration, Department of System Neuroscience, National Institute for Physiological Sciences, 38 Nishigonaka, Myodaiji, Okazaki, Aichi, 444-8585, Japan
- Masaki Fukunaga
- Division of Cerebral Integration, Department of System Neuroscience, National Institute for Physiological Sciences, 38 Nishigonaka, Myodaiji, Okazaki, Aichi, 444-8585, Japan
- Keiko Toyoda
- Department of Radiology, The Jikei University School of Medicine, 3-28-8 Nishi-Shimbashi, Minato-Ku, Tokyo, 105-864, Japan
- Yusuke Watanabe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Osamu Abe
- Department of Radiology, Graduate School of Medicine, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan
- Norihiro Sadato
- Division of Cerebral Integration, Department of System Neuroscience, National Institute for Physiological Sciences, 38 Nishigonaka, Myodaiji, Okazaki, Aichi, 444-8585, Japan
- Shigeru Saito
- Department of Anesthesiology, Gunma University Graduate School of Medicine, 3-39-15 Maebashi, Gunma, 371-8510, Japan
- Shigeru Furui
- Department of Radiology, Graduate School of Medicine, Teikyo University, 2-11-1 Kaga, Itabashi-ku, Tokyo, 173-8605, Japan
16
Schippers A, Vansteensel MJ, Freudenburg ZV, Leijten FSS, Ramsey NF. Detailed somatotopy of tongue movement in the human sensorimotor cortex: A case study. Brain Stimul 2021; 14:287-289. PMID: 33482374; DOI: 10.1016/j.brs.2021.01.010.
Affiliation(s)
- Anouck Schippers
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands
- Mariska J Vansteensel
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands
- Zachary V Freudenburg
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands
- Frans S S Leijten
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands
- Nick F Ramsey
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands
17
Eichert N, Papp D, Mars RB, Watkins KE. Mapping Human Laryngeal Motor Cortex during Vocalization. Cereb Cortex 2020; 30:6254-6269. PMID: 32728706; PMCID: PMC7610685; DOI: 10.1093/cercor/bhaa182.
Abstract
The representations of the articulators involved in human speech production are organized somatotopically in primary motor cortex. The neural representation of the larynx, however, remains debated. Both a dorsal and a ventral larynx representation have been previously described. It is unknown, however, whether both representations are located in primary motor cortex. Here, we mapped the motor representations of the human larynx using functional magnetic resonance imaging and characterized the cortical microstructure underlying the activated regions. We isolated brain activity related to laryngeal activity during vocalization while controlling for breathing. We also mapped the articulators (the lips and tongue) and the hand area. We found two separate activations during vocalization-a dorsal and a ventral larynx representation. Structural and quantitative neuroimaging revealed that myelin content and cortical thickness underlying the dorsal, but not the ventral larynx representation, are similar to those of other primary motor representations. This finding confirms that the dorsal larynx representation is located in primary motor cortex and that the ventral one is not. We further speculate that the location of the ventral larynx representation is in premotor cortex, as seen in other primates. It remains unclear, however, whether and how these two representations differentially contribute to laryngeal motor control.
Affiliation(s)
- Nicole Eichert
- Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Daniel Papp
- Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Rogier B. Mars
- Centre for Functional MRI of the Brain (FMRIB), Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, John Radcliffe Hospital, University of Oxford, Oxford, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, Nijmegen, the Netherlands
- Kate E. Watkins
- Department of Experimental Psychology, Wellcome Centre for Integrative Neuroimaging, University of Oxford, Oxford, UK
18
Gayoso S, Perez-Borreda P, Gutierrez A, García-Porrero JA, Marco de Lucas E, Martino J. Ventral Precentral Fiber Intersection Area: A Central Hub in the Connectivity of Perisylvian Associative Tracts. Oper Neurosurg (Hagerstown) 2020; 17:182-192. PMID: 30418653; DOI: 10.1093/ons/opy331.
Abstract
BACKGROUND: The ventral part of the precentral gyrus is considered one of the most eloquent areas. However, little is known about the white matter organization underlying this functional hub.
OBJECTIVE: To analyze the subcortical anatomy underlying the ventral part of the precentral gyrus, i.e., the ventral precentral fiber intersection area (VPFIA).
METHODS: Eight human hemispheres from cadavers were dissected, and 8 healthy hemispheres were studied with diffusion tensor imaging tractography. The tracts that terminate at the ventral part of the precentral gyrus were isolated. In addition, 6 surgical cases with left-side gliomas close to the VPFIA were operated awake with intraoperative electrical stimulation mapping.
RESULTS: The connections within the VPFIA are anatomically organized along an anteroposterior axis: the pyramidal pathway terminates at the anterior bank of the precentral gyrus, the intermediate part is occupied by the long segment of the arcuate fasciculus, and the posterior bank is occupied by the anterior segment of the arcuate fasciculus. Stimulation of the VPFIA elicited speech arrest in all cases.
CONCLUSION: The present study offers strong arguments that the fiber organization of the VPFIA differs from classical descriptions, shedding new light on the functional role of this area in language. The VPFIA is a critical neural epicenter within the perisylvian network that may represent the final common network for speech production, as it is strategically located between the termination of the dorsal stream and the motor output cortex that directly controls speech muscles.
Affiliation(s)
- Sonia Gayoso
- Department of Neurological Surgery, Complexo Hospitalario Universitario A Coruña, As Xubias, La Coruña, Spain
- Juan A García-Porrero
- Department of Anatomy and Celular Biology, Cantabria University, Santander (Cantabria), Spain
- Enrique Marco de Lucas
- Department of Radiology, Hospital Universitario Marqués de Valdecilla and Fundación Instituto de Investigación Marqués de Valdecilla (IDIVAL), Santander (Cantabria), Spain
- Juan Martino
- Department of Neurological Surgery, Hospital Universitario Marqués de Valdecilla and Fundación Instituto de Investigación Marqués de Valdecilla (IDIVAL), Santander (Cantabria), Spain
19
Brown S, Yuan Y, Belyk M. Evolution of the speech-ready brain: The voice/jaw connection in the human motor cortex. J Comp Neurol 2020; 529:1018-1028. PMID: 32720701; DOI: 10.1002/cne.24997.
Abstract
A prominent model of the origins of speech, known as the "frame/content" theory, posits that oscillatory lowering and raising of the jaw provided an evolutionary scaffold for the development of syllable structure in speech. Because such oscillations are nonvocal in most nonhuman primates, the evolution of speech required the addition of vocalization onto this scaffold in order to turn such jaw oscillations into vocalized syllables. In the present functional MRI study, we demonstrate overlapping somatotopic representations between the larynx and the jaw muscles in the human primary motor cortex. This proximity between the larynx and jaw in the brain might support the coupling between vocalization and jaw oscillations to generate syllable structure. This model suggests that humans inherited voluntary control of jaw oscillations from ancestral species, but added voluntary control of vocalization onto this via the evolution of a new brain area that came to be situated near the jaw region in the human motor cortex.
Affiliation(s)
- Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Ye Yuan
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Michel Belyk
- Department of Speech Hearing and Phonetic Sciences, University College London, London, UK
20
Panico F, Ben-Romdhane M, Jacquesson T, Nash S, Cotton F, Luauté J. Could non-invasive brain stimulation help treat dysarthria? A single-case study. Ann Phys Rehabil Med 2020; 63:81-84. DOI: 10.1016/j.rehab.2019.06.011.
21
Abstract
Locked-in syndrome (LIS) is characterized by an inability to move or speak in the presence of intact cognition and can be caused by brainstem trauma or neuromuscular disease. Quality of life (QoL) in LIS is strongly impaired by the inability to communicate, which cannot always be remedied by traditional augmentative and alternative communication (AAC) solutions if residual muscle activity is insufficient to control the AAC device. Brain-computer interfaces (BCIs) may offer a solution by employing the person's neural signals instead of relying on muscle activity. Here, we review the latest communication BCI research using noninvasive signal acquisition approaches (electroencephalography, functional magnetic resonance imaging, functional near-infrared spectroscopy) and subdural and intracortical implanted electrodes, and we discuss current efforts to translate research knowledge into usable BCI-enabled communication solutions that aim to improve the QoL of individuals with LIS.
22
Hesling I, Labache L, Joliot M, Tzourio-Mazoyer N. Large-scale plurimodal networks common to listening to, producing and reading word lists: an fMRI study combining task-induced activation and intrinsic connectivity in 144 right-handers. Brain Struct Funct 2019; 224:3075-3094. PMID: 31494717; PMCID: PMC6875148; DOI: 10.1007/s00429-019-01951-4.
Abstract
We aimed at identifying plurimodal large-scale networks for producing, listening to and reading word lists based on the combined analyses of task-induced activation and resting-state intrinsic connectivity in 144 healthy right-handers. In the first step, we identified the regions in each hemisphere showing joint activation and joint asymmetry during the three tasks. In the left hemisphere, 14 homotopic regions of interest (hROIs) located in the left Rolandic sulcus, precentral gyrus, cingulate gyrus, cuneus and inferior supramarginal gyrus (SMG) met this criterion, and 7 hROIs located in the right hemisphere were located in the preSMA, medial superior frontal gyrus, precuneus and superior temporal sulcus (STS). In a second step, we calculated the BOLD temporal correlations across these 21 hROIs at rest and conducted a hierarchical clustering analysis to unravel their network organization. Two networks were identified, including the WORD-LIST_CORE network that aggregated 14 motor, premotor and phonemic areas in the left hemisphere plus the right STS that corresponded to the posterior human voice area (pHVA). The present results revealed that word-list processing is based on left articulatory and storage areas supporting the action-perception cycle common not only to production and listening but also to reading. The inclusion of the right pHVA acting as a prosodic integrative area highlights the importance of prosody in the three modalities and reveals an intertwining across hemispheres between prosodic (pHVA) and phonemic (left SMG) processing. These results are consistent with the motor theory of speech postulating that articulatory gestures are the central motor units on which word perception, production, and reading develop and act together.
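The second analysis step described above, correlating ROI time courses at rest and clustering the correlation structure into networks, can be sketched with a toy average-linkage agglomeration. This is illustrative only; the study's actual pipeline and distance measure may differ, and the names are ours:

```python
import numpy as np

def cluster_rois(timeseries, n_clusters=2):
    """Group ROI time series by average-linkage agglomerative clustering
    of 1 - Pearson correlation: regions whose signals co-fluctuate at
    rest end up in the same cluster ("network").

    timeseries: array of shape (n_rois, n_timepoints).
    Returns a list of clusters, each a list of ROI indices.
    """
    d = 1.0 - np.corrcoef(timeseries)          # correlation distance
    clusters = [[i] for i in range(d.shape[0])]
    while len(clusters) > n_clusters:
        # merge the pair of clusters with the smallest average distance
        best, pair = np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                avg = np.mean([d[i, j] for i in clusters[a]
                               for j in clusters[b]])
                if avg < best:
                    best, pair = avg, (a, b)
        a, b = pair
        clusters[a].extend(clusters.pop(b))
    return clusters
```

With strongly correlated within-network signals and weakly correlated between-network signals, the agglomeration recovers the network partition.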
Affiliation(s)
- Isabelle Hesling
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France; CNRS, IMN, UMR 5293, 33000, Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France; IMN Institut des Maladies Neurodégénératives UMR 5293, Team 5: GIN Groupe d'imagerie Neurofonctionnelle, CEA-CNRS, Université de Bordeaux, Centre Broca Nouvelle-Aquitaine-3ème étage, 146 rue Léo-Saignat-CS 61292-Case 28, 33076, Bordeaux CEDEX, France
- L Labache
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France; CNRS, IMN, UMR 5293, 33000, Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France; University of Bordeaux, IMB, UMR 5251, 33405, Talence, France; INRIA Bordeaux Sud-Ouest, CQFD, INRIA, UMR 5251, 33405, Talence, France
- M Joliot
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France; CNRS, IMN, UMR 5293, 33000, Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France
- N Tzourio-Mazoyer
- University of Bordeaux, IMN, UMR 5293, 33000, Bordeaux, France; CNRS, IMN, UMR 5293, 33000, Bordeaux, France; CEA, GIN, IMN, UMR 5293, 33000, Bordeaux, France
23
Rabbani Q, Milsap G, Crone NE. The Potential for a Speech Brain-Computer Interface Using Chronic Electrocorticography. Neurotherapeutics 2019; 16:144-165. [PMID: 30617653] [PMCID: PMC6361062] [DOI: 10.1007/s13311-018-00692-2]
Abstract
A brain-computer interface (BCI) is a technology that uses neural features to restore or augment the capabilities of its user. A BCI for speech would enable communication in real time via neural correlates of attempted or imagined speech. Such a technology could restore communication and improve quality of life for locked-in patients and other patients with severe communication disorders. There have been many recent developments in neural decoders, neural feature extraction, and brain recording modalities facilitating BCIs for the control of prosthetics, as well as in automatic speech recognition (ASR). Indeed, ASR and related fields have developed significantly over recent years and lend many insights into the requirements, goals, and strategies for speech BCI. Neural speech decoding is a comparatively new field but has shown much promise, with recent studies demonstrating semantic, auditory, and articulatory decoding using electrocorticography (ECoG) and other neural recording modalities. Because the neural representations for speech and language are widely distributed over cortical regions spanning the frontal, parietal, and temporal lobes, the mesoscopic scale of population activity captured by ECoG surface electrode arrays may have distinct advantages for speech BCI, in contrast to the advantages of microelectrode arrays for upper-limb BCI. Nevertheless, many challenges remain for the translation of speech BCIs to clinical populations. This review outlines the current state of the art for speech BCI and explores what a speech BCI using chronic ECoG might entail.
Affiliation(s)
- Qinwan Rabbani
- Department of Electrical Engineering, The Johns Hopkins University Whiting School of Engineering, Baltimore, MD, USA
- Griffin Milsap
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nathan E Crone
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
24
Cannistraro RJ, Middlebrooks EH, Brinkmann BH, Feyissa AM. Paroxysmal epileptic laryngospasms. Neurol Clin Pract 2018; 8:e46-e48. [DOI: 10.1212/cpj.0000000000000554]
25
Nakamichi N, Takamoto K, Nishimaru H, Fujiwara K, Takamura Y, Matsumoto J, Noguchi M, Nishijo H. Cerebral Hemodynamics in Speech-Related Cortical Areas: Articulation Learning Involves the Inferior Frontal Gyrus, Ventral Sensory-Motor Cortex, and Parietal-Temporal Sylvian Area. Front Neurol 2018; 9:939. [PMID: 30443239] [PMCID: PMC6221925] [DOI: 10.3389/fneur.2018.00939]
Abstract
Although motor training programs have been applied to childhood apraxia of speech (AOS), the neural mechanisms of articulation learning are not well understood. To address this, we recorded cerebral hemodynamic activity in the left hemisphere of healthy subjects (n = 15) during articulation learning. We used near-infrared spectroscopy (NIRS) while articulated voices were recorded and analyzed using spectrograms. The study consisted of two experimental sessions (modified and control sessions) in which participants were asked to repeat the articulation of the syllables "i-chi-ni" with and without an occlusal splint. This splint was used to increase the vertical dimension of occlusion to mimic conditions of articulation disorder. There were more articulation errors in the modified session, but the number of errors decreased in the final half of the modified session, suggesting that articulation learning took place. The hemodynamic NIRS data revealed significant activation during articulation in the frontal, parietal, and temporal cortices. These areas are involved in phonological processing and articulation planning and execution, and included the following: (i) the ventral sensory-motor cortex (vSMC), including the Rolandic operculum, precentral gyrus, and postcentral gyrus; (ii) the dorsal sensory-motor cortex, including the precentral and postcentral gyri; (iii) the opercular part of the inferior frontal gyrus (IFGoperc); (iv) the temporal cortex, including the superior temporal gyrus; and (v) the inferior parietal lobe (IPL), including the supramarginal and angular gyri. The posterior Sylvian fissure at the parietal-temporal boundary (area Spt) was selectively activated in the modified session. Furthermore, hemodynamic activity in the IFGoperc and vSMC increased in the final half of the modified session compared with its initial half, and correlated negatively with articulation errors during articulation learning in the modified session. The present results suggest an essential role of the frontal regions, including the IFGoperc and vSMC, in articulation learning, with sensory feedback through area Spt and the IPL. The present study provides clues to the underlying pathology and treatment of childhood apraxia of speech.
Affiliation(s)
- Naomi Nakamichi
- Department of Oral and Maxillofacial Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Kouichi Takamoto
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hiroshi Nishimaru
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Kumiko Fujiwara
- Department of Oral and Maxillofacial Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Yusaku Takamura
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Jumpei Matsumoto
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Makoto Noguchi
- Department of Oral and Maxillofacial Surgery, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
- Hisao Nishijo
- System Emotional Science, Graduate School of Medicine and Pharmaceutical Sciences, University of Toyama, Toyama, Japan
26
An intracerebral exploration of functional connectivity during word production. J Comput Neurosci 2018; 46:125-140. [DOI: 10.1007/s10827-018-0699-3]
27
Okada K, Matchin W, Hickok G. Phonological Feature Repetition Suppression in the Left Inferior Frontal Gyrus. J Cogn Neurosci 2018; 30:1549-1557. [PMID: 29877763] [DOI: 10.1162/jocn_a_01287]
Abstract
Models of speech production posit a role for the motor system, predominantly the posterior inferior frontal gyrus, in encoding complex phonological representations for speech production, at the phonemic, syllable, and word levels [Roelofs, A. A dorsal-pathway account of aphasic language production: The WEAVER++/ARC model. Cortex, 59(Suppl. C), 33-48, 2014; Hickok, G. Computational neuroanatomy of speech production. Nature Reviews Neuroscience, 13, 135-145, 2012; Guenther, F. H. Cortical interactions underlying the production of speech sounds. Journal of Communication Disorders, 39, 350-365, 2006]. However, phonological theory posits subphonemic units of representation, namely phonological features [Chomsky, N., & Halle, M. The sound pattern of English, 1968; Jakobson, R., Fant, G., & Halle, M. Preliminaries to speech analysis. The distinctive features and their correlates. Cambridge, MA: MIT Press, 1951], that specify independent articulatory parameters of speech sounds, such as place and manner of articulation. Therefore, motor brain systems may also incorporate phonological features into speech production planning units. Here, we add support for such a role with an fMRI experiment of word sequence production using a phonemic similarity manipulation. We adapted and modified the experimental paradigm of Oppenheim and Dell [Oppenheim, G. M., & Dell, G. S. Inner speech slips exhibit lexical bias, but not the phonemic similarity effect. Cognition, 106, 528-537, 2008; Oppenheim, G. M., & Dell, G. S. Motor movement matters: The flexible abstractness of inner speech. Memory & Cognition, 38, 1147-1160, 2010]. Participants silently articulated words cued by sequential visual presentation that varied in degree of phonological feature overlap in consonant onset position: high overlap (two shared phonological features; e.g., /r/ and /l/) or low overlap (one shared phonological feature, e.g., /r/ and /b/). 
We found a significant repetition suppression effect in the left posterior inferior frontal gyrus, with increased activation for phonologically dissimilar words compared with similar words. These results suggest that phonemes, particularly phonological features, are part of the planning units of the motor speech system.
28
Chesters J, Möttönen R, Watkins KE. Transcranial direct current stimulation over left inferior frontal cortex improves speech fluency in adults who stutter. Brain 2018; 141:1161-1171. [PMID: 29394325] [PMCID: PMC6019054] [DOI: 10.1093/brain/awy011]
Abstract
See Crinion (doi:10.1093/brain/awy075) for a scientific commentary on this article. Stuttering is a neurodevelopmental condition affecting 5% of children and persisting in 1% of adults. Promoting lasting fluency improvement in adults who stutter is a particular challenge, so novel interventions to improve outcomes are of value. Previous work in patients with acquired motor and language disorders reported enhanced benefits of behavioural therapies when paired with transcranial direct current stimulation. Here, we report the results of the first trial investigating whether transcranial direct current stimulation can improve speech fluency in adults who stutter. We predicted that applying anodal stimulation to the left inferior frontal cortex during speech production with temporary fluency inducers would result in longer-lasting fluency improvements. Thirty male adults who stutter completed a randomized, double-blind, controlled trial of anodal transcranial direct current stimulation over left inferior frontal cortex. Fifteen participants received 20 min of 1-mA stimulation on five consecutive days while speech fluency was temporarily induced using choral and metronome-timed speech. The other 15 participants received the same speech fluency intervention with sham stimulation. Speech fluency during reading and conversation was assessed at baseline, before and after the stimulation on each day of the 5-day intervention, and at 1 and 6 weeks after the end of the intervention. Anodal stimulation combined with speech fluency training significantly reduced the percentage of disfluent speech measured 1 week after the intervention compared with fluency intervention alone. At 6 weeks after the intervention, this improvement was maintained during reading but not during conversation.
Outcome scores at both post-intervention time points on a clinical assessment tool (the Stuttering Severity Instrument, version 4) also showed significant improvement in the group receiving transcranial direct current stimulation compared with the sham group, in whom fluency was unchanged from baseline. We conclude that transcranial direct current stimulation combined with behavioural fluency intervention can improve fluency in adults who stutter. Transcranial direct current stimulation thereby offers a potentially useful adjunct to future speech therapy interventions for this population, for whom fluency therapy outcomes are currently limited.
Affiliation(s)
- Jennifer Chesters
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford, UK; School of Psychology, University of Nottingham, Nottingham, UK
- Kate E Watkins
- Department of Experimental Psychology, University of Oxford, Oxford, UK
29
Human Sensorimotor Cortex Control of Directly Measured Vocal Tract Movements during Vowel Production. J Neurosci 2018; 38:2955-2966. [PMID: 29439164] [DOI: 10.1523/jneurosci.2382-17.2018]
Abstract
During speech production, we make vocal tract movements with remarkable precision and speed. Our understanding of how the human brain achieves such proficient control is limited, in part due to the challenge of simultaneously acquiring high-resolution neural recordings and detailed vocal tract measurements. To overcome this challenge, we combined ultrasound and video monitoring of the supralaryngeal articulators (lips, jaw, and tongue) with electrocorticographic recordings from the cortical surface of 4 subjects (3 female, 1 male) to investigate how neural activity in the ventral sensory-motor cortex (vSMC) relates to measured articulator movement kinematics (position, speed, velocity, acceleration) during the production of English vowels. We found that high-gamma activity at many individual vSMC electrodes strongly encoded the kinematics of one or more articulators, but less so vowel formants and vowel identity. Neural population decoding methods further revealed the structure of kinematic features that distinguish vowels. Encoding of articulator kinematics was sparsely distributed across time and primarily occurred around vowel onset and offset. In contrast, encoding was low during the steady-state portion of the vowel, despite sustained neural activity at some electrodes. Significant representations were found for all kinematic parameters, but speed was the most robust. These findings, enabled by direct vocal tract monitoring, provide novel insights into the representation of articulatory kinematic parameters encoded in the vSMC during speech production. SIGNIFICANCE STATEMENT Speaking requires precise control and coordination of the vocal tract articulators (lips, jaw, and tongue). Despite the impressive proficiency with which humans move these articulators during speech production, our understanding of how the brain achieves such control is rudimentary, in part because the movements themselves are difficult to observe.
By simultaneously measuring speech movements and the neural activity that gives rise to them, we demonstrate how neural activity in sensorimotor cortex produces complex, coordinated movements of the vocal tract.
30
Syed MF, Lindquist MA, Pillai JJ, Agarwal S, Gujar SK, Choe AS, Caffo B, Sair HI. Dynamic Functional Connectivity States Between the Dorsal and Ventral Sensorimotor Networks Revealed by Dynamic Conditional Correlation Analysis of Resting-State Functional Magnetic Resonance Imaging. Brain Connect 2017; 7:635-642. [DOI: 10.1089/brain.2017.0533]
Affiliation(s)
- Martin A. Lindquist
- Department of Biostatistics, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland
- Jay J. Pillai
- Division of Neuroradiology, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Shruti Agarwal
- Division of Neuroradiology, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Sachin K. Gujar
- Division of Neuroradiology, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
- Ann S. Choe
- F.M. Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, Baltimore, Maryland
- Brian Caffo
- Department of Biostatistics, Johns Hopkins University Bloomberg School of Public Health, Baltimore, Maryland
- Haris I. Sair
- Division of Neuroradiology, Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, Maryland
31
Distinct Neural Activities in Premotor Cortex during Natural Vocal Behaviors in a New World Primate, the Common Marmoset (Callithrix jacchus). J Neurosci 2017; 36:12168-12179. [PMID: 27903726] [DOI: 10.1523/jneurosci.1646-16.2016]
Abstract
Although evidence from human studies has long indicated the crucial role of the frontal cortex in speech production, it has remained uncertain whether the frontal cortex in nonhuman primates plays a similar role in vocal communication. Previous studies of prefrontal and premotor cortices of macaque monkeys have found neural signals associated with cue- and reward-conditioned vocal production, but not with self-initiated or spontaneous vocalizations (Coudé et al., 2011; Hage and Nieder, 2013), which casts doubt on the role of the frontal cortex of the Old World monkeys in vocal communication. A recent study of marmoset frontal cortex observed modulated neural activities associated with self-initiated vocal production (Miller et al., 2015), but it did not delineate whether these neural activities were specifically attributed to vocal production or if they may result from other nonvocal motor activity such as orofacial motor movement. In the present study, we attempted to resolve these issues and examined single neuron activities in premotor cortex during natural vocal exchanges in the common marmoset (Callithrix jacchus), a highly vocal New World primate. Neural activation and suppression were observed both before and during self-initiated vocal production. Furthermore, by comparing neural activities between self-initiated vocal production and nonvocal orofacial motor movement, we identified a subpopulation of neurons in marmoset premotor cortex that was activated or suppressed by vocal production, but not by orofacial movement. These findings provide clear evidence of the premotor cortex's involvement in self-initiated vocal production in natural vocal behaviors of a New World primate. SIGNIFICANCE STATEMENT Human frontal cortex plays a crucial role in speech production. However, it has remained unclear whether the frontal cortex of nonhuman primates is involved in the production of self-initiated vocalizations during natural vocal communication. 
Using a wireless multichannel neural recording technique, we observed neural activation and suppression in the premotor cortex both before and during self-initiated vocalizations as marmosets, a highly vocal New World primate species, engaged in vocal exchanges with conspecifics. A novel finding of the present study is the discovery of a subpopulation of premotor cortex neurons that was activated by vocal production, but not by orofacial movement. These observations provide clear evidence of the premotor cortex's involvement in vocal production in a New World primate species.
32
Jiang W, Pailla T, Dichter B, Chang EF, Gilja V. Decoding speech using the timing of neural signal modulation. Annu Int Conf IEEE Eng Med Biol Soc 2016:1532-1535. [PMID: 28268618] [DOI: 10.1109/embc.2016.7591002]
Abstract
Brain-machine interfaces (BMIs) have great potential for applications that restore and assist communication for paralyzed individuals. Recently, BMIs that decode speech have gained considerable attention due to their potential for high information transfer rates. In this study, we propose a novel decoding approach based on hidden Markov models (HMMs) that uses the timing of neural signal changes to decode speech. We tested the decoder's performance by predicting vowels from electrocorticographic (ECoG) data of three human subjects. Our results show that timing-based features of ECoG signals are informative of vowel production and enable decoding accuracies significantly above the level of chance. This suggests that leveraging the temporal structure of neural activity to decode speech could play an important role in developing high-performance, robust speech BMIs.
33
Pailla T, Jiang W, Dichter B, Chang EF, Gilja V. ECoG data analyses to inform closed-loop BCI experiments for speech-based prosthetic applications. Annu Int Conf IEEE Eng Med Biol Soc 2016:5713-5716. [PMID: 28269552] [DOI: 10.1109/embc.2016.7592024]
Abstract
Brain Computer Interfaces (BCIs) assist individuals with motor disabilities by enabling them to control prosthetic devices with their neural activity. Performance of closed-loop BCI systems can be improved by using design strategies that leverage structured and task-relevant neural activity. We use data from high density electrocorticography (ECoG) grids implanted in three subjects to study sensory-motor activity during an instructed speech task in which the subjects vocalized three cardinal vowel phonemes. We show how our findings relate to the current understanding of speech physiology and functional organization of human sensory-motor cortex. We investigate the effect of behavioral variations on parameters and performance of the decoding model. Our analyses suggest experimental design strategies that may be critical for speech-based BCI performance.
34
The origins of the vocal brain in humans. Neurosci Biobehav Rev 2017; 77:177-193. [DOI: 10.1016/j.neubiorev.2017.03.014]
35
Remijn GB, Kikuchi M, Yoshimura Y, Shitamichi K, Ueno S, Tsubokawa T, Kojima H, Higashida H, Minabe Y. A Near-Infrared Spectroscopy Study on Cortical Hemodynamic Responses to Normal and Whispered Speech in 3- to 7-Year-Old Children. J Speech Lang Hear Res 2017; 60:465-470. [PMID: 28114676] [DOI: 10.1044/2016_jslhr-h-15-0435]
Abstract
PURPOSE The purpose of this study was to assess cortical hemodynamic response patterns in 3- to 7-year-old children listening to two speech modes: normally vocalized and whispered speech. Understanding whispered speech requires processing of the relatively weak, noisy signal, as well as the cognitive ability to understand the speaker's reason for whispering. METHOD Near-infrared spectroscopy (NIRS) was used to assess changes in cortical oxygenated hemoglobin from 16 typically developing children. RESULTS A profound difference in oxygenated hemoglobin levels between the speech modes was found over left ventral sensorimotor cortex. In particular, over areas that represent speech articulatory body parts and motion, such as the larynx, lips, and jaw, oxygenated hemoglobin was higher for whisper than for normal speech. The weaker stimulus, in terms of sound energy, thus induced the more profound hemodynamic response. This, moreover, occurred over areas involved in speech articulation, even though the children did not overtly articulate speech during measurements. CONCLUSION Because whisper is a special form of communication not often used in daily life, we suggest that the hemodynamic response difference over left ventral sensorimotor cortex resulted from inner (covert) practice or imagination of the different articulatory actions necessary to produce whisper as opposed to normal speech.
Affiliation(s)
- Mitsuru Kikuchi
- Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Yuko Yoshimura
- Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Kiyomi Shitamichi
- Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Sanae Ueno
- Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Tsunehisa Tsubokawa
- Department of Anesthesiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
- Haruyuki Kojima
- Department of Psychology, Kanazawa University, Kanazawa, Japan
- Haruhiro Higashida
- Research Center for Child Mental Development, Kanazawa University, Kanazawa, Japan
- Yoshio Minabe
- Department of Psychiatry and Neurobiology, Graduate School of Medical Science, Kanazawa University, Kanazawa, Japan
36
A contemporary framework of language processing in the human brain in the context of preoperative and intraoperative language mapping. Neuroradiology 2016; 59:69-87. [PMID: 28005160] [DOI: 10.1007/s00234-016-1772-0]
Abstract
INTRODUCTION The emergence of advanced in vivo neuroimaging methods has redefined the understanding of brain function, with a shift from traditional localizationist models to more complex and widely distributed neural networks. In human language processing, the traditional localizationist models of Wernicke and Broca have given way to a dual-stream processing system involving complex networks organized over vast areas of the dominant hemisphere. The current review explores the cortical function and white matter connections of human language processing, as well as their relevance to surgical planning. METHODS We performed a systematic review of the literature with narrative data analysis. RESULTS Although there is significant heterogeneity in the literature over the past century of exploration, modern evidence provides new insight into the true cortical function and white matter anatomy of human language. Intraoperative data and postoperative outcome studies confirm a widely distributed language network extending far beyond the traditional cortical areas of Wernicke and Broca. CONCLUSIONS The anatomic distribution of language networks, based on current theories, is explored to present a modern and clinically relevant interpretation of language function. Within this framework, we present current knowledge regarding the known effects of damage to both cortical and subcortical components of these language networks. Ideally, we hope this framework will provide a common language on which to base future clinical studies of human language function.
37
Kleber B, Friberg A, Zeitouni A, Zatorre R. Experience-dependent modulation of right anterior insula and sensorimotor regions as a function of noise-masked auditory feedback in singers and nonsingers. Neuroimage 2016; 147:97-110. [PMID: 27916664] [DOI: 10.1016/j.neuroimage.2016.11.059]
Abstract
Previous studies on vocal motor production in singing suggest that the right anterior insula (AI) plays a role in experience-dependent modulation of feedback integration. Specifically, when somatosensory input was reduced via anesthesia of the vocal fold mucosa, right AI activity was downregulated in trained singers. In the current fMRI study, we examined how masking of auditory feedback affects pitch-matching accuracy and corresponding brain activity in the same participants. We found that pitch-matching accuracy was unaffected by masking in trained singers yet declined in nonsingers. The corresponding brain region with the most differential and interesting activation pattern was the right AI, which was upregulated during masking in singers but downregulated in nonsingers. Likewise, its functional connectivity with inferior parietal, frontal, and voice-relevant sensorimotor areas was increased in singers yet decreased in nonsingers. These results indicate that singers relied more on somatosensory feedback, whereas nonsingers depended more critically on auditory feedback. When comparing auditory versus somatosensory feedback involvement, the right anterior insula emerged as the only region that corrected intended vocal output by modulating what is heard or felt as a function of singing experience. We propose the right anterior insula as a key node in the brain's singing network for the integration of signals of salience across multiple sensory and cognitive domains to guide vocal behavior.
Collapse
Affiliation(s)
- Boris Kleber
- McGill University - Montreal Neurological Institute, Neuropsychology and Cognitive Neuroscience, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound research (BRAMS), Montreal, QC, Canada; Institut für Medizinische Psychologie und Verhaltensneurobiologie, Universität Tübingen, Tübingen, Germany; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark.
| | - Anders Friberg
- Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden
| | - Anthony Zeitouni
- Department of Otolaryngology-Head and Neck Surgery, MUHC-Royal Victoria Hospital, McGill University, Montreal, QC, Canada
| | - Robert Zatorre
- McGill University - Montreal Neurological Institute, Neuropsychology and Cognitive Neuroscience, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound research (BRAMS), Montreal, QC, Canada
| |
Collapse
|
38
|
Desmurget M, Sirigu A. Revealing humans' sensorimotor functions with electrical cortical stimulation. Philos Trans R Soc Lond B Biol Sci 2016; 370:20140207. [PMID: 26240422 DOI: 10.1098/rstb.2014.0207] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Direct electrical stimulation (DES) of the human brain has been used by neurosurgeons for almost a century. Although this procedure serves only clinical purposes, it generates data that have a great scientific interest. Had DES not been employed, our comprehension of the organization of the sensorimotor systems involved in movement execution, language production, the emergence of action intentionality or the subjective feeling of movement awareness would have been greatly undermined. This does not mean, of course, that DES is a gold standard devoid of limitations and that other approaches are not of primary importance, including electrophysiology, modelling, neuroimaging or psychophysics in patients and healthy subjects. Rather, this indicates that the contribution of DES cannot be restricted, in humans, to the ubiquitous concepts of homunculus and somatotopy. DES is a fundamental tool in our attempt to understand the human brain because it represents a unique method for mapping sensorimotor pathways and interfering with the functioning of localized neural populations during the performance of well-defined behavioural tasks.
Collapse
Affiliation(s)
- Michel Desmurget
- Centre de Neuroscience Cognitive, CNRS, UMR 5229, 67 boulevard Pinel, Bron 69500, France; Université Claude Bernard, Lyon 1, 43 boulevard du 11 novembre 1918, Villeurbanne 69100, France
| | - Angela Sirigu
- Centre de Neuroscience Cognitive, CNRS, UMR 5229, 67 boulevard Pinel, Bron 69500, France; Université Claude Bernard, Lyon 1, 43 boulevard du 11 novembre 1918, Villeurbanne 69100, France
| |
Collapse
|
39
|
Kent RD. Nonspeech Oral Movements and Oral Motor Disorders: A Narrative Review. Am J Speech Lang Pathol 2015; 24:763-89. [PMID: 26126128 PMCID: PMC4698470 DOI: 10.1044/2015_ajslp-14-0179] [Citation(s) in RCA: 64] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/10/2014] [Revised: 04/02/2015] [Accepted: 06/13/2015] [Indexed: 05/25/2023]
Abstract
PURPOSE Speech and other oral functions such as swallowing have been compared and contrasted with oral behaviors variously labeled quasispeech, paraspeech, speechlike, and nonspeech, all of which overlap to some degree in neural control, muscles deployed, and movements performed. Efforts to understand the relationships among these behaviors are hindered by the lack of explicit and widely accepted definitions. This review article offers definitions and taxonomies for nonspeech oral movements and for diverse speaking tasks, both overt and covert. METHOD Review of the literature included searches of Medline, Google Scholar, HighWire Press, and various online sources. Search terms pertained to speech, quasispeech, paraspeech, speechlike, and nonspeech oral movements. Searches also were carried out for associated terms in oral biology, craniofacial physiology, and motor control. RESULTS AND CONCLUSIONS Nonspeech movements have a broad spectrum of clinical applications, including developmental speech and language disorders, motor speech disorders, feeding and swallowing difficulties, obstructive sleep apnea syndrome, trismus, and tardive stereotypies. The role and benefit of nonspeech oral movements are controversial in many oral motor disorders. It is argued that the clinical value of these movements can be elucidated through careful definitions and task descriptions such as those proposed in this review article.
Collapse
Affiliation(s)
- Ray D. Kent
- Waisman Center, University of Wisconsin–Madison
| |
Collapse
|
40
|
Neumann N, Lotze M, Eickhoff SB. Cognitive Expertise: An ALE Meta-Analysis. Hum Brain Mapp 2015; 37:262-72. [PMID: 26467981 DOI: 10.1002/hbm.23028] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2015] [Revised: 10/02/2015] [Accepted: 10/05/2015] [Indexed: 12/17/2022] Open
Abstract
Expert performance constitutes the endpoint of skill acquisition and is accompanied by widespread neuroplastic changes. To reveal common mechanisms of reorganization associated with long-term expertise in a cognitive domain (mental calculation, chess, language, memory, music without motor involvement), we used activation likelihood estimation meta-analysis and compared brain activation of experts to nonexperts. Twenty-six studies matched inclusion criteria, most of which reported an increase and not a decrease of activation foci in experts. Increased activation occurred in the left rolandic operculum (OP 4) and left primary auditory cortex and in bilateral premotor cortex in studies that used auditory stimulation. In studies with visual stimulation, experts showed enhanced activation in the right inferior parietal cortex (area PGp) and the right lingual gyrus. Experts' brain activation patterns seem to be characterized by enhanced or additional activity in domain-specific primary, association, and motor structures, confirming that learning is localized and very specialized.
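The activation likelihood estimation (ALE) approach named in this abstract can be illustrated with a toy sketch: each reported activation focus is modeled as a Gaussian probability blob, and blobs are combined across foci as a union of probabilities. The 1-D grid, coordinates, and kernel width below are illustrative assumptions, not values from the study.

```python
# Toy sketch of the core ALE idea: Gaussian "modeled activation" maps
# per focus, combined as the probability that at least one focus
# activates each grid point. All numbers are made up for illustration.
import numpy as np

grid = np.linspace(0, 100, 101)   # 1-D stand-in for voxel space
foci = [30.0, 32.0, 70.0]         # hypothetical reported coordinates
sigma = 5.0                       # assumed kernel width

# Modeled activation (MA) map per focus: Gaussian peaking at 1
ma_maps = [np.exp(-((grid - f) ** 2) / (2 * sigma ** 2)) for f in foci]

# ALE score: union of probabilities across foci
ale = 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)
```

Points near the pair of adjacent foci accumulate evidence from both, which is what lets convergence across studies stand out from isolated foci.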
Collapse
Affiliation(s)
- Nicola Neumann
- Institute of Diagnostic Radiology and Neuroradiology, Functional Imaging Unit, Ernst-Moritz-Arndt-University of Greifswald, Greifswald, Germany
| | - Martin Lotze
- Institute of Diagnostic Radiology and Neuroradiology, Functional Imaging Unit, Ernst-Moritz-Arndt-University of Greifswald, Greifswald, Germany
| | - Simon B Eickhoff
- Cognitive Neuroscience Group, Institute of Clinical Neuroscience and Medical Psychology, Heinrich-Heine University, Düsseldorf, Germany; Brain Network Modeling Group, Institute of Neuroscience and Medicine (INM-1), Research Center Jülich, Jülich, Germany
| |
Collapse
|
41
|
Breshears JD, Molinaro AM, Chang EF. A probabilistic map of the human ventral sensorimotor cortex using electrical stimulation. J Neurosurg 2015; 123:340-9. [DOI: 10.3171/2014.11.jns14889] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
OBJECT
The human ventral sensorimotor cortex (vSMC) is involved in facial expression, mastication, and swallowing, as well as the dynamic and highly coordinated movements of human speech production. However, vSMC organization remains poorly understood, and previously published population-driven maps of its somatotopy do not accurately reflect the variability across individuals in a quantitative, probabilistic fashion. The goal of this study was to describe the responses to electrical stimulation of the vSMC, generate probabilistic maps of function in the vSMC, and quantify the variability across individuals.
METHODS
Photographic, video, and stereotactic MRI data of intraoperative electrical stimulation of the vSMC were collected for 33 patients undergoing awake craniotomy. Stimulation sites were converted to a 2D coordinate system based on anatomical landmarks. Motor, sensory, and speech stimulation responses were reviewed and classified. Probabilistic maps of stimulation responses were generated, and spatial variance was quantified.
RESULTS
In 33 patients, the authors identified 194 motor, 212 sensory, 61 speech-arrest, and 27 mixed responses. Responses were complex, stereotyped, and mostly nonphysiological movements, involving hand, orofacial, and laryngeal musculature. Within individuals, the presence of oral movement representations varied; however, the dorsal-ventral order was always preserved. The most robust motor responses were jaw (probability 0.85), tongue (0.64), lips (0.58), and throat (0.52). Vocalizations were seen in 6 patients (0.18), more dorsally near lip and dorsal throat areas. Sensory responses were spatially dispersed; however, patients' subjective reports were highly precise in localization within the mouth. The most robust responses included tongue (0.82) and lips (0.42). The probability of speech arrest was 0.85, highest 15–20 mm anterior to the central sulcus and just dorsal to the sylvian fissure, in the anterior precentral gyrus or pars opercularis.
CONCLUSIONS
The authors report probabilistic maps of function in the human vSMC based on intraoperative cortical electrical stimulation. These results define the expected range of mapping outcomes in the vSMC of a single individual and shed light on the functional organization of the vSMC supporting speech motor control and nonspeech functions.
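The probabilistic maps described above amount, at their core, to counting how many patients showed a given response and dividing by the number of patients tested. A minimal sketch of that computation, with hypothetical response labels and counts rather than data from the study:

```python
# Hypothetical sketch of per-response probability estimation across
# patients, in the spirit of the probabilistic mapping described above.
# The patient data are invented for illustration.
from collections import defaultdict

def response_probability(patient_responses, n_patients):
    """patient_responses: one set of observed response labels per patient."""
    counts = defaultdict(int)
    for responses in patient_responses:
        for label in responses:
            counts[label] += 1
    return {label: c / n_patients for label, c in counts.items()}

patients = [
    {"jaw", "tongue"},
    {"jaw", "lips"},
    {"jaw", "tongue", "throat"},
    {"tongue"},
]
probs = response_probability(patients, len(patients))
# e.g. probs["jaw"] == 0.75
```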
Collapse
Affiliation(s)
| | | | - Edward F. Chang
- Departments of Neurological Surgery,
- Physiology, and
- Center for Integrative Neuroscience, University of California, San Francisco; and
- Center for Neural Engineering and Prostheses, University of California, Berkeley and San Francisco, California
| |
Collapse
|
42
|
Krishnan S, Leech R, Mercure E, Lloyd-Fox S, Dick F. Convergent and Divergent fMRI Responses in Children and Adults to Increasing Language Production Demands. Cereb Cortex 2014; 25:3261-77. [PMID: 24907249 PMCID: PMC4585486 DOI: 10.1093/cercor/bhu120] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022] Open
Abstract
In adults, patterns of neural activation associated with perhaps the most basic language skill—overt object naming—are extensively modulated by the psycholinguistic and visual complexity of the stimuli. Do children's brains react similarly when confronted with increasing processing demands, or do they solve this problem in a different way? Here we scanned 37 children aged 7–13 and 19 young adults who performed a well-normed picture-naming task with 3 levels of difficulty. While neural organization for naming was largely similar in childhood and adulthood, adults had greater activation in all naming conditions over inferior temporal gyri and superior temporal gyri/supramarginal gyri. Manipulating naming complexity affected adults and children quite differently: neural activation, especially over the dorsolateral prefrontal cortex, showed complexity-dependent increases in adults, but complexity-dependent decreases in children. These represent fundamentally different responses to the linguistic and conceptual challenges of a simple naming task that makes no demands on literacy or metalinguistics. We discuss how these neural differences might result from different cognitive strategies used by adults and children during lexical retrieval/production as well as developmental changes in brain structure and functional connectivity.
Collapse
Affiliation(s)
- Saloni Krishnan
- Birkbeck-UCL Centre for NeuroImaging, London, UK; Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
| | - Robert Leech
- Department of Neurosciences and Mental Health, Imperial College London, London, UK
| | | | - Sarah Lloyd-Fox
- Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
| | - Frederic Dick
- Birkbeck-UCL Centre for NeuroImaging, London, UK; Centre for Brain and Cognitive Development, Birkbeck, University of London, London, UK
| |
Collapse
|
43
|
Bouchard KE, Chang EF. Neural decoding of spoken vowels from human sensory-motor cortex with high-density electrocorticography. Annu Int Conf IEEE Eng Med Biol Soc 2014; 2014:6782-6785. [PMID: 25571553 DOI: 10.1109/embc.2014.6945185] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
We present the first demonstration of single-trial neural decoding of vowel acoustic features during speech production with high performance. The ability to predict trial-by-trial fluctuations in speech production was facilitated by using high-density, large-area electrocorticography (ECoG) combined with an adaptive principal components regression. In experiments from two human neurosurgical patients with a high-density 256-channel ECoG grid implanted over speech cortices, we demonstrate that as much as 81% of the acoustic variability across vowels could be accurately predicted from the spatial patterns of neural activity during speech production. These results demonstrate continuous, single-trial decoding of vowel acoustics.
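The principal components regression named in this abstract can be sketched in its basic (non-adaptive) form: project the high-dimensional neural features onto their leading principal components, then fit ordinary least squares from component scores to an acoustic feature. All data below are synthetic, and the dimensions are assumptions loosely matching the 256-channel grid.

```python
# Minimal sketch of principal components regression (PCR) for decoding
# a continuous acoustic feature from many-channel neural activity.
# Synthetic data only; nothing here reproduces the study's analysis.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_channels, n_components = 200, 256, 10

# Synthetic "neural" data and a target driven by a hidden linear weight
X = rng.standard_normal((n_trials, n_channels))
true_w = rng.standard_normal(n_channels)
y = X @ true_w + 0.1 * rng.standard_normal(n_trials)

# PCR: center, take top principal components via SVD, regress on scores
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:n_components].T          # trial-by-component scores
beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
y_hat = scores @ beta + y.mean()

# Fraction of variance explained on the training data
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Restricting the regression to the leading components is what keeps the fit stable when channels far outnumber trials, which is the usual regime for high-density ECoG.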
Collapse
|