1
Meyer AM, Snider SF, Faria AV, Tippett DC, Saloma R, Turkeltaub PE, Hillis AE, Friedman RB. Cortical and Behavioral Correlates of Alexia in Primary Progressive Aphasia and Alzheimer's Disease. Neuropsychologia 2025; 207:109066. [PMID: 39756511] [DOI: 10.1016/j.neuropsychologia.2025.109066] [Received: 04/14/2024] [Revised: 01/02/2025] [Accepted: 01/03/2025]
Abstract
The underlying causes of reading impairment in neurodegenerative disease are not well understood. The current study seeks to determine the causes of surface alexia and phonological alexia in primary progressive aphasia (PPA) and typical (amnestic) Alzheimer's disease (AD). Participants included 24 with the logopenic variant (lvPPA), 17 with the nonfluent/agrammatic variant (nfvPPA), 12 with the semantic variant (svPPA), 19 with unclassifiable PPA (uPPA), and 16 with AD. Measures of Surface Alexia and Phonological Alexia were computed by subtracting control-condition word reading accuracy from irregular word reading and pseudoword reading accuracy, respectively. Cases of Surface Alexia were common in svPPA, lvPPA, uPPA, and AD, but not in nfvPPA. At the subgroup level, average Surface Alexia was significantly higher in svPPA, lvPPA, and uPPA, compared to unimpaired age-matched controls. Cases of Phonological Alexia were common in nfvPPA, lvPPA, and uPPA, and average Phonological Alexia was significantly higher in these subgroups, compared to unimpaired age-matched controls. Behavioral regression results indicated that Surface Alexia can be predicted by impairment in the lexical-semantic processing of nouns, suggesting that a lexical-semantic deficit is required for the development of surface alexia, while cortical volume regression results indicated that Surface Alexia can be predicted by reduced volume in the left Superior Temporal Pole, which has been associated with conceptual-semantic processing. Behavioral regression results indicated that Phonological Alexia can be predicted by impairment on Pseudoword Repetition, suggesting that this type of reading difficulty may be due to impaired phonological processing. The cortical volume regression results suggested that Phonological Alexia can be predicted by reduced volume within the left Inferior Temporal Gyrus and the left Angular Gyrus, areas that are associated with lexical-semantic processing and phonological processing, respectively.
Affiliation(s)
- Aaron M Meyer
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center.
- Sarah F Snider
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
- Donna C Tippett
- Department of Physical Medicine and Rehabilitation, Johns Hopkins University; Department of Neurology, Johns Hopkins University; Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University
- Ryan Saloma
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
- Peter E Turkeltaub
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
- Rhonda B Friedman
- Center for Aphasia Research and Rehabilitation, Georgetown University Medical Center
2
Roelofs A. Wernicke's functional neuroanatomy model of language turns 150: what became of its psychological reflex arcs? Brain Struct Funct 2024; 229:2079-2096. [PMID: 38581582] [PMCID: PMC11611947] [DOI: 10.1007/s00429-024-02785-5] [Received: 08/25/2023] [Accepted: 03/05/2024] [Indexed: 04/08/2024]
Abstract
Wernicke (Der aphasische Symptomencomplex: Eine psychologische Studie auf anatomischer Basis. Cohn und Weigert, Breslau. https://wellcomecollection.org/works/dwv5w9rw , 1874) proposed a model of the functional neuroanatomy of spoken word repetition, production, and comprehension. At the heart of this epoch-making model are psychological reflex arcs underpinned by fiber tracts connecting sensory to motor areas. Here, I evaluate the central assumption of psychological reflex arcs in light of what we have learned about language in the brain during the past 150 years. I first describe Wernicke's 1874 model and the evidence he presented for it. Next, I discuss his updates of the model published in 1886 and posthumously in 1906. Although the model had an enormous immediate impact, it lost influence after the First World War. Unresolved issues included the anatomical underpinnings of the psychological reflex arcs, the role of auditory images in word production, and the sufficiency of psychological reflex arcs, which was questioned by Wundt (Grundzüge der physiologischen Psychologie. Engelmann, Leipzig. http://vlp.mpiwg-berlin.mpg.de/references?id=lit46 , 1874; Grundzüge der physiologischen Psychologie (Vol. 1, 5th ed.). Engelmann, Leipzig. http://vlp.mpiwg-berlin.mpg.de/references?id=lit806 , 1902). After a long dormant period, Wernicke's model was revived by Geschwind (Science 170:940-944. https://doi.org/10.1126/science.170.3961.940 , 1970; Selected papers on language and the brain. Reidel, Dordrecht, 1974), who proposed a version of it that differed in several important respects from Wernicke's original. Finally, I describe how new evidence from modern research has led to a novel view on language in the brain, supplementing contemporary equivalents of psychological reflex arcs by other mechanisms such as attentional control and assuming different neuroanatomical underpinnings. In support of this novel view, I report new analyses of patient data and computer simulations using the WEAVER++/ARC model (Roelofs 2014, 2022) that incorporates attentional control and integrates the new evidence.
Affiliation(s)
- Ardi Roelofs
- Donders Institute for Brain, Cognition and Behaviour, Centre for Cognition, Radboud University, Thomas van Aquinostraat 4, 6525 GD, Nijmegen, The Netherlands.
3
Sheng G, Kuang M, Yang R, Zou Y. Association of metabolic score for insulin resistance with progression or regression of prediabetes: evidence from a multicenter Chinese medical examination cohort study. Front Endocrinol (Lausanne) 2024; 15:1388751. [PMID: 39600950] [PMCID: PMC11589820] [DOI: 10.3389/fendo.2024.1388751] [Received: 02/20/2024] [Accepted: 10/23/2024] [Indexed: 11/29/2024]
Abstract
Objective Few studies have evaluated changes in blood glucose status in individuals with prediabetes, and this study aimed to analyze the association between the metabolic score for insulin resistance (MetS-IR) and the progression or regression of prediabetes. Methods This retrospective cohort study used research data from medical examination institutions under the Rich Healthcare Group in 32 regions across 11 cities in China. Progression of prediabetes to diabetes and regression to normal fasting glucose (NFG) were defined based on glycemic changes during follow-up. The association between MetS-IR and the progression or regression of prediabetes was analyzed using multivariate Cox regression, restricted cubic splines, and piecewise regression models. Results Data from 15,421 prediabetic subjects were analyzed. Over an average follow-up of 2.96 years, 6,481 individuals (42.03%) returned to NFG, and 2,424 (15.72%) progressed to diabetes. After controlling for confounding factors, an increase in MetS-IR was observed to increase the risk of diabetes onset in the prediabetic population, whereas a decrease in MetS-IR had a protective effect for returning to NFG. Additionally, a nonlinear relationship between MetS-IR and prediabetes regression was observed, with 37.22 identified as the inflection point; prediabetes regression rates were significantly higher before this point and markedly decreased after it. Conclusion For individuals with prediabetes, an increase in MetS-IR may lead to an increased risk of diabetes; conversely, a decrease in MetS-IR enhances the protective effect for returning to NFG, and keeping MetS-IR below 37.22 is important for the regression of prediabetes.
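The abstract does not restate the MetS-IR formula. Using the definition standard in the insulin-resistance literature, MetS-IR = ln(2 × FPG + TG) × BMI / ln(HDL-C), with fasting plasma glucose (FPG), triglycerides (TG), and HDL cholesterol in mg/dL, a minimal sketch (illustrative values only, not study data):

```python
from math import log

def mets_ir(fpg_mg_dl: float, tg_mg_dl: float, hdl_mg_dl: float, bmi: float) -> float:
    """Metabolic score for insulin resistance.

    Standard definition from the literature (not restated in the abstract):
    ln(2 * FPG + TG) * BMI / ln(HDL-C), with glucose and lipids in mg/dL.
    """
    return log(2 * fpg_mg_dl + tg_mg_dl) * bmi / log(hdl_mg_dl)

# Illustrative values (hypothetical, not from the study cohort):
score = mets_ir(fpg_mg_dl=100, tg_mg_dl=150, hdl_mg_dl=50, bmi=25)  # ~37.4, just above the 37.22 inflection point
```

Higher FPG or TG raises the score, while higher HDL-C lowers it, matching the direction of risk described in the abstract.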
Affiliation(s)
- Guotai Sheng
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi, China
- Maobin Kuang
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi, China
- Ruijuan Yang
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi, China
- Department of Endocrinology, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi, China
- Yang Zou
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People’s Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi, China
4
Zou Y, Lu S, Li D, Huang X, Wang C, Xie G, Duan L, Yang H. Exposure of cumulative atherogenic index of plasma and the development of prediabetes in middle-aged and elderly individuals: evidence from the CHARLS cohort study. Cardiovasc Diabetol 2024; 23:355. [PMID: 39350154] [PMCID: PMC11443941] [DOI: 10.1186/s12933-024-02449-y] [Received: 08/07/2024] [Accepted: 09/19/2024] [Indexed: 10/04/2024]
Abstract
BACKGROUND The impact of dynamic changes in the degree of atherosclerosis on the development of prediabetes remains unclear. This study aims to investigate the association between cumulative atherogenic index of plasma (CumAIP) exposure during follow-up and the development of prediabetes in middle-aged and elderly individuals. METHODS A total of 2,939 prediabetic participants from the first wave of the China Health and Retirement Longitudinal Study (CHARLS) were included. The outcomes for these patients, including progression to diabetes and regression to normal fasting glucose (NFG), were determined using data from the third wave. CumAIP was calculated as the ratio of the average AIP values measured during the first and third waves to the total exposure duration. The association between CumAIP and the development of prediabetes was analyzed using multivariable Cox regression and restricted cubic spline (RCS) regression. RESULTS During a median follow-up period of 3 years, 15.21% of prediabetic patients progressed to diabetes, and 22.12% regressed to NFG. Among the groups categorized by CumAIP quartiles, the proportion of prediabetes progressing to diabetes gradually increased (Q1: 10.61%, Q2: 13.62%, Q3: 15.65%, Q4: 20.95%), while the proportion regressing to NFG gradually decreased (Q1: 23.54%, Q2: 23.71%, Q3: 22.18%, Q4: 19.05%). Multivariable-adjusted Cox regression showed a significant positive linear correlation between high CumAIP exposure and prediabetes progression, and a significant negative linear correlation with prediabetes regression. Furthermore, in a stratified analysis, it was found that compared to married individuals, those who were unmarried (including separated, divorced, widowed, or never married) had a relatively higher risk of CumAIP-related diabetes. CONCLUSION CumAIP is closely associated with the development of prediabetes. High CumAIP exposure not only increases the risk of prediabetes progression but also hinders its regression within a certain range. These findings suggest that monitoring and maintaining appropriate AIP levels may help prevent the deterioration of blood glucose levels.
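The AIP formula is not restated in the abstract; the standard definition is AIP = log10(TG / HDL-C), with both lipids in mmol/L. A minimal sketch of AIP and of the cumulative measure exactly as the abstract describes it (average of the two wave measurements taken over the exposure duration); note that cumulative-exposure conventions vary, so the original paper should be consulted for the authoritative definition:

```python
from math import log10

def aip(tg_mmol_l: float, hdl_mmol_l: float) -> float:
    """Atherogenic index of plasma: log10(TG / HDL-C), both in mmol/L
    (the standard definition; not restated in the abstract)."""
    return log10(tg_mmol_l / hdl_mmol_l)

def cum_aip(aip_wave1: float, aip_wave3: float, years: float) -> float:
    """Cumulative AIP, following the abstract's wording: the average of the
    wave-1 and wave-3 AIP values relative to the total exposure duration.
    (Other cumulative-exposure studies use average * duration instead.)"""
    return (aip_wave1 + aip_wave3) / 2 / years

# Illustrative values (hypothetical, not CHARLS data):
baseline = aip(tg_mmol_l=1.7, hdl_mmol_l=1.0)
```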
Affiliation(s)
- Yang Zou
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Song Lu
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, China
- Dongdong Li
- Department of Ultrasound, The Second Affiliated Hospital of Nanchang University, Nanchang, Jiangxi Province, China
- Xin Huang
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, China
- Chao Wang
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, China
- Guobo Xie
- Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Lihua Duan
- Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China.
- Jiangxi Province Key Laboratory of Immunity and Inflammation, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China.
- Hongyi Yang
- Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China.
5
Hsieh JK, Prakash PR, Flint RD, Fitzgerald Z, Mugler E, Wang Y, Crone NE, Templer JW, Rosenow JM, Tate MC, Betzel R, Slutzky MW. Cortical sites critical to language function act as connectors between language subnetworks. Nat Commun 2024; 15:7897. [PMID: 39284848] [PMCID: PMC11405775] [DOI: 10.1038/s41467-024-51839-z] [Received: 06/17/2023] [Accepted: 08/15/2024] [Indexed: 09/20/2024]
Abstract
Historically, eloquent functions have been viewed as localized to focal areas of human cerebral cortex, while more recent studies suggest they are encoded by distributed networks. We examined the network properties of cortical sites defined by stimulation to be critical for speech and language, using electrocorticography from sixteen participants during word-reading. We discovered distinct network signatures for sites where stimulation caused speech arrest and language errors. Both demonstrated lower local and global connectivity, whereas sites causing language errors exhibited higher inter-community connectivity, identifying them as connectors between modules in the language network. We used machine learning to classify these site types with reasonably high accuracy, even across participants, suggesting that a site's pattern of connections within the task-activated language network helps determine its importance to function. These findings help to bridge the gap in our understanding of how focal cortical stimulation interacts with complex brain networks to elicit language deficits.
Affiliation(s)
- Jason K Hsieh
- Department of Neurosurgery, Cleveland Clinic Foundation, Cleveland, OH, 44195, USA
- Department of Neurosurgery, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Prashanth R Prakash
- Department of Biomedical Engineering, Northwestern University, Chicago, IL, 60611, USA
- Robert D Flint
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Zachary Fitzgerald
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Emily Mugler
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Yujing Wang
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, 21287, USA
- Nathan E Crone
- Department of Neurology, Johns Hopkins University School of Medicine, Baltimore, MD, 21287, USA
- Jessica W Templer
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Joshua M Rosenow
- Department of Neurosurgery, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Matthew C Tate
- Department of Neurosurgery, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA
- Richard Betzel
- Department of Psychological and Brain Sciences, Cognitive Science Program, Program in Neuroscience, and Network Science Institute, Indiana University, Bloomington, IN, 47401, USA
- Marc W Slutzky
- Department of Biomedical Engineering, Northwestern University, Chicago, IL, 60611, USA.
- Department of Neurology, Northwestern University Feinberg School of Medicine, Chicago, IL, 60611, USA.
- Department of Neuroscience, Northwestern University, Chicago, IL, 60611, USA.
- Department of Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL, 60611, USA.
6
Wang A, Yan X, Feng G, Cao F. Shared and task-specific brain functional differences across multiple tasks in children with developmental dyslexia. Neuropsychologia 2024; 201:108935. [PMID: 38848989] [DOI: 10.1016/j.neuropsychologia.2024.108935] [Received: 01/16/2024] [Revised: 06/04/2024] [Accepted: 06/05/2024] [Indexed: 06/09/2024]
Abstract
Different tasks have been used in examining the neural functional differences associated with developmental dyslexia (DD), and consequently, different findings have been reported. However, very few studies have systematically compared multiple tasks to understand which specific task differences each brain region is associated with. In this study, we employed an auditory rhyming task, a visual rhyming task, and a visual spelling task to investigate shared and task-specific neural differences in Chinese children with DD. First, we found that children with DD had reduced activation in the opercular part of the left inferior frontal gyrus (IFG) only in the two rhyming tasks, suggesting impaired phonological analysis. Children with DD showed functional differences in the right lingual gyrus/inferior occipital gyrus only in the two visual tasks, suggesting deficiency in their visuo-orthographic processing. Moreover, children with DD showed reduced activation in the left dorsal inferior frontal gyrus and increased activation in the right precentral gyrus across all three tasks, suggesting neural signatures of DD in Chinese. In summary, our study successfully separated brain regions associated with differences in orthographic processing, phonological processing, and general lexical processing in DD. It advances our understanding of the neural mechanisms of DD.
Affiliation(s)
- Anqi Wang
- Department of Psychology, Sun Yat-Sen University, China
- Xiaohui Yan
- Department of Psychology, the University of Hong Kong, China; State Key Lab of Brain and Cognitive Sciences, the University of Hong Kong, China
- Guoyan Feng
- Department of Psychology, Sun Yat-Sen University, China; School of Management, Guangzhou Xinhua University, China
- Fan Cao
- Department of Psychology, the University of Hong Kong, China; State Key Lab of Brain and Cognitive Sciences, the University of Hong Kong, China.
7
Beach SD, Tang DL, Kiran S, Niziolek CA. Pars Opercularis Underlies Efferent Predictions and Successful Auditory Feedback Processing in Speech: Evidence From Left-Hemisphere Stroke. Neurobiol Lang (Camb) 2024; 5:454-483. [PMID: 38911464] [PMCID: PMC11192514] [DOI: 10.1162/nol_a_00139] [Received: 10/13/2023] [Accepted: 02/07/2024] [Indexed: 06/25/2024]
Abstract
Hearing one's own speech allows for acoustic self-monitoring in real time. Left-hemisphere motor planning regions are thought to give rise to efferent predictions that can be compared to true feedback in sensory cortices, resulting in neural suppression commensurate with the degree of overlap between predicted and actual sensations. Sensory prediction errors thus serve as a possible mechanism of detection of deviant speech sounds, which can then feed back into corrective action, allowing for online control of speech acoustics. The goal of this study was to assess the integrity of this detection-correction circuit in persons with aphasia (PWA) whose left-hemisphere lesions may limit their ability to control variability in speech output. We recorded magnetoencephalography (MEG) while 15 PWA and age-matched controls spoke monosyllabic words and listened to playback of their utterances. From this, we measured speaking-induced suppression of the M100 neural response and related it to lesion profiles and speech behavior. Both speaking-induced suppression and cortical sensitivity to deviance were preserved at the group level in PWA. PWA with more spared tissue in pars opercularis had greater left-hemisphere neural suppression and greater behavioral correction of acoustically deviant pronunciations, whereas sparing of superior temporal gyrus was not related to neural suppression or acoustic behavior. In turn, PWA who made greater corrections had fewer overt speech errors in the MEG task. Thus, the motor planning regions that generate the efferent prediction are integral to performing corrections when that prediction is violated.
Affiliation(s)
- Ding-lan Tang
- Waisman Center, The University of Wisconsin–Madison
- Academic Unit of Human Communication, Development, and Information Sciences, University of Hong Kong, Hong Kong, SAR China
- Swathi Kiran
- Department of Speech, Language & Hearing Sciences, Boston University
- Caroline A. Niziolek
- Waisman Center, The University of Wisconsin–Madison
- Department of Communication Sciences and Disorders, The University of Wisconsin–Madison
8
Angrick M, Luo S, Rabbani Q, Candrea DN, Shah S, Milsap GW, Anderson WS, Gordon CR, Rosenblatt KR, Clawson L, Tippett DC, Maragakis N, Tenore FV, Fifer MS, Hermansky H, Ramsey NF, Crone NE. Online speech synthesis using a chronically implanted brain-computer interface in an individual with ALS. Sci Rep 2024; 14:9617. [PMID: 38671062] [PMCID: PMC11053081] [DOI: 10.1038/s41598-024-60277-2] [Received: 10/19/2023] [Accepted: 04/21/2024] [Indexed: 04/28/2024]
Abstract
Brain-computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a man with impaired articulation due to amyotrophic lateral sclerosis (ALS), participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant's voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.
Affiliation(s)
- Miguel Angrick
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- Shiyu Luo
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Qinwan Rabbani
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Daniel N Candrea
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Samyak Shah
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Griffin W Milsap
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- William S Anderson
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Chad R Gordon
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Section of Neuroplastic and Reconstructive Surgery, Department of Plastic Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Kathryn R Rosenblatt
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Anesthesiology & Critical Care Medicine, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Lora Clawson
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Donna C Tippett
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nicholas Maragakis
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Francesco V Tenore
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Matthew S Fifer
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Hynek Hermansky
- Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, MD, USA
- Human Language Technology Center of Excellence, The Johns Hopkins University, Baltimore, MD, USA
- Nick F Ramsey
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Nathan E Crone
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA.
9
Anastasopoulou I, Cheyne DO, van Lieshout P, Johnson BW. Decoding kinematic information from beta-band motor rhythms of speech motor cortex: a methodological/analytic approach using concurrent speech movement tracking and magnetoencephalography. Front Hum Neurosci 2024; 18:1305058. [PMID: 38646159] [PMCID: PMC11027130] [DOI: 10.3389/fnhum.2024.1305058] [Received: 09/30/2023] [Accepted: 02/26/2024] [Indexed: 04/23/2024]
Abstract
Introduction Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances /ipa/ and /api/, produced at normal and faster rates. Results The results show that (1) speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8-12 Hz) and beta band (13-30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern analytic techniques. Discussion These results show that MASK provides the capability for deriving subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-registration with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system.
Affiliation(s)
- Douglas Owen Cheyne
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Hospital for Sick Children Research Institute, Toronto, ON, Canada
- Pascal van Lieshout
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
10
Yang H, Kuang M, Qiu J, He S, Yu C, Sheng G, Zou Y. Relative importance of triglyceride glucose index combined with body mass index in predicting recovery from prediabetic state to normal fasting glucose: a cohort analysis based on a Chinese physical examination population. Lipids Health Dis 2024; 23:71. [PMID: 38459527] [PMCID: PMC10921811] [DOI: 10.1186/s12944-024-02060-w] [Received: 01/17/2024] [Accepted: 02/27/2024] [Indexed: 03/10/2024]
Abstract
BACKGROUND Prediabetes is a high-risk state for diabetes, and numerous studies have shown that the body mass index (BMI) and the triglyceride-glucose (TyG) index play significant roles in predicting the risk of impaired blood glucose metabolism. This study aims to evaluate the relative importance of the TyG index combined with BMI (TyG-BMI) in predicting recovery from prediabetic status to normal blood glucose levels. METHODS A total of 25,397 prediabetic subjects were recruited from 32 regions across China. Normal fasting glucose (NFG), prediabetes, and diabetes were defined according to the American Diabetes Association (ADA) criteria. After normalizing the independent variables, the impact of TyG-BMI on the recovery or progression of prediabetes was analyzed using Cox regression models. Receiver operating characteristic (ROC) curve analysis was used to visualize and compare the predictive value of TyG-BMI and its constituent components for prediabetes recovery/progression. RESULTS During an average observation period of 2.96 years, 10,305 individuals (40.58%) remained in the prediabetic state, 11,278 individuals (44.41%) recovered to NFG, and 3,814 individuals (15.02%) progressed to diabetes. Multivariate Cox regression analysis demonstrated that TyG-BMI was negatively associated with recovery from prediabetes to NFG and positively associated with progression from prediabetes to diabetes. Further ROC analysis revealed that TyG-BMI had greater predictive value for prediabetes recovering to NFG or progressing to diabetes than either the TyG index or BMI alone. Specifically, the TyG-BMI threshold for predicting prediabetes recovery was 214.68, while the threshold for predicting prediabetes progression was 220.27. Additionally, the relationship of TyG-BMI with prediabetes recovering to NFG or progressing to diabetes differed significantly across age subgroups.
In summary, TyG-BMI is better suited to assessing prediabetes recovery or progression in younger populations (< 45 years old). CONCLUSIONS This study, for the first time, reveals the significant impact and predictive value of the TyG index in combination with BMI on recovery from prediabetic status to normal blood glucose levels. From the perspective of prediabetes intervention, maintaining TyG-BMI below the threshold of 214.68 holds crucial significance.
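For concreteness, the two quantities combined above are conventionally computed as TyG = ln(fasting triglycerides [mg/dL] × fasting plasma glucose [mg/dL] / 2) and TyG-BMI = TyG × BMI. The sketch below illustrates these commonly used definitions; it is an illustration of the published formulas, not code from the study, and the example values are hypothetical:

```python
import math

def tyg_index(tg_mgdl: float, fpg_mgdl: float) -> float:
    """Triglyceride-glucose index: ln(TG [mg/dL] * FPG [mg/dL] / 2)."""
    return math.log(tg_mgdl * fpg_mgdl / 2)

def tyg_bmi(tg_mgdl: float, fpg_mgdl: float, weight_kg: float, height_m: float) -> float:
    """TyG index weighted by body mass index (kg/m^2)."""
    bmi = weight_kg / height_m ** 2
    return tyg_index(tg_mgdl, fpg_mgdl) * bmi

# Hypothetical example: TG 150 mg/dL, FPG 100 mg/dL, 70 kg, 1.75 m
# gives a TyG-BMI of roughly 204, below the 214.68 recovery threshold reported above.
```

Under the study's thresholds, a value below 214.68 would favor reversion to NFG, while a value above 220.27 would signal elevated progression risk.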
Affiliation(s)
- Hongyi Yang
- Department of Ultrasound, the Second Affiliated Hospital of Nanchang University, Nanchang, Jiangxi Province, China
- Maobin Kuang
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, China
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Jiajun Qiu
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, China
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Shiming He
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, China
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Changhui Yu
- Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, China
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China
- Guotai Sheng
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China.
- Yang Zou
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, China.
11
Nie JZ, Flint RD, Prakash P, Hsieh JK, Mugler EM, Tate MC, Rosenow JM, Slutzky MW. High-Gamma Activity Is Coupled to Low-Gamma Oscillations in Precentral Cortices and Modulates with Movement and Speech. eNeuro 2024; 11:ENEURO.0163-23.2023. [PMID: 38242691 PMCID: PMC10867721 DOI: 10.1523/eneuro.0163-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2023] [Revised: 10/26/2023] [Accepted: 12/06/2023] [Indexed: 01/21/2024] Open
Abstract
Planning and executing motor behaviors requires coordinated neural activity among multiple cortical and subcortical regions of the brain. Phase-amplitude coupling between the high-gamma band amplitude and the phase of low frequency oscillations (theta, alpha, beta) has been proposed to reflect neural communication, as has synchronization of low-gamma oscillations. However, coupling between low-gamma and high-gamma bands has not been investigated. Here, we measured phase-amplitude coupling between low- and high-gamma in monkeys performing a reaching task and in humans either performing finger-flexion or word-reading tasks. We found significant coupling between low-gamma phase and high-gamma amplitude in multiple sensorimotor and premotor cortices of both species during all tasks. This coupling modulated with the onset of movement. These findings suggest that interactions between the low and high gamma bands are markers of network dynamics related to movement and speech generation.
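Coupling between low-gamma phase and high-gamma amplitude, as measured above, is commonly quantified with a Tort-style modulation index: bin the high-gamma envelope by low-gamma phase and measure how far the binned amplitude distribution departs from uniform. The sketch below shows that general recipe, not the authors' exact pipeline; the band edges, filter order, and bin count are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_modulation_index(x, fs, phase_band=(30, 50), amp_band=(70, 150), n_bins=18):
    """Tort-style modulation index between low-gamma phase and high-gamma amplitude.

    Returns the KL divergence of the phase-binned amplitude distribution from
    uniform, normalized to [0, 1]; assumes enough data that every bin is populated.
    """
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    bins = np.digitize(phase, np.linspace(-np.pi, np.pi, n_bins + 1)) - 1
    bins = np.clip(bins, 0, n_bins - 1)
    mean_amp = np.array([amp[bins == k].mean() for k in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    return float(np.sum(p * np.log(p * n_bins)) / np.log(n_bins))
```

On a synthetic signal whose high-gamma envelope is modulated at a low-gamma rate, this index comes out clearly larger than on an unmodulated control, which is the comparison the study makes against baseline periods.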
Affiliation(s)
- Jeffrey Z Nie
- Southern Illinois University School of Medicine, Springfield 62794, Illinois
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Robert D Flint
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Prashanth Prakash
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Jason K Hsieh
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Neurological Surgery, Northwestern University, Chicago 60611, Illinois
- Department of Neurosurgery, Neurological Institute, Cleveland Clinic Foundation, Cleveland, Ohio
- Emily M Mugler
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Matthew C Tate
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Neurological Surgery, Northwestern University, Chicago 60611, Illinois
- Joshua M Rosenow
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Neurological Surgery, Northwestern University, Chicago 60611, Illinois
- Physical Medicine & Rehabilitation, Northwestern University, Chicago 60611, Illinois
- Shirley Ryan AbilityLab, Chicago 60611, Illinois
- Marc W Slutzky
- Departments of Neurology, Northwestern University, Chicago 60611, Illinois
- Physical Medicine & Rehabilitation, Northwestern University, Chicago 60611, Illinois
- Neuroscience, Northwestern University, Chicago 60611, Illinois
- Shirley Ryan AbilityLab, Chicago 60611, Illinois
- Department of Biomedical Engineering, Northwestern University, Evanston 60201, Illinois
12
Lorca-Puls DL, Gajardo-Vidal A, Mandelli ML, Illán-Gala I, Ezzes Z, Wauters LD, Battistella G, Bogley R, Ratnasiri B, Licata AE, Battista P, García AM, Tee BL, Lukic S, Boxer AL, Rosen HJ, Seeley WW, Grinberg LT, Spina S, Miller BL, Miller ZA, Henry ML, Dronkers NF, Gorno-Tempini ML. Neural basis of speech and grammar symptoms in non-fluent variant primary progressive aphasia spectrum. Brain 2024; 147:607-626. [PMID: 37769652 PMCID: PMC10834255 DOI: 10.1093/brain/awad327] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2022] [Revised: 07/28/2023] [Accepted: 08/29/2023] [Indexed: 10/03/2023] Open
Abstract
The non-fluent/agrammatic variant of primary progressive aphasia (nfvPPA) is a neurodegenerative syndrome primarily defined by the presence of apraxia of speech (AoS) and/or expressive agrammatism. In addition, many patients exhibit dysarthria and/or receptive agrammatism. This leads to substantial phenotypic variation within the speech-language domain across individuals and time, in terms of both the specific combination of symptoms as well as their severity. How to resolve such phenotypic heterogeneity in nfvPPA is a matter of debate. 'Splitting' views propose separate clinical entities: 'primary progressive apraxia of speech' when AoS occurs in the absence of expressive agrammatism, 'progressive agrammatic aphasia' (PAA) in the opposite case, and 'AOS + PAA' when mixed motor speech and language symptoms are clearly present. While therapeutic interventions typically vary depending on the predominant symptom (e.g. AoS versus expressive agrammatism), the existence of behavioural, anatomical and pathological overlap across these phenotypes argues against drawing such clear-cut boundaries. In the current study, we contribute to this debate by mapping behaviour to brain in a large, prospective cohort of well characterized patients with nfvPPA (n = 104). We sought to advance scientific understanding of nfvPPA and the neural basis of speech-language by uncovering where in the brain the degree of MRI-based atrophy is associated with inter-patient variability in the presence and severity of AoS, dysarthria, expressive agrammatism or receptive agrammatism. Our cross-sectional examination of brain-behaviour relationships revealed three main observations. First, we found that the neural correlates of AoS and expressive agrammatism in nfvPPA lie side by side in the left posterior inferior frontal lobe, explaining their behavioural dissociation/association in previous reports. 
Second, we identified a 'left-right' and 'ventral-dorsal' neuroanatomical distinction between AoS and dysarthria, highlighting (i) that dysarthria, but not AoS, is significantly influenced by tissue loss in right-hemisphere motor-speech regions; and (ii) that, within the left hemisphere, dysarthria and AoS map onto dorsally versus ventrally located motor-speech regions, respectively. Third, we confirmed that, within the large-scale grammar network, left frontal tissue loss is preferentially involved in expressive agrammatism and left temporal tissue loss in receptive agrammatism. Our findings thus help define the function and location of the epicentres within the large-scale neural networks vulnerable to neurodegenerative changes in nfvPPA. We propose that nfvPPA be redefined as an umbrella term subsuming a spectrum of speech and/or language phenotypes that are closely linked by the underlying neuroanatomy and neuropathology.
Affiliation(s)
- Diego L Lorca-Puls
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Sección de Neurología, Departamento de Especialidades, Facultad de Medicina, Universidad de Concepción, Concepción, 4070105, Chile
- Andrea Gajardo-Vidal
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, 7590943, Chile
- Dirección de Investigación y Doctorados, Vicerrectoría de Investigación y Doctorados, Universidad del Desarrollo, Concepción, 4070001, Chile
- Maria Luisa Mandelli
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Ignacio Illán-Gala
- Sant Pau Memory Unit, Department of Neurology, Biomedical Research Institute Sant Pau, Hospital de la Santa Creu i Sant Pau, Universitat Autònoma de Barcelona, Barcelona, 08025, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Neurodegenerativas (CIBERNED), Madrid, 28029, Spain
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Zoe Ezzes
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Lisa D Wauters
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Speech, Language and Hearing Sciences, University of Texas, Austin, TX 78712-0114, USA
- Giovanni Battistella
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Otolaryngology, Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA 02114, USA
- Rian Bogley
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Buddhika Ratnasiri
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Abigail E Licata
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Petronilla Battista
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Laboratory of Neuropsychology, Istituti Clinici Scientifici Maugeri IRCCS, Bari, 70124, Italy
- Adolfo M García
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Centro de Neurociencias Cognitivas, Universidad de San Andrés, Buenos Aires, B1644BID, Argentina
- Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, Santiago, 9160000, Chile
- Boon Lead Tee
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Sladjana Lukic
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Communication Sciences and Disorders, Ruth S. Ammon College of Education and Health Sciences, Adelphi University, Garden City, NY 11530-0701, USA
- Adam L Boxer
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Howard J Rosen
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- William W Seeley
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Pathology, University of California San Francisco, San Francisco, CA 94143, USA
- Lea T Grinberg
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Department of Pathology, University of California San Francisco, San Francisco, CA 94143, USA
- Salvatore Spina
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Bruce L Miller
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Zachary A Miller
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Maya L Henry
- Department of Speech, Language and Hearing Sciences, University of Texas, Austin, TX 78712-0114, USA
- Department of Neurology, Dell Medical School, University of Texas, Austin, TX 78712, USA
- Nina F Dronkers
- Department of Psychology, University of California, Berkeley, CA 94720, USA
- Department of Neurology, University of California, Davis, CA 95817, USA
- Maria Luisa Gorno-Tempini
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
13
Yang H, Kuang M, Yang R, Xie G, Sheng G, Zou Y. Evaluation of the role of atherogenic index of plasma in the reversion from Prediabetes to normoglycemia or progression to Diabetes: a multi-center retrospective cohort study. Cardiovasc Diabetol 2024; 23:17. [PMID: 38184569 PMCID: PMC10771677 DOI: 10.1186/s12933-023-02108-8] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/18/2023] [Accepted: 12/28/2023] [Indexed: 01/08/2024] Open
Abstract
BACKGROUND Atherosclerosis is closely linked with glucose metabolism. We aimed to investigate the role of the atherogenic index of plasma (AIP) in the reversion of prediabetes to normal blood glucose levels or its progression to diabetes. METHODS This multi-center retrospective cohort study included 15,421 prediabetic participants from 32 regions across 11 cities in China, recruited through the medical examination institutions affiliated with the Rich Healthcare Group. Throughout the follow-up period, we monitored changes in the glycemic status of these participants, including reversion to normal fasting glucose (NFG), persistence in the prediabetic state, or progression to diabetes. Segmented regression, stratified analysis, and restricted cubic spline (RCS) analyses were performed based on the multivariable Cox regression model to evaluate the association between AIP and the reversion of prediabetes to NFG or progression to diabetes. RESULTS During a median follow-up of 2.9 years, 6,481 individuals (42.03%) reverted from prediabetes to NFG and 2,424 individuals (15.72%) progressed to diabetes. After adjusting for confounders, AIP was positively correlated with progression from prediabetes to diabetes [hazard ratio (HR) 1.42, 95% confidence interval (CI): 1.24-1.64] and negatively correlated with reversion from prediabetes to NFG (HR 0.89, 95% CI: 0.81-0.98); RCS analysis further demonstrated a nonlinear relationship between AIP and reversion to NFG/progression to diabetes, identifying a turning point of 0.04 for reversion to NFG and 0.17 for progression to diabetes.
In addition, the association of AIP with reversion to NFG/progression to diabetes differed significantly across age subgroups: the risk associated with AIP for progression from prediabetes to diabetes was relatively higher in younger populations, and, likewise, younger age within the adult group favored AIP-related reversion to NFG. CONCLUSION Our study, for the first time, reveals a negative correlation between AIP and reversion from prediabetes to normoglycemia and validates the crucial role of AIP in risk assessment of prediabetes progression. Based on the threshold analysis, keeping AIP below 0.04 is of paramount importance for individuals with prediabetes aiming to revert to NFG, while maintaining AIP below 0.17 is vital to reduce the risk of diabetes onset.
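AIP is conventionally defined as the base-10 logarithm of the molar triglyceride-to-HDL-cholesterol ratio. A minimal sketch of this standard definition follows (it is not code from the study, and the example lipid values are hypothetical):

```python
import math

def atherogenic_index_of_plasma(tg_mmol: float, hdl_mmol: float) -> float:
    """AIP = log10(triglycerides / HDL cholesterol), both in mmol/L."""
    return math.log10(tg_mmol / hdl_mmol)

# Hypothetical example: TG 1.3 mmol/L, HDL-C 1.2 mmol/L -> AIP of about 0.035,
# below the 0.04 turning point for reversion to NFG reported above.
```

Relative to the study's turning points, values below 0.04 would favor reversion to NFG, and values above 0.17 would indicate elevated risk of progression to diabetes.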
Affiliation(s)
- Hongyi Yang
- Department of Ultrasound, the Second Affiliated Hospital of Nanchang University, Nanchang, Jiangxi Province, 330006, P.R. China
- Maobin Kuang
- Department of Internal Medicine, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, 330006, P.R. China
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, 330006, P.R. China
- Ruijuan Yang
- Department of Internal Medicine, Jiangxi Medical College, Nanchang University, Nanchang, Jiangxi Province, 330006, P.R. China
- Department of Endocrinology, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, 330006, P.R. China
- Guobo Xie
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, 330006, P.R. China
- Guotai Sheng
- Jiangxi Provincial Geriatric Hospital, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, 330006, P.R. China
- Yang Zou
- Jiangxi Cardiovascular Research Institute, Jiangxi Provincial People's Hospital, The First Affiliated Hospital of Nanchang Medical College, Nanchang, Jiangxi Province, 330006, P.R. China.
14
Castellucci GA, Kovach CK, Tabasi F, Christianson D, Greenlee JD, Long MA. A frontal cortical network is critical for language planning during spoken interaction. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.08.26.554639. [PMID: 37693383 PMCID: PMC10491113 DOI: 10.1101/2023.08.26.554639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/12/2023]
Abstract
Many brain areas exhibit activity correlated with language planning, but the impact of these dynamics on spoken interaction remains unclear. Here we use direct electrical stimulation to transiently perturb cortical function in neurosurgical patient-volunteers performing a question-answer task. Stimulating structures involved in speech motor function evoked diverse articulatory deficits, while perturbations of caudal inferior and middle frontal gyri - which exhibit preparatory activity during conversational turn-taking - led to response errors. Perturbation of the same planning-related frontal regions slowed inter-speaker timing, while faster responses could result from stimulation of sites located in other areas. Taken together, these findings further indicate that caudal inferior and middle frontal gyri constitute a critical planning network essential for interactive language use.
15
Thomas TM, Singh A, Bullock LP, Liang D, Morse CW, Scherschligt X, Seymour JP, Tandon N. Decoding articulatory and phonetic components of naturalistic continuous speech from the distributed language network. J Neural Eng 2023; 20:046030. [PMID: 37487487 DOI: 10.1088/1741-2552/ace9fb] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/09/2022] [Accepted: 07/24/2023] [Indexed: 07/26/2023]
Abstract
Objective. Speech production relies on a widely distributed brain network. However, research and development of speech brain-computer interfaces (speech-BCIs) has typically focused on decoding speech only from superficial subregions readily accessible by subdural grid arrays, typically placed over the sensorimotor cortex. Alternatively, the technique of stereo-electroencephalography (sEEG) enables access to distributed brain regions using multiple depth electrodes with lower surgical risks, especially in patients with brain injuries resulting in aphasia and other speech disorders. Approach. To investigate the decoding potential of widespread electrode coverage in multiple cortical sites, we used a naturalistic continuous speech production task. We obtained neural recordings using sEEG from eight participants while they read sentences aloud. We trained linear classifiers to decode distinct speech components (articulatory components and phonemes) solely from broadband gamma activity and evaluated decoding performance using nested five-fold cross-validation. Main Results. We achieved an average classification accuracy of 18.7% across 9 places of articulation (e.g. bilabials, palatals), 26.5% across 5 manner-of-articulation (MOA) labels (e.g. affricates, fricatives), and 4.81% across 38 phonemes. The highest classification accuracies achieved with a single large dataset were 26.3% for place of articulation, 35.7% for MOA, and 9.88% for phonemes. Electrodes that contributed high decoding power were distributed across multiple sulcal and gyral sites in both dominant and non-dominant hemispheres, including ventral sensorimotor, inferior frontal, superior temporal, and fusiform cortices.
Rather than finding a distinct cortical locus for each speech component, we observed neural correlates of both articulatory and phonetic components in multiple hubs of a widespread language production network. Significance. These results reveal distributed cortical representations whose activity can enable decoding of speech components during continuous speech with this minimally invasive recording method, elucidating language neurobiology and identifying neural targets for future speech-BCIs.
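The k-fold evaluation scheme used above can be illustrated with a deliberately simplified stand-in: a plain five-fold loop around a nearest-centroid classifier on synthetic features. This is not the authors' pipeline (their classifiers were linear models on broadband gamma features, evaluated with an additional nested inner loop for model selection); it only sketches how per-fold accuracies are averaged into the reported figures:

```python
import numpy as np

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Classify test points by the nearest class centroid; return accuracy."""
    classes = np.unique(ytr)
    cents = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = ((Xte[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return float((classes[d.argmin(axis=1)] == yte).mean())

def five_fold_cv_acc(X, y, seed=0):
    """Shuffle, split into five folds, and average held-out accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, 5)
    accs = []
    for k in range(5):
        te = folds[k]
        tr = np.concatenate([folds[j] for j in range(5) if j != k])
        accs.append(nearest_centroid_acc(X[tr], y[tr], X[te], y[te]))
    return float(np.mean(accs))
```

Accuracies such as 18.7% over 9 classes are judged against the chance level of that label set (about 11.1% for 9 equiprobable classes), which is why modest absolute numbers can still be meaningful.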
Affiliation(s)
- Tessy M Thomas
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Aditya Singh
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Latané P Bullock
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Daniel Liang
- Department of Computer Science, Rice University, Houston, TX 77005, United States of America
- Cale W Morse
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Xavier Scherschligt
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- John P Seymour
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Department of Electrical & Computer Engineering, Rice University, Houston, TX 77005, United States of America
- Nitin Tandon
- Vivian L. Smith Department of Neurosurgery, McGovern Medical School, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Texas Institute for Restorative Neurotechnologies, University of Texas Health Science Center at Houston, Houston, TX 77030, United States of America
- Memorial Hermann Hospital, Texas Medical Center, Houston, TX 77030, United States of America
16
Angrick M, Luo S, Rabbani Q, Candrea DN, Shah S, Milsap GW, Anderson WS, Gordon CR, Rosenblatt KR, Clawson L, Maragakis N, Tenore FV, Fifer MS, Hermansky H, Ramsey NF, Crone NE. Online speech synthesis using a chronically implanted brain-computer interface in an individual with ALS. MEDRXIV : THE PREPRINT SERVER FOR HEALTH SCIENCES 2023:2023.06.30.23291352. [PMID: 37425721 PMCID: PMC10327279 DOI: 10.1101/2023.06.30.23291352] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 07/11/2023]
Abstract
Recent studies have shown that speech can be reconstructed and synthesized using only brain activity recorded with intracranial electrodes, but until now this has only been done through retrospective analyses of recordings from able-bodied patients temporarily implanted with electrodes for epilepsy surgery. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a clinical trial participant (ClinicalTrials.gov, NCT03567213) with dysarthria due to amyotrophic lateral sclerosis (ALS). We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the user from a vocabulary of 6 keywords originally designed to allow intuitive selection of items on a communication board. Our results show for the first time that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words that are intelligible to human listeners while preserving the participant's voice profile.
Affiliation(s)
- Miguel Angrick
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shiyu Luo
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Qinwan Rabbani
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Daniel N Candrea
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Samyak Shah
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Griffin W Milsap
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- William S Anderson
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD
- Chad R Gordon
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD
- Section of Neuroplastic and Reconstructive Surgery, Department of Plastic Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Kathryn R Rosenblatt
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Anesthesiology & Critical Care Medicine, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Lora Clawson
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nicholas Maragakis
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Francesco V Tenore
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Matthew S Fifer
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Hynek Hermansky
- Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, MD, USA
- Human Language Technology Center of Excellence, The Johns Hopkins University, Baltimore, MD, USA
- Nick F Ramsey
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Nathan E Crone
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
17
Soroush PZ, Herff C, Ries SK, Shih JJ, Schultz T, Krusienski DJ. The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings. Neuroimage 2023; 269:119913. [PMID: 36731812 DOI: 10.1016/j.neuroimage.2023.119913] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2022] [Revised: 01/05/2023] [Accepted: 01/29/2023] [Indexed: 02/01/2023] Open
Abstract
Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate the existence of a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels for the higher behavioral output modes. This provides important insights toward the elusive goal of developing imagined speech decoding models as effective as their better-established overt speech decoding counterparts.
18
Nie JZ, Flint RD, Prakash P, Hsieh JK, Mugler EM, Tate MC, Rosenow JM, Slutzky MW. High-gamma activity is coupled to low-gamma oscillations in precentral cortices and modulates with movement and speech. bioRxiv 2023:2023.02.13.528325. [PMID: 36824850] [PMCID: PMC9949043] [DOI: 10.1101/2023.02.13.528325]
Abstract
Planning and executing motor behaviors requires coordinated neural activity among multiple cortical and subcortical regions of the brain. Phase-amplitude coupling between the high-gamma band amplitude and the phase of low frequency oscillations (theta, alpha, beta) has been proposed to reflect neural communication, as has synchronization of low-gamma oscillations. However, coupling between low-gamma and high-gamma bands has not been investigated. Here, we measured phase-amplitude coupling between low- and high-gamma in monkeys performing a reaching task and in humans either performing finger movements or speaking words aloud. We found significant coupling between low-gamma phase and high-gamma amplitude in multiple sensorimotor and premotor cortices of both species during all tasks. This coupling modulated with the onset of movement. These findings suggest that interactions between the low and high gamma bands are markers of network dynamics related to movement and speech generation.
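Low-gamma-phase to high-gamma-amplitude coupling of the kind described above is commonly quantified with a mean-vector-length modulation index (Canolty-style). The sketch below is illustrative, not the authors' exact pipeline; the band edges, filter order, and sampling rate are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert


def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)


def pac_mvl(x, fs, phase_band=(30, 60), amp_band=(70, 200)):
    """Mean-vector-length phase-amplitude coupling index:
    phase from the low-gamma band, amplitude envelope from high gamma.
    Band edges here are illustrative assumptions."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    # Normalize by mean amplitude so the index is scale-invariant.
    return np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
```

A signal whose high-gamma envelope waxes and wanes with low-gamma phase yields a large index; the same components without coupling yield an index near zero.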
19
Verwoert M, Ottenhoff MC, Goulis S, Colon AJ, Wagner L, Tousseyn S, van Dijk JP, Kubben PL, Herff C. Dataset of speech production in intracranial electroencephalography. Sci Data 2022; 9:434. [PMID: 35869138] [PMCID: PMC9307753] [DOI: 10.1038/s41597-022-01542-9]
Abstract
Speech production is an intricate process involving a large number of muscles and cognitive processes. The neural processes underlying speech production are not completely understood. As speech is a uniquely human ability, it cannot be investigated in animal models. High-fidelity human data can only be obtained in clinical settings and is therefore not easily available to all researchers. Here, we provide a dataset of 10 participants reading out individual words while we measured intracranial EEG from a total of 1103 electrodes. The data, with its high temporal resolution and coverage of a large variety of cortical and sub-cortical brain regions, can help in understanding the speech production process better. Simultaneously, the data can be used to test speech decoding and synthesis approaches from neural data to develop speech Brain-Computer Interfaces and speech neuroprostheses.
Measurement(s): Brain activity
Technology Type(s): Stereotactic electroencephalography
Sample Characteristic - Organism: Homo sapiens
Sample Characteristic - Environment: Epilepsy monitoring center
Sample Characteristic - Location: The Netherlands
20
Cooney C, Folli R, Coyle D. Opportunities, pitfalls and trade-offs in designing protocols for measuring the neural correlates of speech. Neurosci Biobehav Rev 2022; 140:104783. [PMID: 35907491] [DOI: 10.1016/j.neubiorev.2022.104783]
Abstract
Research on decoding speech and speech-related processes directly from the human brain has intensified in recent years, as such a decoder has the potential to positively impact people with limited communication capacity due to disease or injury. Additionally, it can present entirely new forms of human-computer interaction and human-machine communication in general and facilitate better neuroscientific understanding of speech processes. Here, we synthesize the literature on neural speech decoding pertaining to how speech decoding experiments have been conducted, coalescing around a necessity for thoughtful experimental design aimed at specific research goals, and robust procedures for evaluating speech decoding paradigms. We examine the use of different modalities for presenting stimuli to participants, methods for construction of paradigms including timings and speech rhythms, and possible linguistic considerations. In addition, novel methods for eliciting naturalistic speech and validating imagined speech task performance in experimental settings are presented based on recent research. We also describe the multitude of terms used to instruct participants on how to produce imagined speech during experiments and propose methods for investigating the effect of these terms on imagined speech decoding. We demonstrate that the range of experimental procedures used in neural speech decoding studies can have unintended consequences that can affect the validity of the knowledge obtained. The review delineates the strengths and weaknesses of present approaches and proposes methodological advances that we anticipate will enhance experimental design and progress toward the optimal design of movement-independent direct speech brain-computer interfaces.
Affiliation(s)
- Ciaran Cooney
- Intelligent Systems Research Centre, Ulster University, Derry, UK.
- Raffaella Folli
- Institute for Research in Social Sciences, Ulster University, Jordanstown, UK
- Damien Coyle
- Intelligent Systems Research Centre, Ulster University, Derry, UK
21
Favero P, Berezutskaya J, Ramsey NF, Nazarov A, Freudenburg ZV. Mapping Acoustics to Articulatory Gestures in Dutch: Relating Speech Gestures, Acoustics and Neural Data. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:802-806. [PMID: 36085697] [DOI: 10.1109/embc48229.2022.9871909]
Abstract
Completely locked-in patients suffer from paralysis affecting every muscle in their body, reducing their communication means to brain-computer interfaces (BCIs). State-of-the-art BCIs have a slow spelling rate, which inevitably places a burden on patients' quality of life. Novel techniques address this problem by following a bio-mimetic approach, which consists of decoding sensory-motor cortex (SMC) activity that underlies the movements of the vocal tract's articulators. As recording articulatory data in combination with neural recordings is often unfeasible, the goal of this study was to develop an acoustic-to-articulatory inversion (AAI) model, i.e. an algorithm that generates articulatory data (speech gestures) from acoustics. A fully convolutional neural network was trained to solve the AAI mapping, and was tested on an unseen acoustic set, recorded simultaneously with neural data. Representational similarity analysis was then used to assess the relationship between predicted gestures and neural responses. The network's predictions and targets were significantly correlated. Moreover, SMC neural activity was correlated to the vocal tract gestural dynamics. The present AAI model has the potential to further our understanding of the relationship between neural, gestural and acoustic signals and lay the foundations for the development of a bio-mimetic speech BCI. Clinical Relevance: This study investigates the relationship between articulatory gestures during speech and the underlying neural activity. The topic is central for development of brain-computer interfaces for severely paralysed individuals.
22
Zhang L, Du Y. Lip movements enhance speech representations and effective connectivity in auditory dorsal stream. Neuroimage 2022; 257:119311. [PMID: 35589000] [DOI: 10.1016/j.neuroimage.2022.119311]
Abstract
Viewing a speaker's lip movements facilitates speech perception, especially under adverse listening conditions, but the neural mechanisms of this perceptual benefit at the phonemic and feature levels remain unclear. This fMRI study addressed this question by quantifying regional multivariate representation and network organization underlying audiovisual speech-in-noise perception. Behaviorally, valid lip movements improved recognition of place of articulation to aid phoneme identification. Meanwhile, lip movements enhanced neural representations of phonemes in left auditory dorsal stream regions, including frontal speech motor areas and supramarginal gyrus (SMG). Moreover, neural representations of place of articulation and voicing features were promoted differentially by lip movements in these regions, with voicing enhanced in Broca's area and place of articulation better encoded in left ventral premotor cortex and SMG. Next, dynamic causal modeling (DCM) analysis showed that such local changes were accompanied by strengthened effective connectivity along the dorsal stream. Moreover, the neurite orientation dispersion of the left arcuate fasciculus, the bearing skeleton of the auditory dorsal stream, predicted the visual enhancements of neural representations and effective connectivity. Our findings provide novel insight into speech science: lip movements promote both local phonemic and feature encoding and network connectivity in the dorsal pathway, and this functional enhancement is mediated by the microstructural architecture of the circuit.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049
- Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China 200031; Chinese Institute for Brain Research, Beijing, China 102206.
23
Li A, Yang R, Qu J, Dong J, Gu L, Mei L. Neural representation of phonological information during Chinese character reading. Hum Brain Mapp 2022; 43:4013-4029. [PMID: 35545935] [PMCID: PMC9374885] [DOI: 10.1002/hbm.25900]
Abstract
Previous studies have revealed that phonological processing of Chinese characters elicits activation in the left prefrontal cortex, bilateral parietal cortex, and occipitotemporal regions. However, it remains controversial what role the left middle frontal gyrus plays in Chinese character reading, and whether the core regions for phonological processing of alphabetic languages (e.g., the left superior temporal gyrus and supramarginal gyrus) are also involved in Chinese character reading. To address these questions, the present study used both univariate and multivariate analyses (i.e., representational similarity analysis, RSA) to explore neural representations of phonological information during Chinese character reading. Participants were scanned while performing a reading-aloud task. Univariate activation analysis revealed a widely distributed network for word reading, including the bilateral inferior frontal gyrus, middle frontal gyrus, lateral temporal cortex, and occipitotemporal cortex. More importantly, RSA showed that the left prefrontal (i.e., the left middle frontal gyrus and left inferior frontal gyrus) and bilateral occipitotemporal areas (i.e., the left inferior and middle temporal gyrus and bilateral fusiform gyrus) represented phonological information of Chinese characters. These results confirm the importance of the left middle frontal gyrus and regions in the ventral pathway in representing phonological information of Chinese characters.
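Representational similarity analysis of the kind used here reduces, at its core, to correlating the condensed neural dissimilarity matrix with a model dissimilarity matrix. A minimal sketch under assumed inputs (the variable names and the correlation-distance choice for the neural RDM are illustrative, not this paper's exact pipeline):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr


def rsa_score(neural_patterns, model_rdm_vec):
    """Correlate a neural RDM with a model RDM.

    neural_patterns: (n_stimuli, n_voxels) activity patterns.
    model_rdm_vec: condensed (upper-triangle) model dissimilarity vector,
    e.g. distances between phonological feature codes of the stimuli.
    """
    # Neural RDM: 1 - Pearson r between each pair of activity patterns.
    neural_rdm_vec = pdist(neural_patterns, metric="correlation")
    rho, _ = spearmanr(neural_rdm_vec, model_rdm_vec)
    return rho
```

A region is said to "represent" the model dimension when this rank correlation is reliably positive across participants (typically assessed with a permutation or random-effects test).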
Affiliation(s)
- Aqian Li
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Rui Yang
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Jing Qu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Jie Dong
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Lala Gu
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China; School of Psychology, South China Normal University, Guangzhou, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, China
- Leilei Mei
- Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents, South China Normal University, Ministry of Education, Guangzhou, China
24
Bush A, Chrabaszcz A, Peterson V, Saravanan V, Dastolfo-Hromack C, Lipski WJ, Richardson RM. Differentiation of speech-induced artifacts from physiological high gamma activity in intracranial recordings. Neuroimage 2022; 250:118962. [PMID: 35121181] [PMCID: PMC8922158] [DOI: 10.1016/j.neuroimage.2022.118962]
Abstract
There is great interest in identifying the neurophysiological underpinnings of speech production. Deep brain stimulation (DBS) surgery is unique in that it allows intracranial recordings from both cortical and subcortical regions in patients who are awake and speaking. The quality of these recordings, however, may be affected to various degrees by mechanical forces resulting from speech itself. Here we describe the presence of speech-induced artifacts in local-field potential (LFP) recordings obtained from mapping electrodes, DBS leads, and cortical electrodes. In addition to expected physiological increases in high gamma (60–200 Hz) activity during speech production, time-frequency analysis in many channels revealed a narrowband gamma component that exhibited a pattern similar to that observed in the speech audio spectrogram. This component was present to different degrees in multiple types of neural recordings. We show that this component tracks the fundamental frequency of the participant’s voice, correlates with the power spectrum of speech and has coherence with the produced speech audio. A vibration sensor attached to the stereotactic frame recorded speech-induced vibrations with the same pattern observed in the LFPs. No corresponding component was identified in any neural channel during the listening epoch of a syllable repetition task. These observations demonstrate how speech-induced vibrations can create artifacts in the primary frequency band of interest. Identifying and accounting for these artifacts is crucial for establishing the validity and reproducibility of speech-related data obtained from intracranial recordings during DBS surgery.
Affiliation(s)
- Alan Bush
- Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Harvard Medical School, Boston, MA, 02115, USA.
- Anna Chrabaszcz
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, 15260, USA
- Victoria Peterson
- Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Harvard Medical School, Boston, MA, 02115, USA
- Varun Saravanan
- Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Boston, MA, 02139, USA
- Christina Dastolfo-Hromack
- University of Pittsburgh, Department of Communication Science and Disorders, Pittsburgh, PA, 15260, USA; West Virginia University, Communication Science and Disorders, WV 26506, USA
- Witold J Lipski
- University of Pittsburgh, Department of Neurological Surgery, Pittsburgh, PA, 15260, USA
- R Mark Richardson
- Brain Modulation Lab, Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA; Harvard Medical School, Boston, MA, 02115, USA
25
Ylinen A, Wikman P, Leminen M, Alho K. Task-dependent cortical activations during selective attention to audiovisual speech. Brain Res 2022; 1775:147739. [PMID: 34843702] [DOI: 10.1016/j.brainres.2021.147739]
Abstract
Selective listening to speech depends on widespread networks of the brain, but how the involvement of different neural systems in speech processing is affected by factors such as the task performed by a listener and speech intelligibility remains poorly understood. We used functional magnetic resonance imaging to systematically examine the effects that performing different tasks has on neural activations during selective attention to continuous audiovisual speech in the presence of task-irrelevant speech. Participants viewed audiovisual dialogues and attended either to the semantic or the phonological content of speech, or ignored speech altogether and performed a visual control task. The tasks were factorially combined with good and poor auditory and visual speech qualities. Selective attention to speech engaged superior temporal regions and the left inferior frontal gyrus regardless of the task. Frontoparietal regions implicated in selective auditory attention to simple sounds (e.g., tones, syllables) were not engaged by the semantic task, suggesting that this network may not be as crucial when attending to continuous speech. The medial orbitofrontal cortex, implicated in social cognition, was most activated by the semantic task. Activity levels during the phonological task in the left prefrontal, premotor, and secondary somatosensory regions had a distinct temporal profile as well as the highest overall activity, possibly relating to the role of the dorsal speech processing stream in sub-lexical processing. Our results demonstrate that the task type influences neural activations during selective attention to speech, and emphasize the importance of ecologically valid experimental designs.
Affiliation(s)
- Artturi Ylinen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland.
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Department of Neuroscience, Georgetown University, Washington D.C., USA
- Miika Leminen
- Analytics and Data Services, HUS Helsinki University Hospital, Helsinki, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland; Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
26
Luo S, Rabbani Q, Crone NE. Brain-Computer Interface: Applications to Speech Decoding and Synthesis to Augment Communication. Neurotherapeutics 2022; 19:263-273. [PMID: 35099768] [PMCID: PMC9130409] [DOI: 10.1007/s13311-022-01190-2]
Abstract
Damage or degeneration of motor pathways necessary for speech and other movements, as in brainstem strokes or amyotrophic lateral sclerosis (ALS), can interfere with efficient communication without affecting brain structures responsible for language or cognition. In the worst-case scenario, this can result in locked-in syndrome (LIS), a condition in which individuals cannot initiate communication and can only express themselves by answering yes/no questions with eye blinks or other rudimentary movements. Existing augmentative and alternative communication (AAC) devices that rely on eye tracking can improve the quality of life for people with this condition, but brain-computer interfaces (BCIs) are also increasingly being investigated as AAC devices, particularly when eye tracking is too slow or unreliable. Moreover, with recent and ongoing advances in machine learning and neural recording technologies, BCIs may offer the only means to go beyond cursor control and text generation on a computer, to allow real-time synthesis of speech, which would arguably offer the most efficient and expressive channel for communication. The potential for BCI speech synthesis has only recently been realized because of seminal studies of the neuroanatomical and neurophysiological underpinnings of speech production using intracranial electrocorticographic (ECoG) recordings in patients undergoing epilepsy surgery. These studies have shown that cortical areas responsible for vocalization and articulation are distributed over a large area of ventral sensorimotor cortex, and that it is possible to decode speech and reconstruct its acoustics from ECoG if these areas are recorded with sufficiently dense and comprehensive electrode arrays. In this article, we review these advances, including the latest neural decoding strategies that range from deep learning models to the direct concatenation of speech units. We also discuss state-of-the-art vocoders that are integral in constructing natural-sounding audio waveforms for speech BCIs. Finally, this review outlines some of the challenges ahead in directly synthesizing speech for patients with LIS.
Affiliation(s)
- Shiyu Luo
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA.
- Qinwan Rabbani
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Nathan E Crone
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
27
Ekert JO, Lorca-Puls DL, Gajardo-Vidal A, Crinion JT, Hope TMH, Green DW, Price CJ. A functional dissociation of the left frontal regions that contribute to single word production tasks. Neuroimage 2021; 245:118734. [PMID: 34793955] [PMCID: PMC8752962] [DOI: 10.1016/j.neuroimage.2021.118734]
Abstract
Controversy surrounds the interpretation of higher activation for pseudoword compared to word reading in the left precentral gyrus and pars opercularis. Specifically, does activation in these regions reflect: (1) the demands on sublexical assembly of articulatory codes, or (2) retrieval effort because the combinations of articulatory codes are unfamiliar? Using fMRI, in 84 neurologically intact participants, we addressed this issue by comparing reading and repetition of words (W) and pseudowords (P) to naming objects (O) from pictures or sounds. As objects do not provide sublexical articulatory cues, we hypothesized that retrieval effort would be greater for object naming than word repetition/reading (which benefits from both lexical and sublexical cues), while the demands on sublexical assembly would be higher for pseudoword production than object naming. We found that activation was: (i) highest for pseudoword reading [P>O&W in the visual modality] in the anterior part of the ventral precentral gyrus bordering the precentral sulcus (vPCg/vPCs), consistent with the sublexical assembly of articulatory codes; but (ii) as high for object naming as pseudoword production [P&O>W] in dorsal precentral gyrus (dPCg) and the left inferior frontal junction (IFJ), consistent with retrieval demands and cognitive control. In addition, we dissociate the response properties of vPCg/vPCs, dPCg and IFJ from other left frontal lobe regions that are activated during single word speech production. Specifically, in both auditory and visual modalities: a central part of vPCg (head and face area) was more activated for verbal than nonverbal stimuli [P&W>O]; and the pars orbitalis and inferior frontal sulcus were most activated during object naming [O>W&P]. Our findings help to resolve a previous discrepancy in the literature, dissociate three functionally distinct parts of the precentral gyrus, and refine our knowledge of the functional anatomy of speech production in the left frontal lobe.
Affiliation(s)
- Justyna O Ekert
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom.
- Diego L Lorca-Puls
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom; Department of Speech, Language and Hearing Sciences, Faculty of Medicine, Universidad de Concepcion, Concepcion, Chile
- Andrea Gajardo-Vidal
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom; Faculty of Health Sciences, Universidad del Desarrollo, Concepcion, Chile
- Jennifer T Crinion
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Thomas M H Hope
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
- David W Green
- Department of Experimental Psychology, University College London, London, United Kingdom
- Cathy J Price
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, 12 Queen Square, London WC1N 3AR, United Kingdom
28
Deng X, Wang B, Zong F, Yin H, Yu S, Zhang D, Wang S, Cao Y, Zhao J, Zhang Y. Right-hemispheric language reorganization in patients with brain arteriovenous malformations: A functional magnetic resonance imaging study. Hum Brain Mapp 2021; 42:6014-6027. [PMID: 34582074] [PMCID: PMC8596961] [DOI: 10.1002/hbm.25666]
Abstract
Brain arteriovenous malformation (AVM), a presumed congenital lesion, may involve traditional language areas but usually does not lead to language dysfunction unless it ruptures. The objective of this research was to study right-hemispheric language reorganization patterns in patients with brain AVMs using functional magnetic resonance imaging (fMRI). We prospectively enrolled 30 AVM patients with lesions involving language areas and 32 age- and sex-matched healthy controls. Each subject underwent fMRI during three language tasks: visual synonym judgment, oral word reading, and auditory sentence comprehension. The activation differences between the AVM and control groups were investigated by voxelwise analysis. Lateralization indices (LIs) for the frontal lobe, temporal lobe, and cerebellum were compared between the two groups, respectively. Results suggested that the language functions of AVM patients and controls were all normal. Voxelwise analysis showed no significantly different activations between the two groups in visual synonym judgment and oral word reading tasks. In auditory sentence comprehension task, AVM patients had significantly more activations in the right precentral gyrus (BA 6) and right cerebellar lobule VI (AAL 9042). According to the LI results, the frontal lobe in oral word reading task and the temporal lobe in auditory sentence comprehension task were significantly more right-lateralized in the AVM group. These findings suggest that for patients with AVMs involving language cortex, different language reorganization patterns may develop for different language functions. The recruitment of brain areas in the right cerebral and cerebellar hemispheres may play a compensatory role in the reorganized language network of AVM patients.
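A lateralization index of the kind compared between groups here is conventionally computed as LI = (L - R) / (L + R) from left- and right-hemisphere activation measures, such as suprathreshold voxel counts in a region of interest. A minimal sketch; the inputs and thresholding are assumptions, not this paper's exact computation:

```python
def lateralization_index(left_activation: float, right_activation: float) -> float:
    """LI = (L - R) / (L + R), ranging from -1 (fully right-lateralized)
    to +1 (fully left-lateralized). Inputs are hemisphere-wise activation
    measures, e.g. counts of suprathreshold voxels in a region of interest."""
    total = left_activation + right_activation
    if total == 0:
        raise ValueError("no suprathreshold activation in either hemisphere")
    return (left_activation - right_activation) / total
```

Under this convention, the reported rightward shift in AVM patients corresponds to the group's LI values moving toward negative (right-dominant) territory relative to controls.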
Affiliation(s)
- Xiaofeng Deng
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Bo Wang
- Hefei Comprehensive National Science Center, Institute of Artificial Intelligence, Hefei, China; State Key Laboratory of Brain and Cognitive Science, Beijing MRI Center for Brain Research, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Fangrong Zong
- State Key Laboratory of Brain and Cognitive Science, Beijing MRI Center for Brain Research, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Hu Yin
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Shaochen Yu
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Dong Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Shuo Wang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yong Cao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Jizong Zhao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
- Yan Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, Beijing, China; China National Clinical Research Center for Neurological Diseases, Beijing, China
Collapse
29
Lorca-Puls DL, Gajardo-Vidal A, Oberhuber M, Prejawa S, Hope TMH, Leff AP, Green DW, Price CJ. Brain regions that support accurate speech production after damage to Broca's area. Brain Commun 2021; 3:fcab230. [PMID: 34671727 PMCID: PMC8523882 DOI: 10.1093/braincomms/fcab230] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2021] [Revised: 09/03/2021] [Accepted: 09/13/2021] [Indexed: 11/13/2022] Open
Abstract
Broca’s area in the posterior half of the left inferior frontal gyrus has traditionally been considered an important node in the speech production network. Nevertheless, recovery of speech production has been reported, to different degrees, within a few months of damage to Broca’s area. Importantly, contemporary evidence suggests that, within Broca’s area, its posterior part (i.e. pars opercularis) plays a more prominent role in speech production than its anterior part (i.e. pars triangularis). In this study, we therefore investigated the brain activation patterns that underlie accurate speech production following stroke damage to the opercular part of Broca’s area. By combining functional MRI and 13 tasks that place varying demands on speech production, brain activation was compared in (i) seven patients of interest with damage to the opercular part of Broca’s area; (ii) 55 neurologically intact controls; and (iii) 28 patient controls with left-hemisphere damage that spared Broca’s area. When producing accurate overt speech responses, the patients with damage to the left pars opercularis activated a substantial portion of the normal bilaterally distributed system. Within this system, there was a lesion-site-dependent effect in a specific part of the right cerebellar Crus I where activation was significantly higher in the patients with damage to the left pars opercularis compared to both neurologically intact and patient controls. In addition, activation in the right pars opercularis was significantly higher in the patients with damage to the left pars opercularis relative to neurologically intact controls but not patient controls (after adjusting for differences in lesion size). By further examining how right Crus I and right pars opercularis responded across a range of conditions in the neurologically intact controls, we suggest that these regions play distinct roles in domain-general cognitive control. 
Finally, we show that enhanced activation in the right pars opercularis cannot be explained by release from an inhibitory relationship with the left pars opercularis (i.e. dis-inhibition) because right pars opercularis activation was positively related to left pars opercularis activation in neurologically intact controls. Our findings motivate and guide future studies to investigate (i) how exactly right Crus I and right pars opercularis support accurate speech production after damage to the opercular part of Broca’s area and (ii) whether non-invasive neurostimulation to one or both of these regions boosts speech production recovery after damage to the opercular part of Broca’s area.
Affiliation(s)
- Diego L Lorca-Puls
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Andrea Gajardo-Vidal
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Marion Oberhuber
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Susan Prejawa
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Thomas M H Hope
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Alexander P Leff
- Institute of Cognitive Neuroscience, University College London, London, UK
- David W Green
- Department of Experimental Psychology, University College London, London, UK
- Cathy J Price
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
30
Gajardo-Vidal A, Lorca-Puls DL, PLORAS Team, Warner H, Pshdary B, Crinion JT, Leff AP, Hope TMH, Geva S, Seghier ML, Green DW, Bowman H, Price CJ. Damage to Broca's area does not contribute to long-term speech production outcome after stroke. Brain 2021; 144:817-832. [PMID: 33517378 PMCID: PMC8041045 DOI: 10.1093/brain/awaa460] [Citation(s) in RCA: 59] [Impact Index Per Article: 14.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 10/12/2020] [Accepted: 10/22/2020] [Indexed: 02/03/2023] Open
Abstract
Broca's area in the posterior half of the left inferior frontal gyrus has long been thought to be critical for speech production. The current view is that long-term speech production outcome in patients with Broca's area damage is best explained by the combination of damage to Broca's area and neighbouring regions including the underlying white matter, which was also damaged in Paul Broca's two historic cases. Here, we dissociate the effect of damage to Broca's area from the effect of damage to surrounding areas by studying long-term speech production outcome in 134 stroke survivors with relatively circumscribed left frontal lobe lesions that spared posterior speech production areas in lateral inferior parietal and superior temporal association cortices. Collectively, these patients had varying degrees of damage to one or more of nine atlas-based grey or white matter regions: Brodmann areas 44 and 45 (together known as Broca's area), ventral premotor cortex, primary motor cortex, insula, putamen, the anterior segment of the arcuate fasciculus, uncinate fasciculus and frontal aslant tract. Spoken picture description scores from the Comprehensive Aphasia Test were used as the outcome measure. Multiple regression analyses allowed us to tease apart the contribution of other variables influencing speech production abilities such as total lesion volume and time post-stroke. We found that, in our sample of patients with left frontal damage, long-term speech production impairments (lasting beyond 3 months post-stroke) were solely predicted by the degree of damage to white matter, directly above the insula, in the vicinity of the anterior part of the arcuate fasciculus, with no contribution from the degree of damage to Broca's area (as confirmed with Bayesian statistics). 
The effect of white matter damage cannot be explained by a disconnection of Broca's area, because speech production scores were worse after damage to the anterior arcuate fasciculus with relative sparing of Broca's area than after damage to Broca's area with relative sparing of the anterior arcuate fasciculus. Our findings provide evidence for three novel conclusions: (i) Broca's area damage does not contribute to long-term speech production outcome after left frontal lobe strokes; (ii) persistent speech production impairments after damage to the anterior arcuate fasciculus cannot be explained by a disconnection of Broca's area; and (iii) the prior association between persistent speech production impairments and Broca's area damage can be explained by co-occurring white matter damage, above the insula, in the vicinity of the anterior part of the arcuate fasciculus.
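The multiple-regression logic described above (predicting speech production scores from region-wise lesion load while adjusting for covariates such as total lesion volume and time post-stroke) can be sketched on simulated data. Everything below is illustrative: the predictor names, effect sizes, and simulated outcome are assumptions for demonstration, not the study's data or exact model, and the study additionally used Bayesian statistics to confirm null effects.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patients = 134  # matches the study's sample size; the data do not

# Hypothetical predictors: lesion load (0-1) in two regions plus nuisance
# covariates, loosely mirroring a lesion-symptom regression design.
broca_load = rng.uniform(0, 1, n_patients)
arcuate_load = rng.uniform(0, 1, n_patients)
lesion_volume = rng.uniform(1, 100, n_patients)
months_post_stroke = rng.uniform(3, 120, n_patients)

# Simulate an outcome driven by arcuate (white matter) damage only,
# qualitatively mimicking the paper's reported pattern.
speech_score = (70 - 25 * arcuate_load - 0.05 * lesion_volume
                + rng.normal(0, 2, n_patients))

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n_patients), broca_load, arcuate_load,
                     lesion_volume, months_post_stroke])
beta, *_ = np.linalg.lstsq(X, speech_score, rcond=None)
# beta[1] (Broca's load) recovers ~0; beta[2] (arcuate load) is strongly
# negative, i.e. the regression isolates the region that drives the outcome.
```

The point of the sketch is the design, not the numbers: including competing lesion-load regressors in one model is what allows a null contribution of one region (here, Broca's area) to be dissociated from a real contribution of a neighbouring one.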
Affiliation(s)
- Andrea Gajardo-Vidal
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK; Faculty of Health Sciences, Universidad del Desarrollo, Concepcion, Chile
- Diego L Lorca-Puls
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK; Department of Speech, Language and Hearing Sciences, Faculty of Medicine, Universidad de Concepcion, Concepcion, Chile
- PLORAS Team
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Holly Warner
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Bawan Pshdary
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Jennifer T Crinion
- Institute of Cognitive Neuroscience, University College London, London, UK
- Alexander P Leff
- Institute of Cognitive Neuroscience, University College London, London, UK; Department of Brain Repair and Rehabilitation, UCL Queen Square Institute of Neurology, London, UK
- Thomas M H Hope
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Sharon Geva
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
- Mohamed L Seghier
- Cognitive Neuroimaging Unit, Emirates College for Advanced Education, Abu Dhabi, UAE; Department of Biomedical Engineering, Khalifa University of Science and Technology, Abu Dhabi, UAE
- David W Green
- Department of Experimental Psychology, University College London, London, UK
- Howard Bowman
- Centre for Cognitive Neuroscience and Cognitive Systems and the School of Computing, University of Kent, Canterbury, UK; School of Psychology, University of Birmingham, Birmingham, UK
- Cathy J Price
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, London, UK
31
Moses DA, Metzger SL, Liu JR, Anumanchipalli GK, Makin JG, Sun PF, Chartier J, Dougherty ME, Liu PM, Abrams GM, Tu-Chan A, Ganguly K, Chang EF. Neuroprosthesis for Decoding Speech in a Paralyzed Person with Anarthria. N Engl J Med 2021; 385:217-227. [PMID: 34260835 PMCID: PMC8972947 DOI: 10.1056/nejmoa2027540] [Citation(s) in RCA: 172] [Impact Index Per Article: 43.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
BACKGROUND Technology to restore the ability to communicate in paralyzed persons who cannot speak has the potential to improve autonomy and quality of life. An approach that decodes words and sentences directly from the cerebral cortical activity of such patients may represent an advancement over existing methods for assisted communication. METHODS We implanted a subdural, high-density, multielectrode array over the area of the sensorimotor cortex that controls speech in a person with anarthria (the loss of the ability to articulate speech) and spastic quadriparesis caused by a brain-stem stroke. Over the course of 48 sessions, we recorded 22 hours of cortical activity while the participant attempted to say individual words from a vocabulary set of 50 words. We used deep-learning algorithms to create computational models for the detection and classification of words from patterns in the recorded cortical activity. We applied these computational models, as well as a natural-language model that yielded next-word probabilities given the preceding words in a sequence, to decode full sentences as the participant attempted to say them. RESULTS We decoded sentences from the participant's cortical activity in real time at a median rate of 15.2 words per minute, with a median word error rate of 25.6%. In post hoc analyses, we detected 98% of the attempts by the participant to produce individual words, and we classified words with 47.1% accuracy using cortical signals that were stable throughout the 81-week study period. CONCLUSIONS In a person with anarthria and spastic quadriparesis caused by a brain-stem stroke, words and sentences were decoded directly from cortical activity during attempted speech with the use of deep-learning models and a natural-language model. (Funded by Facebook and others; ClinicalTrials.gov number, NCT03698149.).
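The 25.6% figure above is a word error rate (WER), the standard speech-decoding metric: the minimum number of word substitutions, insertions, and deletions needed to turn the decoded sentence into the reference, divided by the reference length. A minimal pure-Python implementation of this standard metric (not the authors' code):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed via Levenshtein distance over word tokens."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / len(ref)
```

Note that WER can exceed 100% when the hypothesis contains many spurious insertions, which is why decoding studies report it alongside detection and classification accuracy.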
Affiliation(s)
- David A Moses, Sean L Metzger, Jessie R Liu, Gopala K Anumanchipalli, Joseph G Makin, Pengfei F Sun, Josh Chartier, Maximilian E Dougherty, Patricia M Liu, Gary M Abrams, Adelyn Tu-Chan, Karunesh Ganguly, Edward F Chang
- From the Department of Neurological Surgery (D.A.M., S.L.M., J.R.L., G.K.A., J.G.M., P.F.S., J.C., M.E.D., E.F.C.), the Weill Institute for Neuroscience (D.A.M., S.L.M., J.R.L., G.K.A., J.G.M., P.F.S., J.C., K.G., E.F.C.), and the Departments of Rehabilitation Services (P.M.L.) and Neurology (G.M.A., A.T.-C., K.G.), University of California, San Francisco (UCSF), San Francisco, and the Graduate Program in Bioengineering, University of California, Berkeley-UCSF, Berkeley (S.L.M., J.R.L., E.F.C.)
32
Paek AY, Brantley JA, Evans BJ, Contreras-Vidal JL. Concerns in the Blurred Divisions between Medical and Consumer Neurotechnology. IEEE SYSTEMS JOURNAL 2021; 15:3069-3080. [PMID: 35126800 PMCID: PMC8813044 DOI: 10.1109/jsyst.2020.3032609] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Neurotechnology has traditionally been central to the diagnosis and treatment of neurological disorders. While these devices have initially been utilized in clinical and research settings, recent advancements in neurotechnology have yielded devices that are more portable, user-friendly, and less expensive. These improvements allow laypeople to monitor their brain waves and interface their brains with external devices. Such improvements have led to the rise of wearable neurotechnology that is marketed to the consumer. While many of the consumer devices are marketed for innocuous applications, such as use in video games, there is potential for them to be repurposed for medical use. How do we manage neurotechnologies that skirt the line between medical and consumer applications and what can be done to ensure consumer safety? Here, we characterize neurotechnology based on medical and consumer applications and summarize currently marketed uses of consumer-grade wearable headsets. We lay out concerns that may arise due to the similar claims associated with both medical and consumer devices, the possibility of consumer devices being repurposed for medical uses, and the potential for medical uses of neurotechnology to influence commercial markets related to employment and self-enhancement.
Affiliation(s)
- Andrew Y Paek
- Department of Electrical & Computer Engineering and the IUCRC BRAIN Center at the University of Houston, Houston, TX, USA
- Justin A Brantley
- Department of Electrical & Computer Engineering and the IUCRC BRAIN Center at the University of Houston; now with the Department of Bioengineering at the University of Pennsylvania, Philadelphia, PA, USA
- Barbara J Evans
- Law Center and IUCRC BRAIN Center at the University of Houston, Houston, TX; now with the Wertheim College of Engineering and Levin College of Law at the University of Florida, Gainesville, FL, USA
- Jose L Contreras-Vidal
- Department of Electrical & Computer Engineering and the IUCRC BRAIN Center at the University of Houston, Houston, TX, USA
33
Yang Y, Ahmadipour P, Shanechi MM. Adaptive latent state modeling of brain network dynamics with real-time learning rate optimization. J Neural Eng 2021; 18. [PMID: 33254159 DOI: 10.1088/1741-2552/abcefd] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 11/30/2020] [Indexed: 12/29/2022]
Abstract
Objective. Dynamic latent state models are widely used to characterize the dynamics of brain network activity for various neural signal types. To date, dynamic latent state models have largely been developed for stationary brain network dynamics. However, brain network dynamics can be non-stationary, for example due to learning, plasticity, or recording instability. To enable modeling these non-stationarities, two problems need to be resolved. First, novel methods should be developed that can adaptively update the parameters of latent state models, which is difficult because the state is latent. Second, new methods are needed to optimize the adaptation learning rate, which specifies how fast new neural observations update the model parameters and can significantly influence adaptation accuracy. Approach. We develop a Rate Optimized-adaptive Linear State-Space Modeling (RO-adaptive LSSM) algorithm that solves these two problems. First, to enable adaptation, we derive a computation- and memory-efficient adaptive LSSM fitting algorithm that updates the LSSM parameters recursively and in real time in the presence of the latent state. Second, we develop a real-time learning rate optimization algorithm. We use comprehensive simulations of a broad range of non-stationary brain network dynamics to validate both algorithms, which together constitute the RO-adaptive LSSM. Main results. We show that the adaptive LSSM fitting algorithm can accurately track the broad simulated non-stationary brain network dynamics. We also find that the learning rate significantly affects the LSSM fitting accuracy. Finally, we show that the real-time learning rate optimization algorithm can run in parallel with the adaptive LSSM fitting algorithm. Doing so, the combined RO-adaptive LSSM algorithm rapidly converges to the optimal learning rate and accurately tracks non-stationarities. Significance. These algorithms can be used to study time-varying neural dynamics underlying various brain functions and to enhance future neurotechnologies such as brain-machine interfaces and closed-loop brain stimulation systems.
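To make the role of the learning rate concrete, here is a deliberately simplified toy: a scalar autoregressive system whose transition coefficient drifts mid-recording, tracked by a least-mean-squares (LMS) update on the one-step prediction error. This illustrates adaptive parameter tracking and learning-rate sensitivity only; it is not the RO-adaptive LSSM algorithm, which handles full latent state-space models and optimizes the learning rate online.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy non-stationary system: x_{t+1} = a_t * x_t + w_t, y_t = x_t + v_t,
# with the transition coefficient a_t jumping from 0.5 to 0.9 halfway through.
T = 4000
a_true = np.where(np.arange(T) < T // 2, 0.5, 0.9)
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true[t - 1] * x[t - 1] + rng.normal(0, 1)
y = x + rng.normal(0, 0.1, T)  # observations with small measurement noise

# Adaptive estimate of `a` via LMS on the one-step prediction error.
# `mu` is the learning rate: too small and tracking lags the jump in a_t;
# too large and the estimate becomes noisy. Choosing it well is exactly
# the problem the paper's second algorithm addresses.
mu = 0.005
a_hat = np.zeros(T)
a_hat[0] = 0.5
for t in range(1, T):
    err = y[t] - a_hat[t - 1] * y[t - 1]           # prediction error
    a_hat[t] = a_hat[t - 1] + mu * err * y[t - 1]  # gradient step
```

Running this, `a_hat` settles near 0.5 before the change point and re-converges near 0.9 after it, with steady-state jitter that grows with `mu`.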
Affiliation(s)
- Yuxiao Yang
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; these authors contributed equally to this work
- Parima Ahmadipour
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; these authors contributed equally to this work
- Maryam M Shanechi
- Ming Hsieh Department of Electrical and Computer Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States of America; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States of America
34
Wilson GH, Stavisky SD, Willett FR, Avansino DT, Kelemen JN, Hochberg LR, Henderson JM, Druckmann S, Shenoy KV. Decoding spoken English from intracortical electrode arrays in dorsal precentral gyrus. J Neural Eng 2020; 17:066007. [PMID: 33236720 PMCID: PMC8293867 DOI: 10.1088/1741-2552/abbfef] [Citation(s) in RCA: 34] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023]
Abstract
OBJECTIVE To evaluate the potential of intracortical electrode array signals for brain-computer interfaces (BCIs) to restore lost speech, we measured the performance of decoders trained to discriminate a comprehensive basis set of 39 English phonemes and to synthesize speech sounds via a neural pattern matching method. We decoded neural correlates of spoken-out-loud words in the 'hand knob' area of precentral gyrus, a step toward the eventual goal of decoding attempted speech from ventral speech areas in patients who are unable to speak. APPROACH Neural and audio data were recorded while two BrainGate2 pilot clinical trial participants, each with two chronically-implanted 96-electrode arrays, spoke 420 different words that broadly sampled English phonemes. Phoneme onsets were identified from audio recordings, and their identities were then classified from neural features consisting of each electrode's binned action potential counts or high-frequency local field potential power. Speech synthesis was performed using the 'Brain-to-Speech' pattern matching method. We also examined two potential confounds specific to decoding overt speech: acoustic contamination of neural signals and systematic differences in labeling different phonemes' onset times. MAIN RESULTS A linear decoder achieved up to 29.3% classification accuracy (chance = 6%) across 39 phonemes, while an RNN classifier achieved 33.9% accuracy. Parameter sweeps indicated that performance did not saturate when adding more electrodes or more training data, and that accuracy improved when utilizing time-varying structure in the data. Microphonic contamination and phoneme onset differences modestly increased decoding accuracy, but could be mitigated by acoustic artifact subtraction and using a neural speech onset marker, respectively. Speech synthesis achieved r = 0.523 correlation between true and reconstructed audio. 
SIGNIFICANCE The ability to decode speech using intracortical electrode array signals from a nontraditional speech area suggests that placing electrode arrays in ventral speech areas is a promising direction for speech BCIs.
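As a toy illustration of the classification problem above (many phoneme classes, per-electrode binned activity features), the sketch below trains a nearest-centroid classifier on synthetic data. The data, class separation, and classifier choice are all assumptions for illustration; the study used linear and RNN decoders on real intracortical recordings.

```python
import numpy as np

rng = np.random.default_rng(2)

n_phonemes, n_electrodes, n_trials = 39, 96, 40  # mirrors the task dimensions

# Hypothetical data: each phoneme has a distinct mean binned-activity pattern
# across electrodes; individual trials are noisy samples around that mean.
means = rng.normal(0, 1, (n_phonemes, n_electrodes))
X = means[:, None, :] + rng.normal(0, 2.0, (n_phonemes, n_trials, n_electrodes))
labels = np.repeat(np.arange(n_phonemes), n_trials)
X = X.reshape(-1, n_electrodes)

# Split trials in half, fit one centroid per phoneme on the training half,
# then classify held-out trials by nearest centroid (Euclidean distance).
train = np.arange(len(labels)) % 2 == 0
centroids = np.stack([X[train & (labels == k)].mean(axis=0)
                      for k in range(n_phonemes)])
dists = ((X[~train, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
accuracy = np.mean(dists.argmin(axis=1) == labels[~train])
# Well above the ~1/39 chance rate at this synthetic signal-to-noise level.
```

The parameter-sweep finding quoted above (accuracy not saturating with more electrodes or training data) corresponds, in this framing, to the centroids and distances becoming better estimated as `n_electrodes` and `n_trials` grow.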
Affiliation(s)
- Guy H Wilson
- Neurosciences Graduate Program, Stanford University, Stanford, CA, United States of America
- Sergey D Stavisky
- Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
- Wu Tsai Neurosciences Institute and Bio-X Institute, Stanford University, Stanford, CA, United States of America
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Francis R Willett
- Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, United States of America
- Donald T Avansino
- Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
- Jessica N Kelemen
- Department of Neurology, Harvard Medical School, Boston, MA, United States of America
- Leigh R Hochberg
- Department of Neurology, Harvard Medical School, Boston, MA, United States of America
- Center for Neurotechnology and Neurorecovery, Dept. of Neurology, Massachusetts General Hospital, Boston, MA, United States of America
- VA RR&D Center for Neurorestoration and Neurotechnology, Rehabilitation R&D Service, Providence VA Medical Center, Providence, RI, United States of America
- Carney Institute for Brain Science and School of Engineering, Brown University, Providence, RI, United States of America
- Jaimie M Henderson
- Department of Neurosurgery, Stanford University, Stanford, CA, United States of America
- Wu Tsai Neurosciences Institute and Bio-X Institute, Stanford University, Stanford, CA, United States of America
- Shaul Druckmann
- Wu Tsai Neurosciences Institute and Bio-X Institute, Stanford University, Stanford, CA, United States of America
- Department of Neurobiology, Stanford University, Stanford, CA, United States of America
- Krishna V Shenoy
- Wu Tsai Neurosciences Institute and Bio-X Institute, Stanford University, Stanford, CA, United States of America
- Department of Electrical Engineering, Stanford University, Stanford, CA, United States of America
- Howard Hughes Medical Institute at Stanford University, Stanford, CA, United States of America
- Department of Neurobiology, Stanford University, Stanford, CA, United States of America
- Department of Bioengineering, Stanford University, Stanford, CA, United States of America
35
Jouen AL, Lancheros M, Laganaro M. Microstate ERP Analyses to Pinpoint the Articulatory Onset in Speech Production. Brain Topogr 2020; 34:29-40. [PMID: 33161471 PMCID: PMC7803690 DOI: 10.1007/s10548-020-00803-3] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/11/2020] [Accepted: 10/23/2020] [Indexed: 11/29/2022]
Abstract
The use of electroencephalography (EEG) to study overt speech production has increased substantially in the past 15 years, and aligning evoked potentials (ERPs) on the response onset has become an extremely useful method to target the latest stages of speech production. Yet response-locked ERPs raise a methodological issue: on which event should the point of alignment be placed? Response-locked ERPs are usually aligned to the vocal (acoustic) onset, although it is well known that articulatory movements may start up to a hundred milliseconds before the acoustic onset and that this "articulatory onset to acoustic onset interval" (AAI) depends on the phoneme properties. Given the previously reported difficulties in measuring the AAI, the purpose of this study was to determine whether the AAI could be reliably detected with EEG microstates. High-density EEG was recorded during delayed speech production of monosyllabic pseudowords with four different onset consonants. Whereas the acoustic response onsets varied depending on the onset consonant, the response-locked spatiotemporal EEG analysis revealed a clear asynchrony of the same sequence of microstates across onset consonants. A specific microstate, the latest observed in the ERPs locked to the vocal onset, showed longer durations for phonemes with longer acoustic response onsets. Converging evidence suggests that this microstate relates to the articulatory onset of motor execution: its scalp topography corresponded to topographies previously associated with muscle activity, and source localization highlighted the involvement of motor areas. Finally, single-trial analyses of this microstate's duration further fit the AAI intervals reported for specific phonemes in previous studies. These results thus suggest that a particular ERP microstate is a reliable index of articulation onset and of the AAI.
Affiliation(s)
- Anne-Lise Jouen
- Faculty of Psychology and Educational Science (FPSE), University of Geneva, 28 Boulevard du Pont d'Arve, 1205, Geneva, Switzerland
- Monica Lancheros
- Faculty of Psychology and Educational Science (FPSE), University of Geneva, 28 Boulevard du Pont d'Arve, 1205, Geneva, Switzerland
- Marina Laganaro
- Faculty of Psychology and Educational Science (FPSE), University of Geneva, 28 Boulevard du Pont d'Arve, 1205, Geneva, Switzerland

36
Sun P, Anumanchipalli GK, Chang EF. Brain2Char: a deep architecture for decoding text from brain recordings. J Neural Eng 2020; 17. [PMID: 33142282 DOI: 10.1088/1741-2552/abc742] [Citation(s) in RCA: 30] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2020] [Accepted: 11/03/2020] [Indexed: 01/08/2023]
Abstract
OBJECTIVE Decoding language representations directly from the brain can enable new brain-computer interfaces (BCIs) for high-bandwidth human-human and human-machine communication. Clinically, such technologies can restore communication in people with neurological conditions that affect their ability to speak. APPROACH In this study, we propose a novel deep network architecture, Brain2Char, for decoding text (specifically, character sequences) directly from brain recordings (electrocorticography, ECoG). The Brain2Char framework combines state-of-the-art deep learning modules: 3D Inception layers for multiband spatiotemporal feature extraction from neural data, followed by bidirectional recurrent and dilated convolution layers and a language-model-weighted beam search to decode character sequences, optimizing a connectionist temporal classification (CTC) loss. Additionally, given the highly non-linear transformations that underlie the conversion of cortical function to character sequences, we regularize the network's latent representations, motivated by insights into the cortical encoding of speech production and by artifactual aspects specific to ECoG data acquisition. To do this, we impose auxiliary losses on the latent representations for articulatory movements, speech acoustics, and session-specific non-linearities. MAIN RESULTS In 3 (out of 4) participants reported here, Brain2Char achieves word error rates (WER) of 10.6%, 8.5%, and 7.0%, respectively, on vocabulary sizes ranging from 1200 to 1900 words. SIGNIFICANCE These results establish a new end-to-end approach to decoding text from brain signals and demonstrate the potential of Brain2Char as a high-performance communication BCI.
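The two evaluation ideas in this abstract, CTC decoding and word error rate, can be illustrated with a minimal sketch. Best-path decoding below is the simplest form of the decoding step that Brain2Char refines with a language-model-weighted beam search; the function names and the toy frame sequence are illustrative, not taken from the paper's code.

```python
BLANK = "_"  # CTC blank symbol

def ctc_best_path(frame_labels):
    """Collapse per-frame labels into a character sequence:
    1) merge consecutive repeats, 2) drop blanks."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)

def word_error_rate(ref, hyp):
    """Word error rate via word-level edit distance, the metric used to
    score the decoder's character-sequence outputs after word splitting."""
    r, h = ref.split(), hyp.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                          d[i - 1][j - 1] + cost)
    return d[len(r)][len(h)] / max(len(r), 1)

decoded = ctc_best_path(list("hh_ee_ll_ll_oo"))  # -> "hello"
```

A beam search replaces the greedy per-frame choice with a pruned search over label prefixes, weighting each hypothesis by a language model, which is what makes the reported WERs attainable on vocabularies of 1200-1900 words.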
Affiliation(s)
- Pengfei Sun
- University of California San Francisco, San Francisco, California, 94143, UNITED STATES
- Edward F Chang
- University of California San Francisco, San Francisco, UNITED STATES

37
Roussel P, Godais GL, Bocquelet F, Palma M, Hongjie J, Zhang S, Giraud AL, Mégevand P, Miller K, Gehrig J, Kell C, Kahane P, Chabardés S, Yvert B. Observation and assessment of acoustic contamination of electrophysiological brain signals during speech production and sound perception. J Neural Eng 2020; 17:056028. [PMID: 33055383 DOI: 10.1088/1741-2552/abb25e] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
OBJECTIVE A current challenge for neurotechnology is to develop speech brain-computer interfaces aimed at restoring communication in people unable to speak. To achieve a proof of concept of such a system, the neural activity of patients implanted for clinical reasons can be recorded while they speak. Using such simultaneously recorded audio and neural data, decoders can be built to predict speech features from features extracted from brain signals. A typical neural feature is the spectral power of field potentials in the high-gamma frequency band, which overlaps the frequency range of speech acoustic signals, especially the fundamental frequency of the voice. Here, we analyzed human electrocorticographic and intracortical recordings during speech production and perception, as well as a rat micro-electrocorticographic recording during sound perception. We observed that several datasets, recorded with different recording setups, contained spectrotemporal features highly correlated with those of the sound produced by or delivered to the participants, especially within the high-gamma band and above, strongly suggesting contamination of the electrophysiological recordings by the sound signal. This study investigated the presence of such acoustic contamination and its possible source. APPROACH We developed analysis methods and a statistical criterion to objectively assess the presence or absence of contamination-specific correlations, which we used to screen several datasets from five centers worldwide. MAIN RESULTS Several, though not all, datasets recorded in a variety of conditions showed significant evidence of acoustic contamination; three of the five centers were affected. In a recording with high contamination, the use of high-gamma-band features dramatically improved the linear decoding of acoustic speech features, whereas the improvement was very limited for another recording with no significant contamination. Further analysis and in vitro replication suggest that the contamination is caused by the mechanical action of sound waves on the cables and connectors along the recording chain, transforming sound vibrations into an undesired electrical noise that affects the biopotential measurements. SIGNIFICANCE Although this study does not in itself question the presence of speech-relevant physiological information in the high-gamma range and above (multiunit activity), it warns that acoustic contamination of neural signals should be checked for and eliminated before investigating the cortical dynamics of these processes. To this end, we make available a toolbox implementing the proposed statistical approach to quickly assess the extent of contamination in an electrophysiological recording (https://doi.org/10.5281/zenodo.3929296).
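The core of such a contamination screen can be sketched as follows: correlate the time course of each frequency bin of the audio spectrogram with the same bin of a neural-channel spectrogram, since leakage produces strong correlations concentrated at matching frequencies. This is a simplified illustration of the idea, not the authors' full statistical procedure (which is in their published toolbox); the variable names and synthetic data are invented.

```python
import numpy as np

def bin_correlations(audio_spec, neural_spec):
    """audio_spec, neural_spec: (n_freq_bins, n_frames) spectrograms on
    the same time base. Returns the Pearson correlation per frequency bin
    between the audio and neural power time courses."""
    a = audio_spec - audio_spec.mean(axis=1, keepdims=True)
    n = neural_spec - neural_spec.mean(axis=1, keepdims=True)
    num = (a * n).sum(axis=1)
    den = np.sqrt((a ** 2).sum(axis=1) * (n ** 2).sum(axis=1))
    return num / np.where(den == 0, 1, den)

# Synthetic demo: an uncontaminated channel vs. one where the audio
# signal leaks additively into the recording chain.
rng = np.random.default_rng(0)
audio = rng.standard_normal((16, 500))
clean = rng.standard_normal((16, 500))    # independent neural power
contaminated = clean + 0.8 * audio        # audio leaks into the signal
r_clean = bin_correlations(audio, clean)          # near zero everywhere
r_cont = bin_correlations(audio, contaminated)    # substantially positive
```

A statistical criterion would then test whether same-frequency correlations exceed what is expected from cross-frequency or time-shuffled baselines.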
Affiliation(s)
- Philémon Roussel
- Inserm, BrainTech Lab, U1205, Grenoble, France; University Grenoble Alpes, BrainTech Lab, U1205, Grenoble, France

38
Correia JM, Caballero-Gaudes C, Guediche S, Carreiras M. Phonatory and articulatory representations of speech production in cortical and subcortical fMRI responses. Sci Rep 2020; 10:4529. [PMID: 32161310 PMCID: PMC7066132 DOI: 10.1038/s41598-020-61435-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2019] [Accepted: 02/24/2020] [Indexed: 11/25/2022] Open
Abstract
Speaking involves the coordination of multiple neuromotor systems, including respiration, phonation, and articulation. Developing non-invasive imaging methods to study how the brain controls these systems is critical for understanding the neurobiology of speech production. Recent models and animal research suggest that cortical and subcortical regions beyond the primary motor cortex (M1) help orchestrate the neuromotor control needed for speaking. Using contrasts between speech conditions with controlled respiratory behavior, this fMRI study investigates articulatory gestures involving the tongue, lips, and velum (i.e., alveolars versus bilabials, and nasals versus orals) and phonatory gestures (i.e., voiced versus whispered speech). Multivariate pattern analysis (MVPA) was used to decode articulatory gestures in M1, the cerebellum, and the basal ganglia. Furthermore, apart from confirming the role of a mid-M1 region in phonation, we found that a dorsal M1 region linked to respiratory control showed significant differences for voiced compared to whispered speech despite matched lung volume observations. This region was also functionally connected to tongue and lip M1 seed regions, underscoring its importance in the coordination of speech. Our study confirms and extends current knowledge of the neural mechanisms underlying neuromotor speech control, which holds promise for studying the neural dysfunctions involved in motor-speech disorders non-invasively.
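The MVPA logic described here, classifying trial-wise voxel patterns (e.g., alveolar vs. bilabial) with cross-validation and scoring accuracy against chance, can be sketched on synthetic data. A nearest-centroid classifier stands in for the study's actual classifier; everything below is illustrative.

```python
import numpy as np

def nearest_centroid_cv(X, y, n_folds=5):
    """X: (n_trials, n_voxels) patterns, y: binary condition labels.
    Returns mean cross-validated decoding accuracy."""
    idx = np.arange(len(y))
    folds = np.array_split(idx, n_folds)
    accs = []
    for test in folds:
        train = np.setdiff1d(idx, test)
        # Class centroids estimated from training trials only.
        c0 = X[train][y[train] == 0].mean(axis=0)
        c1 = X[train][y[train] == 1].mean(axis=0)
        d0 = ((X[test] - c0) ** 2).sum(axis=1)
        d1 = ((X[test] - c1) ** 2).sum(axis=1)
        pred = (d1 < d0).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# Synthetic "voxel" patterns: condition 1 trials carry a small mean shift.
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = rng.standard_normal((100, 200)) + 0.5 * y[:, None]
acc = nearest_centroid_cv(X, y)  # well above the 0.5 chance level
```

Above-chance accuracy in a region of interest is the evidence that the region's activity patterns carry information about the contrasted gestures.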
Affiliation(s)
- Joao M Correia
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Centre for Biomedical Research (CBMR)/Department of Psychology, University of Algarve, Faro, Portugal
- Sara Guediche
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain
- Manuel Carreiras
- BCBL, Basque Center on Cognition Brain and Language, San Sebastian, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain; University of the Basque Country, UPV/EHU, Bilbao, Spain

39
Abstract
Human brain function research has evolved dramatically in recent decades. This chapter explains the role of modern methods of recording brain activity in understanding human brain function. Current knowledge of brain function relevant to brain-computer interface (BCI) research is detailed, with an emphasis on the motor system, which provides an exceptional level of detail for decoding the intended or attempted movements of paralyzed beneficiaries of BCI technology and translating them into computer-mediated actions. The BCI technologies that stand to benefit most from the detailed organization of the human cortex are, and for the foreseeable future are likely to remain, reliant on intracranial electrodes. These evolving technologies are expected to enable severely paralyzed people to regain the faculties of movement and speech in the coming decades.
Affiliation(s)
- Nick F Ramsey
- Brain Center, University Medical Center Utrecht, Utrecht, The Netherlands

40
Stavisky SD, Willett FR, Wilson GH, Murphy BA, Rezaii P, Avansino DT, Memberg WD, Miller JP, Kirsch RF, Hochberg LR, Ajiboye AB, Druckmann S, Shenoy KV, Henderson JM. Neural ensemble dynamics in dorsal motor cortex during speech in people with paralysis. eLife 2019; 8:e46015. [PMID: 31820736 PMCID: PMC6954053 DOI: 10.7554/elife.46015] [Citation(s) in RCA: 50] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2019] [Accepted: 11/14/2019] [Indexed: 01/20/2023] Open
Abstract
Speaking is a sensorimotor behavior whose neural basis is difficult to study with single neuron resolution due to the scarcity of human intracortical measurements. We used electrode arrays to record from the motor cortex 'hand knob' in two people with tetraplegia, an area not previously implicated in speech. Neurons modulated during speaking and during non-speaking movements of the tongue, lips, and jaw. This challenges whether the conventional model of a 'motor homunculus' division by major body regions extends to the single-neuron scale. Spoken words and syllables could be decoded from single trials, demonstrating the potential of intracortical recordings for brain-computer interfaces to restore speech. Two neural population dynamics features previously reported for arm movements were also present during speaking: a component that was mostly invariant across initiating different words, followed by rotatory dynamics during speaking. This suggests that common neural dynamical motifs may underlie movement of arm and speech articulators.
Affiliation(s)
- Sergey D Stavisky
- Department of Neurosurgery, Stanford University, Stanford, United States
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Francis R Willett
- Department of Neurosurgery, Stanford University, Stanford, United States
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Guy H Wilson
- Neurosciences Program, Stanford University, Stanford, United States
- Brian A Murphy
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, United States
- FES Center, Rehab R&D Service, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, Cleveland, United States
- Paymon Rezaii
- Department of Neurosurgery, Stanford University, Stanford, United States
- William D Memberg
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, United States
- FES Center, Rehab R&D Service, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, Cleveland, United States
- Jonathan P Miller
- FES Center, Rehab R&D Service, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, Cleveland, United States
- Department of Neurosurgery, University Hospitals Cleveland Medical Center, Cleveland, United States
- Robert F Kirsch
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, United States
- FES Center, Rehab R&D Service, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, Cleveland, United States
- Leigh R Hochberg
- VA RR&D Center for Neurorestoration and Neurotechnology, Rehabilitation R&D Service, Providence VA Medical Center, Providence, United States
- Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, United States
- School of Engineering and Robert J. & Nandy D. Carney Institute for Brain Science, Brown University, Providence, United States
- A Bolu Ajiboye
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, United States
- FES Center, Rehab R&D Service, Louis Stokes Cleveland Department of Veterans Affairs Medical Center, Cleveland, United States
- Shaul Druckmann
- Department of Neurobiology, Stanford University, Stanford, United States
- Krishna V Shenoy
- Department of Electrical Engineering, Stanford University, Stanford, United States
- Department of Neurobiology, Stanford University, Stanford, United States
- Department of Bioengineering, Stanford University, Stanford, United States
- Howard Hughes Medical Institute, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Bio-X Program, Stanford University, Stanford, United States
- Jaimie M Henderson
- Department of Neurosurgery, Stanford University, Stanford, United States
- Wu Tsai Neurosciences Institute, Stanford University, Stanford, United States
- Bio-X Program, Stanford University, Stanford, United States

41
Herff C, Diener L, Angrick M, Mugler E, Tate MC, Goldrick MA, Krusienski DJ, Slutzky MW, Schultz T. Generating Natural, Intelligible Speech From Brain Activity in Motor, Premotor, and Inferior Frontal Cortices. Front Neurosci 2019; 13:1267. [PMID: 31824257 PMCID: PMC6882773 DOI: 10.3389/fnins.2019.01267] [Citation(s) in RCA: 49] [Impact Index Per Article: 8.2] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2019] [Accepted: 11/07/2019] [Indexed: 12/17/2022] Open
Abstract
Neural interfaces that directly produce intelligible speech from brain activity would allow people with severe impairment from neurological disorders to communicate more naturally. Here, we record neural population activity in motor, premotor and inferior frontal cortices during speech production using electrocorticography (ECoG) and show that ECoG signals alone can be used to generate intelligible speech output that can preserve conversational cues. To produce speech directly from neural data, we adapted a method from the field of speech synthesis called unit selection, in which units of speech are concatenated to form audible output. In our approach, which we call Brain-To-Speech, we chose subsequent units of speech based on the measured ECoG activity to generate audio waveforms directly from the neural recordings. Brain-To-Speech employed the user's own voice to generate speech that sounded very natural and included features such as prosody and accentuation. By investigating the brain areas involved in speech production separately, we found that speech motor cortex provided more information for the reconstruction process than the other cortical areas.
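The unit-selection idea described above can be sketched very simply: for each incoming neural feature frame, pick the stored speech unit whose training-time neural features are closest, then concatenate the corresponding audio. Real systems also add a join cost so consecutive units blend smoothly; the names and toy corpus below are illustrative, not from the paper.

```python
import numpy as np

def select_units(neural_frames, corpus_neural, corpus_audio):
    """neural_frames: (T, d) test neural features; corpus_neural: (N, d)
    stored neural features; corpus_audio: length-N list of audio snippets.
    Returns the concatenated audio of the best-matching units."""
    out = []
    for frame in neural_frames:
        dists = ((corpus_neural - frame) ** 2).sum(axis=1)
        out.append(corpus_audio[int(np.argmin(dists))])
    return np.concatenate(out)

# Tiny demo: 3 stored units, each paired with a recognizable audio snippet.
corpus_neural = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
corpus_audio = [np.full(4, i, dtype=float) for i in range(3)]
waveform = select_units(np.array([[0.1, 0.0], [0.0, 0.9]]),
                        corpus_neural, corpus_audio)
```

Because the output is stitched from the user's own recorded speech, prosody and voice quality come for free, which is the design choice the abstract highlights.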
Affiliation(s)
- Christian Herff
- School of Mental Health & Neuroscience, Maastricht University, Maastricht, Netherlands
- Cognitive Systems Lab, University of Bremen, Bremen, Germany
- Lorenz Diener
- Cognitive Systems Lab, University of Bremen, Bremen, Germany
- Miguel Angrick
- Cognitive Systems Lab, University of Bremen, Bremen, Germany
- Emily Mugler
- Department of Neurology, Northwestern University, Chicago, IL, United States
- Matthew C. Tate
- Department of Neurosurgery, Northwestern University, Chicago, IL, United States
- Matthew A. Goldrick
- Department of Linguistics, Northwestern University, Chicago, IL, United States
- Dean J. Krusienski
- Biomedical Engineering Department, Virginia Commonwealth University, Richmond, VA, United States
- Marc W. Slutzky
- Department of Neurology, Northwestern University, Chicago, IL, United States
- Department of Physiology, Northwestern University, Chicago, IL, United States
- Department of Physical Medicine & Rehabilitation, Northwestern University, Chicago, IL, United States
- Tanja Schultz
- Cognitive Systems Lab, University of Bremen, Bremen, Germany

42
Tam WK, Wu T, Zhao Q, Keefer E, Yang Z. Human motor decoding from neural signals: a review. BMC Biomed Eng 2019; 1:22. [PMID: 32903354 PMCID: PMC7422484 DOI: 10.1186/s42490-019-0022-z] [Citation(s) in RCA: 38] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Accepted: 07/21/2019] [Indexed: 01/24/2023] Open
Abstract
Many people suffer from movement disability due to amputation or neurological disease. Fortunately, with modern neurotechnology it is now possible to intercept motor control signals at various points along the neural transduction pathway and use them to drive external devices for communication or control. Here we review the latest developments in human motor decoding: the various strategies for decoding motor intention from humans, and their respective advantages and challenges. Neural control signals can be intercepted at various points along the neural signal transduction pathway, including the brain (electroencephalography, electrocorticography, intracortical recordings), the nerves (peripheral nerve recordings), and the muscles (electromyography). We systematically discuss the sites of signal acquisition, the available neural features, the signal processing techniques, and the decoding algorithms at each of these potential interception points. Examples of applications and current state-of-the-art performance are also reviewed. Although great strides have been made in human motor decoding, we are still far from achieving the naturalistic and dexterous control of our native limbs. Concerted efforts from materials scientists, electrical engineers, and healthcare professionals are needed to further advance the field and make the technology widely available for clinical use.
Affiliation(s)
- Wing-kin Tam
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455 USA
- Tong Wu
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455 USA
- Qi Zhao
- Department of Computer Science and Engineering, University of Minnesota Twin Cities, 4-192 Keller Hall, 200 Union Street SE, Minnesota, 55455 USA
- Edward Keefer
- Nerves Incorporated, P.O. Box 141295, Dallas, TX, USA
- Zhi Yang
- Department of Biomedical Engineering, University of Minnesota Twin Cities, 7-105 Hasselmo Hall, 312 Church St. SE, Minnesota, 55455 USA

43
Livezey JA, Bouchard KE, Chang EF. Deep learning as a tool for neural data analysis: Speech classification and cross-frequency coupling in human sensorimotor cortex. PLoS Comput Biol 2019; 15:e1007091. [PMID: 31525179 PMCID: PMC6762206 DOI: 10.1371/journal.pcbi.1007091] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2018] [Revised: 09/26/2019] [Accepted: 05/10/2019] [Indexed: 11/26/2022] Open
Abstract
A fundamental challenge in neuroscience is to understand what structure in the world is represented in spatially distributed patterns of neural activity across multiple single-trial measurements. This is often accomplished by learning a simple, linear transformation between neural features and features of the sensory stimuli or motor task. While successful in some early sensory processing areas, linear mappings are unlikely to be ideal tools for elucidating the nonlinear, hierarchical representations of higher-order brain areas during complex tasks, such as the production of speech by humans. Here, we apply deep networks to predict produced speech syllables from a dataset of high-gamma cortical surface electric potentials recorded from human sensorimotor cortex. We find that deep networks had higher decoding prediction accuracy than baseline models. Having established that deep networks extract more task-relevant information from neural datasets than linear models (i.e., higher predictive accuracy), we next sought to demonstrate their utility as a data analysis tool for neuroscience. We first show that the deep networks' confusions revealed hierarchical latent structure in the neural data, which recapitulated the underlying articulatory nature of speech motor control. We next broadened the frequency features beyond high-gamma and identified a novel high-gamma-to-beta coupling during speech production. Finally, we used deep networks to compare task-relevant information across neural frequency bands and found that the high-gamma band contains the vast majority of the information relevant to the speech prediction task, with little to no additional contribution from lower-frequency amplitudes. Together, these results demonstrate the utility of deep networks as a data analysis tool for basic and applied neuroscience.
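The analysis idea of reading hierarchical structure out of a classifier's confusions can be sketched on a toy confusion matrix: syllables that the decoder confuses with each other (because, say, they share an articulatory feature) should cluster together. The greedy single-linkage merge below stands in for a full hierarchical clustering routine, and the matrix is invented for the demo.

```python
import numpy as np

def cluster_by_confusion(conf, n_clusters):
    """conf: (n, n) confusion counts. Greedily merges the pair of clusters
    with the highest symmetrized confusion (single linkage) until
    n_clusters remain. Returns a list of index sets."""
    sim = (conf + conf.T) / 2.0
    np.fill_diagonal(sim, -np.inf)
    clusters = [{i} for i in range(len(conf))]
    while len(clusters) > n_clusters:
        best, pair = -np.inf, None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                s = max(sim[i, j] for i in clusters[a] for j in clusters[b])
                if s > best:
                    best, pair = s, (a, b)
        a, b = pair
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

# Syllables 0,1 confuse with each other; 2,3 confuse with each other.
conf = np.array([[50, 20, 1, 2],
                 [18, 50, 2, 1],
                 [1, 2, 50, 22],
                 [2, 1, 19, 50]])
groups = [sorted(c) for c in cluster_by_confusion(conf, 2)]
```

In the paper's setting, the recovered hierarchy groups syllables by articulatory properties, which is the evidence that the network's errors reflect the underlying motor organization rather than noise.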
Affiliation(s)
- Jesse A. Livezey
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Kristofer E. Bouchard
- Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, California, United States of America
- Redwood Center for Theoretical Neuroscience, University of California, Berkeley, Berkeley, California, United States of America
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, California, United States of America
- Edward F. Chang
- Department of Neurological Surgery and Department of Physiology, University of California, San Francisco, San Francisco, California, United States of America
- Center for Integrative Neuroscience, University of California, San Francisco, San Francisco, California, United States of America
- UCSF Epilepsy Center, University of California, San Francisco, San Francisco, California, United States of America

44
|
Wu X, Geng Z, Zhou S, Bai T, Wei L, Ji GJ, Zhu W, Yu Y, Tian Y, Wang K. Brain Structural Correlates of Odor Identification in Mild Cognitive Impairment and Alzheimer's Disease Revealed by Magnetic Resonance Imaging and a Chinese Olfactory Identification Test. Front Neurosci 2019; 13:842. [PMID: 31474819 PMCID: PMC6702423 DOI: 10.3389/fnins.2019.00842] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2019] [Accepted: 07/26/2019] [Indexed: 11/23/2022] Open
Abstract
Alzheimer's disease (AD) is a common memory-impairment disorder frequently accompanied by olfactory identification (OI) impairments. In fact, OI is a valuable marker for distinguishing AD from normal age-related cognitive impairment and may predict the risk of mild cognitive impairment (MCI)-to-AD transition. However, current olfactory tests were developed under Western social and cultural conditions and are not well suited to Chinese patients. Moreover, the neural substrate of OI in AD is still unknown. The present study investigated the utility of a newly developed Chinese smell identification test (CSIT) for OI assessment in Chinese AD and MCI patients. We then performed a correlation analysis of gray matter volume (GMV) at the voxel and region-of-interest (ROI) levels to reveal the neural substrates of OI in AD. Thirty-seven AD patients, 27 MCI patients, and 30 normal controls (NCs) completed the CSIT and MRI scans. Patients (combined AD plus MCI) scored significantly lower on the CSIT compared to NCs [F(2,91) = 62.597, p < 0.001]. Voxel-level GMV analysis revealed strong relationships between CSIT score and the volumes of the left precentral gyrus and left inferior frontal gyrus (L-IFG). In addition, ROI-level GMV analysis revealed associations between CSIT score and left amygdala volumes. Our results suggest the following: (1) OI, as measured by the CSIT, is impaired in AD and MCI patients compared with healthy controls in the Chinese population; (2) the severity of OI dysfunction can distinguish patients with cognitive impairment from controls and AD from MCI patients; and (3) the left precentral cortex and L-IFG may be involved in the processing of olfactory cues.
Affiliation(s)
- Xingqi Wu
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Zhi Geng
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Shanshan Zhou
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Hefei, China
- Tongjian Bai
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Hefei, China
- Ling Wei
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Hefei, China
- Gong-Jun Ji
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Hefei, China
- Department of Medical Psychology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Wanqiu Zhu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yongqiang Yu
- Department of Radiology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Yanghua Tian
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Hefei, China
- Kai Wang
- Department of Neurology, The First Affiliated Hospital of Anhui Medical University, Hefei, China
- Anhui Province Key Laboratory of Cognition and Neuropsychiatric Disorders, Hefei, China
- Collaborative Innovation Center of Neuropsychiatric Disorders and Mental Health, Hefei, China
- Department of Medical Psychology, The First Affiliated Hospital of Anhui Medical University, Hefei, China

45
Moses DA, Leonard MK, Makin JG, Chang EF. Real-time decoding of question-and-answer speech dialogue using human cortical activity. Nat Commun 2019; 10:3096. [PMID: 31363096 PMCID: PMC6667454 DOI: 10.1038/s41467-019-10994-4] [Citation(s) in RCA: 104] [Impact Index Per Article: 17.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/10/2018] [Accepted: 06/06/2019] [Indexed: 01/15/2023] Open
Abstract
Natural communication often occurs in dialogue, differentially engaging auditory and sensorimotor brain regions during listening and speaking. However, previous attempts to decode speech directly from the human brain typically consider listening or speaking tasks in isolation. Here, human participants listened to questions and responded aloud with answers while we used high-density electrocorticography (ECoG) recordings to detect when they heard or said an utterance and to then decode the utterance's identity. Because certain answers were only plausible responses to certain questions, we could dynamically update the prior probabilities of each answer using the decoded question likelihoods as context. We decode produced and perceived utterances with accuracy rates as high as 61% and 76%, respectively (chance is 7% and 20%). Contextual integration of decoded question likelihoods significantly improves answer decoding. These results demonstrate real-time decoding of speech in an interactive, conversational setting, which has important implications for patients who are unable to communicate.
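The context-integration step described here has a clean probabilistic form: the decoded question likelihoods define a prior over answers (only some answers are plausible for each question), which is multiplied by the answer decoder's likelihoods and renormalized. The sketch below illustrates that update; the question/answer sets and all probabilities are invented for the demo.

```python
def integrate_context(question_probs, answer_given_question, answer_likelihoods):
    """question_probs: {q: P(q)} from the question decoder.
    answer_given_question: {q: {a: P(a | q)}} plausibility priors.
    answer_likelihoods: {a: likelihood} from the answer decoder.
    Returns the normalized posterior over answers."""
    posterior = {}
    for a, lik in answer_likelihoods.items():
        # Context prior: marginalize the plausibility of answer a over
        # the decoder's belief about which question was heard.
        prior = sum(question_probs[q] * answer_given_question[q].get(a, 0.0)
                    for q in question_probs)
        posterior[a] = lik * prior
    z = sum(posterior.values()) or 1.0
    return {a: p / z for a, p in posterior.items()}

q_probs = {"how are you": 0.9, "what color": 0.1}
priors = {"how are you": {"fine": 0.7, "bad": 0.3},
          "what color": {"blue": 0.6, "green": 0.4}}
liks = {"fine": 0.3, "bad": 0.2, "blue": 0.4, "green": 0.1}
post = integrate_context(q_probs, priors, liks)
```

In this toy example the answer decoder's raw favorite is "blue", but the decoded question context flips the final decision to "fine", which is exactly the improvement the study reports from contextual integration.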
Affiliation(s)
- David A Moses
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA
- Matthew K Leonard
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA
- Joseph G Makin
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA
- Edward F Chang
- Department of Neurological Surgery and the Center for Integrative Neuroscience at UC San Francisco, 675 Nelson Rising Lane, San Francisco, CA, 94158, USA

46
Angrick M, Herff C, Mugler E, Tate MC, Slutzky MW, Krusienski DJ, Schultz T. Speech synthesis from ECoG using densely connected 3D convolutional neural networks. J Neural Eng 2019; 16:036019. [PMID: 30831567 PMCID: PMC6822609 DOI: 10.1088/1741-2552/ab0c59] [Citation(s) in RCA: 71] [Impact Index Per Article: 11.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
OBJECTIVE Direct synthesis of speech from neural signals could provide a fast and natural way for people with neurological diseases to communicate. Invasively measured brain activity (electrocorticography; ECoG) supplies the temporal and spatial resolution necessary to decode fast and complex processes such as speech production. A number of impressive advances in speech decoding using neural signals have been achieved in recent years, but the underlying dynamics are complex and still not fully understood, and it is unlikely that simple linear models can capture the relation between neural activity and continuous spoken speech. APPROACH Here we show that deep neural networks can be used to map ECoG from speech production areas onto an intermediate representation of speech (logMel spectrogram). The proposed method uses a densely connected convolutional neural network topology that is well suited to the small amount of data available from each participant. MAIN RESULTS In a study with six participants, we achieved correlations of up to r = 0.69 between the reconstructed and original logMel spectrograms. We transferred our predictions back into an audible waveform by applying a WaveNet vocoder. The vocoder was conditioned on logMel features, harnessing a much larger, pre-existing data corpus to provide the most natural acoustic output. SIGNIFICANCE To the best of our knowledge, this is the first time that high-quality speech has been reconstructed from neural recordings during speech production using deep neural networks.
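The intermediate representation the network is trained to predict, a log-Mel spectrogram, is a standard construction: magnitude spectra of short windowed FFT frames projected onto a triangular mel filterbank, then log-compressed. The sketch below shows that construction; the parameter values are typical defaults, not the paper's exact settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(x, sr=16000, n_fft=512, hop=160, n_mels=40):
    # Frame the signal with a Hann window and take magnitude spectra.
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1))  # (T, n_fft//2+1)
    # Build a triangular mel filterbank: band edges equally spaced in mel.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, c, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, c):
            fb[m - 1, k] = (k - lo) / max(c - lo, 1)
        for k in range(c, hi):
            fb[m - 1, k] = (hi - k) / max(hi - c, 1)
    # Project onto the filterbank and log-compress.
    return np.log(spec @ fb.T + 1e-10)  # (T, n_mels)

tone = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)  # 1 s, 440 Hz
logmel = log_mel_spectrogram(tone)
```

Predicting this compact (T, n_mels) target instead of a raw waveform is what makes a neural vocoder such as WaveNet a natural final stage: it is conditioned on exactly these features.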
Affiliation(s)
- Miguel Angrick
- Cognitive Systems Lab, University of Bremen, Bremen, Germany
|
47
|
Angrick M, Herff C, Johnson G, Shih J, Krusienski D, Schultz T. Interpretation of convolutional neural networks for speech spectrogram regression from intracranial recordings. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2018.10.080] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
|
48
|
Anumanchipalli GK, Chartier J, Chang EF. Speech synthesis from neural decoding of spoken sentences. Nature 2019; 568:493-498. [DOI: 10.1038/s41586-019-1119-1] [Citation(s) in RCA: 322] [Impact Index Per Article: 53.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2018] [Accepted: 03/21/2019] [Indexed: 12/31/2022]
|
49
|
Origin and evolution of human speech: Emergence from a trimodal auditory, visual and vocal network. Prog Brain Res 2019; 250:345-371. [PMID: 31703907 DOI: 10.1016/bs.pbr.2019.01.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
Abstract
In recent years, there have been important additions to the classical model of speech processing as originally depicted by the Broca-Wernicke model, which consists of an anterior, productive region and a posterior, perceptive region connected via the arcuate fasciculus. The modern view implies a separation into a dorsal and a ventral pathway conveying different kinds of linguistic information, paralleling the organization of the visual system. Furthermore, this organization is highly conserved in evolution and can be seen as the neural scaffolding from which the speech networks originated. In this chapter we emphasize that the speech networks are embedded in a multimodal system encompassing audio-vocal and visuo-vocal connections, which can be traced to an ancestral audio-visuo-motor pathway present in nonhuman primates. Likewise, we propose a trimodal repertoire for speech processing and acquisition involving auditory, visual and motor representations of the basic elements of speech: phonemes, observation of mouth movements, and articulatory processes. Finally, we discuss this proposal in the context of a scenario for early speech acquisition in infants and in human evolution.
|