1. de Borman A, Wittevrongel B, Dauwe I, Carrette E, Meurs A, Van Roost D, Boon P, Van Hulle MM. Imagined speech event detection from electrocorticography and its transfer between speech modes and subjects. Commun Biol 2024; 7:818. PMID: 38969758; PMCID: PMC11226700; DOI: 10.1038/s42003-024-06518-6.
Abstract
Speech brain-computer interfaces aim to support communication-impaired patients by translating neural signals into speech. While impressive progress has been achieved in decoding performed, perceived and attempted speech, imagined speech remains elusive, mainly due to the absence of behavioral output. Nevertheless, imagined speech is advantageous since it does not depend on any articulator movements, which might become impaired or even lost over the course of a neurodegenerative disease. In this study, we analyzed electrocorticography data recorded from 16 participants in response to 3 speech modes: performed, perceived (listening), and imagined speech. We used a linear model to detect speech events and examined the contributions of each frequency band, from delta to high gamma, given the speech mode and electrode location. For imagined speech detection, we observed a strong contribution of gamma bands in the motor cortex, whereas lower frequencies were more prominent in the temporal lobe, particularly in the left hemisphere. Based on the similarities in frequency patterns, we were able to transfer models between speech modes and participants with similar electrode locations.
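
The detection pipeline described here, band-limited power features from delta through high gamma fed to a linear model, can be illustrated with a minimal sketch. All shapes, band edges, and the logistic-regression choice below are illustrative assumptions, not the authors' implementation:
```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.linear_model import LogisticRegression

fs = 1000                                   # assumed ECoG sampling rate (Hz)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 70), "high_gamma": (70, 170)}

def band_power(x, lo, hi, fs):
    """Mean envelope power of one (channels x samples) window in a band."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
    return env.mean(axis=-1)                # one value per channel

def features(windows):
    """windows: (n_windows, n_channels, n_samples) -> (n_windows, n_channels*n_bands)."""
    return np.stack([np.concatenate([band_power(w, lo, hi, fs)
                                     for lo, hi in bands.values()])
                     for w in windows])

# Placeholder data: X_win holds ECoG windows; y marks speech event vs. rest.
rng = np.random.default_rng(0)
X_win = rng.standard_normal((200, 32, 500))
y = rng.integers(0, 2, 200)
clf = LogisticRegression(max_iter=1000).fit(features(X_win), y)
```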
Affiliation(s)
- Aurélie de Borman
- Laboratory for Neuro- and Psychophysiology, KU Leuven, Leuven, Belgium
- Ine Dauwe
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Evelien Carrette
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Alfred Meurs
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Dirk Van Roost
- Department of Neurosurgery, Ghent University Hospital, Ghent, Belgium
- Paul Boon
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Marc M Van Hulle
- Laboratory for Neuro- and Psychophysiology, KU Leuven, Leuven, Belgium
- Leuven Brain Institute (LBI), Leuven, Belgium
- Leuven Institute for Artificial Intelligence (Leuven.AI), Leuven, Belgium

2. Dash D, Teplansky K, Ferrari P, Babajani-Feremi A, Calley CS, Heitzman D, Austin SG, Wang J. Automatic detection of ALS from single-trial MEG signals during speech tasks: a pilot study. Front Psychol 2024; 15:1114811. PMID: 38903475; PMCID: PMC11188989; DOI: 10.3389/fpsyg.2024.1114811.
Abstract
Amyotrophic lateral sclerosis (ALS) is an idiopathic, fatal, and fast-progressing neurodegenerative disease characterized by the degeneration of motor neurons. ALS patients often experience an initial misdiagnosis or a diagnostic delay due to the current unavailability of an efficient biomarker. Since impaired speech is typical in ALS, we hypothesized that functional differences between healthy and ALS participants during speech tasks can be explained by cortical pattern changes, thereby leading to the identification of a neural biomarker for ALS. In this pilot study, we collected magnetoencephalography (MEG) recordings from three early-diagnosed patients with ALS and three healthy controls during imagined (covert) and overt speech tasks. First, we computed sensor correlations, which were greater for speakers with ALS than for healthy controls. Second, we compared the power of the MEG signals in canonical bands between the two groups, which showed greater dissimilarity in the beta band for ALS participants. Third, we assessed differences in functional connectivity, which showed greater beta-band connectivity for ALS participants than for healthy controls. Finally, we performed single-trial classification, which achieved its highest performance with beta-band features (~98%). These findings were consistent across trials, phrases, and participants for both imagined and overt speech tasks. Our preliminary results indicate that speech-evoked beta oscillations could be a potential neural biomarker for diagnosing ALS. To our knowledge, this is the first demonstration of the detection of ALS from single-trial neural signals.
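
A minimal sketch of the first analysis step, the mean pairwise sensor correlation of one recording; array shapes are assumptions and this is not the authors' code:
```python
import numpy as np

def mean_sensor_correlation(meg):
    """meg: (n_sensors, n_samples) -> mean off-diagonal |r| across sensor pairs."""
    r = np.corrcoef(meg)                    # (n_sensors, n_sensors)
    iu = np.triu_indices_from(r, k=1)       # upper triangle, no diagonal
    return np.abs(r[iu]).mean()

rng = np.random.default_rng(0)
trial = rng.standard_normal((204, 2000))    # e.g. 204 sensors, 2 s at 1 kHz
print(mean_sensor_correlation(trial))
```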
Affiliation(s)
- Debadatta Dash
- Department of Neurology, Dell Medical School, University of Texas at Austin, Austin, TX, United States
- Kristin Teplansky
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Paul Ferrari
- Helen DeVos Children's Hospital, Corewell Health, Grand Rapids, MI, United States
- Clifford S. Calley
- Department of Neurology, Dell Medical School, University of Texas at Austin, Austin, TX, United States
- Sara G. Austin
- Department of Neurology, Dell Medical School, University of Texas at Austin, Austin, TX, United States
- Jun Wang
- Department of Neurology, Dell Medical School, University of Texas at Austin, Austin, TX, United States
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States

3. Wandelt SK, Bjånes DA, Pejsa K, Lee B, Liu C, Andersen RA. Representation of internal speech by single neurons in human supramarginal gyrus. Nat Hum Behav 2024; 8:1136-1149. PMID: 38740984; PMCID: PMC11199147; DOI: 10.1038/s41562-024-01867-y.
Abstract
Speech brain-machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost their speech abilities due to disease or injury. While important advances in vocalized, attempted and mimed speech decoding have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, implanted with microelectrode arrays located in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech at the single-neuron and population level in the SMG. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants (chance level 12.5%), and during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading, and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy with multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof-of-concept for a high-performance internal speech BMI.
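
The offline decoding setup (eight word classes, 12.5% chance) can be sketched as a cross-validated linear classifier on population firing rates; the LDA choice and feature shapes are illustrative assumptions, not the authors' pipeline:
```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_units = 160, 60
X = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)  # spike counts per unit
y = rng.integers(0, 8, n_trials)            # 6 words + 2 pseudowords = 8 classes

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=8)
print(f"decoding accuracy: {acc.mean():.2f} (chance 0.125)")
```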
Affiliation(s)
- Sarah K Wandelt
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- David A Bjånes
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA
- Kelsie Pejsa
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA
- Brian Lee
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA
- USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA
- Charles Liu
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Rancho Los Amigos National Rehabilitation Center, Downey, CA, USA
- Department of Neurological Surgery, Keck School of Medicine of USC, Los Angeles, CA, USA
- USC Neurorestoration Center, Keck School of Medicine of USC, Los Angeles, CA, USA
- Richard A Andersen
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA

4. Zhang W, Jiang M, Teo KAC, Bhuvanakantham R, Fong L, Sim WKJ, Guo Z, Foo CHV, Chua RHJ, Padmanabhan P, Leong V, Lu J, Gulyás B, Guan C. Revealing the spatiotemporal brain dynamics of covert speech compared with overt speech: a simultaneous EEG-fMRI study. Neuroimage 2024; 293:120629. PMID: 38697588; DOI: 10.1016/j.neuroimage.2024.120629.
Abstract
Covert speech (CS) refers to speaking internally to oneself without producing any sound or movement. CS is involved in multiple cognitive functions and disorders, and reconstructing CS content with a brain-computer interface (BCI) is also an emerging technique. However, it is still controversial whether CS is a truncated neural process of overt speech (OS) or involves independent patterns. Here, we performed a word-speaking experiment with simultaneous EEG-fMRI involving 32 participants, who generated words both overtly and covertly. By integrating spatial constraints from fMRI into EEG source localization, we precisely estimated the spatiotemporal dynamics of neural activity. During CS, EEG source activity was localized in three regions: the left precentral gyrus, the left supplementary motor area, and the left putamen. Although OS involved more brain regions with stronger activations, CS was characterized by an earlier event-locked activation in the left putamen (peak at 262 ms versus 1170 ms). The left putamen was also identified as the only hub node within the functional connectivity (FC) networks of both OS and CS, while showing weaker FC strength towards speech-related regions in the dominant hemisphere during CS. Path analysis revealed significant multivariate associations, indicating an indirect association between the earlier activation in the left putamen and CS, mediated by reduced FC towards speech-related regions. These findings reveal the specific spatiotemporal dynamics of CS, offering insights into CS mechanisms that are potentially relevant for the future treatment of self-regulation deficits and speech disorders and for the development of BCI speech applications.
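
The mediation logic of the path analysis, an indirect effect of earlier putamen activation on the CS measure through reduced functional connectivity, can be sketched as a product-of-coefficients estimate with a bootstrap confidence interval. The variable construction below is purely illustrative and the adjustment steps of a full path model are omitted:
```python
import numpy as np

def indirect_effect(X, M, Y):
    a = np.polyfit(X, M, 1)[0]              # X -> M slope
    b = np.polyfit(M, Y, 1)[0]              # M -> Y slope (unadjusted, for brevity)
    return a * b

rng = np.random.default_rng(0)
n = 32
X = rng.standard_normal(n)                  # e.g. putamen activation latency
M = -0.6 * X + 0.5 * rng.standard_normal(n) # functional connectivity (mediator)
Y = 0.7 * M + 0.5 * rng.standard_normal(n)  # covert speech measure

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)             # resample participants with replacement
    boot.append(indirect_effect(X[idx], M[idx], Y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect a*b = {indirect_effect(X, M, Y):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```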
Affiliation(s)
- Wei Zhang
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Muyun Jiang
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Kok Ann Colin Teo
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore; Division of Neurosurgery, National University Health System, Singapore
- Raghavan Bhuvanakantham
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- LaiGuan Fong
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore
- Wei Khang Jeremy Sim
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; IGP-Neuroscience, Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
- Zhiwei Guo
- School of Computer Science and Engineering, Nanyang Technological University, Singapore
- Parasuraman Padmanabhan
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore
- Victoria Leong
- Division of Psychology, Nanyang Technological University, Singapore; Department of Pediatrics, University of Cambridge, United Kingdom
- Jia Lu
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; DSO National Laboratories, Singapore; Yong Loo Lin School of Medicine, National University of Singapore, Singapore
- Balázs Gulyás
- Cognitive Neuroimaging Centre, Nanyang Technological University, Singapore; Lee Kong Chian School of Medicine, Nanyang Technological University, Singapore; Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Cuntai Guan
- School of Computer Science and Engineering, Nanyang Technological University, Singapore

5. Angrick M, Luo S, Rabbani Q, Candrea DN, Shah S, Milsap GW, Anderson WS, Gordon CR, Rosenblatt KR, Clawson L, Tippett DC, Maragakis N, Tenore FV, Fifer MS, Hermansky H, Ramsey NF, Crone NE. Online speech synthesis using a chronically implanted brain-computer interface in an individual with ALS. Sci Rep 2024; 14:9617. PMID: 38671062; PMCID: PMC11053081; DOI: 10.1038/s41598-024-60277-2.
Abstract
Brain-computer interfaces (BCIs) that reconstruct and synthesize speech using brain activity recorded with intracranial electrodes may pave the way toward novel communication interfaces for people who have lost their ability to speak, or who are at high risk of losing this ability, due to neurological disorders. Here, we report online synthesis of intelligible words using a chronically implanted brain-computer interface (BCI) in a man with impaired articulation due to ALS, participating in a clinical trial (ClinicalTrials.gov, NCT03567213) exploring different strategies for BCI communication. The 3-stage approach reported here relies on recurrent neural networks to identify, decode and synthesize speech from electrocorticographic (ECoG) signals acquired across motor, premotor and somatosensory cortices. We demonstrate a reliable BCI that synthesizes commands freely chosen and spoken by the participant from a vocabulary of 6 keywords previously used for decoding commands to control a communication board. Evaluation of the intelligibility of the synthesized speech indicates that 80% of the words can be correctly recognized by human listeners. Our results show that a speech-impaired individual with ALS can use a chronically implanted BCI to reliably produce synthesized words while preserving the participant's voice profile, and provide further evidence for the stability of ECoG for speech-based BCIs.
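
The decoding stage of such a pipeline, mapping ECoG feature frames to acoustic frames with a recurrent network, might look like the following sketch. Layer sizes, feature dimensions, and the GRU choice are assumptions, and the actual 3-stage system (identify, decode, synthesize) is not reproduced here:
```python
import torch
import torch.nn as nn

class ECoGToMel(nn.Module):
    def __init__(self, n_ecog_feats=128, n_mel=80, hidden=256):
        super().__init__()
        self.rnn = nn.GRU(n_ecog_feats, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_mel)

    def forward(self, x):                   # x: (batch, time, n_ecog_feats)
        h, _ = self.rnn(x)
        return self.head(h)                 # (batch, time, n_mel)

model = ECoGToMel()
ecog = torch.randn(4, 100, 128)             # 4 trials, 100 feature frames each
mel = model(ecog)                            # predicted mel-spectrogram frames
loss = nn.functional.mse_loss(mel, torch.randn_like(mel))  # placeholder target
loss.backward()
```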
Affiliation(s)
- Miguel Angrick
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Shiyu Luo
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Qinwan Rabbani
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, USA
- Daniel N Candrea
- Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Samyak Shah
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Griffin W Milsap
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- William S Anderson
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Chad R Gordon
- Department of Neurosurgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Section of Neuroplastic and Reconstructive Surgery, Department of Plastic Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Kathryn R Rosenblatt
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Anesthesiology & Critical Care Medicine, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Lora Clawson
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Donna C Tippett
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Otolaryngology-Head and Neck Surgery, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Department of Physical Medicine and Rehabilitation, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Nicholas Maragakis
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Francesco V Tenore
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Matthew S Fifer
- Research and Exploratory Development Department, Johns Hopkins Applied Physics Laboratory, Laurel, MD, USA
- Hynek Hermansky
- Center for Language and Speech Processing, The Johns Hopkins University, Baltimore, MD, USA
- Human Language Technology Center of Excellence, The Johns Hopkins University, Baltimore, MD, USA
- Nick F Ramsey
- UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Nathan E Crone
- Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, MD, USA

6. Jeong JH, Cho JH, Lee BH, Lee SW. Real-Time Deep Neurolinguistic Learning Enhances Noninvasive Neural Language Decoding for Brain-Machine Interaction. IEEE Trans Cybern 2023; 53:7469-7482. PMID: 36251899; DOI: 10.1109/tcyb.2022.3211694.
Abstract
Electroencephalogram (EEG)-based brain-machine interfaces (BMIs) have been utilized to help patients regain motor function and have recently been validated for use by healthy people because of their ability to directly decipher human intentions. In particular, neurolinguistic research using EEG has been investigated as an intuitive and naturalistic communication tool between humans and machines. In this study, neural language signals based on speech imagery were decoded directly from the human mind using the proposed deep neurolinguistic learning. Through real-time experiments, we evaluated whether BMI-based cooperative tasks between multiple users could be accomplished using a variety of neural languages. We successfully demonstrated a BMI system that supports a variety of scenarios, such as essential activity, collaborative play, and emotional interaction. This outcome presents a novel BMI frontier that can interact at the level of human-like intelligence in real time and extends the boundaries of the communication paradigm.

7. Berezutskaya J, Freudenburg ZV, Vansteensel MJ, Aarnoutse EJ, Ramsey NF, van Gerven MAJ. Direct speech reconstruction from sensorimotor brain activity with optimized deep learning models. J Neural Eng 2023; 20:056010. PMID: 37467739; PMCID: PMC10510111; DOI: 10.1088/1741-2552/ace8be.
Abstract
Objective. Development of brain-computer interface (BCI) technology is key for enabling communication in individuals who have lost the faculty of speech due to severe motor paralysis. A BCI control strategy that is gaining attention employs speech decoding from neural data. Recent studies have shown that a combination of direct neural recordings and advanced computational models can provide promising results. Understanding which decoding strategies deliver the best and most directly applicable results is crucial for advancing the field. Approach. In this paper, we optimized and validated a decoding approach based on speech reconstruction directly from high-density electrocorticography recordings from sensorimotor cortex during a speech production task. Main results. We show that (1) dedicated machine learning optimization of reconstruction models is key for achieving the best reconstruction performance; (2) individual word decoding in reconstructed speech achieves 92%-100% accuracy (chance level is 8%); (3) direct reconstruction from sensorimotor brain activity produces intelligible speech. Significance. These results underline the need for model optimization in achieving the best speech decoding results and highlight the potential that reconstruction-based speech decoding from sensorimotor cortex offers for the development of next-generation BCI technology for communication.
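
The paper's first point, that dedicated model optimization matters, can be illustrated by grid-searching even a simple regularized reconstruction model. Ridge regression and the parameter grid below are stand-ins for the paper's deep models, and all shapes are assumptions:
```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 256))         # frames x neural features (assumed)
Y = rng.standard_normal((500, 40))          # frames x spectrogram bins (assumed)

# Cross-validated search over regularization strength: the optimization step.
search = GridSearchCV(Ridge(), {"alpha": np.logspace(-2, 4, 7)}, cv=5,
                      scoring="r2")
search.fit(X, Y)
print(search.best_params_, search.best_score_)
```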
Affiliation(s)
- Julia Berezutskaya
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Donders Center for Brain, Cognition and Behaviour, Nijmegen 6525 GD, The Netherlands
- Zachary V Freudenburg
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Mariska J Vansteensel
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Erik J Aarnoutse
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Nick F Ramsey
- Brain Center, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht 3584 CX, The Netherlands
- Marcel A J van Gerven
- Donders Center for Brain, Cognition and Behaviour, Nijmegen 6525 GD, The Netherlands

8. Li M, Pun SH, Chen F. Impacts of Cortical Regions on EEG-based Classification of Lexical Tones and Vowels in Spoken Speech. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083036; DOI: 10.1109/embc40787.2023.10340428.
Abstract
Speech impairment is one of the most serious problems for patients with communication disorders, e.g., stroke survivors. Brain-computer interface (BCI) systems have shown potential as an alternative means of control or rehabilitation for neurological damage affecting speech production. Studying the effects of different cortical regions is essential for improving the performance of speech-based BCI systems. This work explored the impacts of different speech-related cortical regions on the electroencephalogram (EEG)-based classification of seventy spoken Mandarin monosyllables carrying four vowels and four lexical tones. Seven audible speech production-related cortical regions were studied, involving Broca's and Wernicke's areas, auditory cortex, motor cortex, prefrontal cortex, sensory cortex, left brain, right brain, and whole brain. Following previous studies in which EEG signals were collected from ten subjects during Mandarin speech production, features were extracted with the Riemannian manifold method, and linear discriminant analysis (LDA) was used as the classifier for the different vowels and lexical tones. The results showed that the classifier performed best when using electrodes from the whole brain, reaching 48.5% for lexical tones and 70.0% for vowels. Vowel classification was higher with Broca's and Wernicke's areas, auditory cortex, or prefrontal cortex than with the motor cortex or sensory cortex; no such differences were observed in the lexical tone classification task.
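
The classification pipeline named in the abstract (Riemannian features plus LDA) maps naturally onto the pyriemann package: covariance matrices per epoch are projected to the tangent space and classified. Epoch shapes and labels below are placeholders, not the study's data:
```python
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.standard_normal((280, 16, 250))     # epochs x channels x samples
y = rng.integers(0, 4, 280)                 # e.g. four lexical tones

pipe = make_pipeline(Covariances(estimator="oas"),  # epoch covariance matrices
                     TangentSpace(),                # Riemannian tangent-space features
                     LinearDiscriminantAnalysis())
print(cross_val_score(pipe, X, y, cv=5).mean())
```
Restricting the channel axis to a region-specific electrode subset before fitting would reproduce the region comparison the abstract describes.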

9. Zhang H, Guo Z, Chen F. The Effects of Different Brain Regions on fNIRS-based Task-state Detection in Speech Imagery. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083588; DOI: 10.1109/embc40787.2023.10340896.
Abstract
Brain-computer interfaces (BCIs) based on speech imagery can decode users' verbal intent and help people with motor disabilities communicate naturally. Functional near-infrared spectroscopy (fNIRS) is a commonly used brain signal acquisition method. Asynchronous BCIs can respond to control commands at any time, which provides great convenience for users. Task state detection, defined as identifying whether the user starts or continues covertly articulating, plays an important role in speech imagery BCIs. To better distinguish the task state from the idle state during speech imagery, this work used fNIRS signals from different brain regions to study their effects on task state detection accuracy. The imagined tonal syllables included four lexical tones and four vowels in Mandarin Chinese. The measured brain regions included Broca's area, Wernicke's area, the superior temporal cortex, and the motor cortex. Task state detection accuracies for imagining tonal monosyllables with four different tones were analyzed. The average accuracy of the four speech imagery tasks was 0.67 based on the whole brain, close to the 0.69 obtained from Broca's area alone, and the accuracies of Broca's area and the whole brain were significantly higher than those of the other brain regions. These findings demonstrate that using a few channels over Broca's area can yield a task state detection accuracy similar to that obtained using all channels of the brain. Moreover, speech imagery with tone 2/3 tasks yielded higher task state detection accuracy than speech imagery with other tones.
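
Comparing task-state detection accuracy across region-specific channel subsets can be sketched as follows; the region-to-channel mapping, features, and SVM classifier are illustrative assumptions, not the paper's setup:
```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 40))          # trials x fNIRS channel features
y = rng.integers(0, 2, 120)                 # 1 = task (covert articulation), 0 = idle

# Hypothetical channel indices per region; real montages differ.
regions = {"Broca": range(0, 8), "Wernicke": range(8, 16),
           "STC": range(16, 24), "Motor": range(24, 32), "whole": range(0, 40)}
for name, chans in regions.items():
    acc = cross_val_score(SVC(), X[:, list(chans)], y, cv=5).mean()
    print(f"{name:>8}: {acc:.2f}")
```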

10. Soroush PZ, Herff C, Ries SK, Shih JJ, Schultz T, Krusienski DJ. The nested hierarchy of overt, mouthed, and imagined speech activity evident in intracranial recordings. Neuroimage 2023; 269:119913. PMID: 36731812; DOI: 10.1016/j.neuroimage.2023.119913.
Abstract
Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models were constructed using spatial, spectral, and temporal brain activity features, and the features and model performances were characterized and compared across the three degrees of behavioral output. The results indicate a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels for the higher behavioral output modes. This provides important insights for the elusive goal of developing imagined speech decoding models that approach their better-established overt speech counterparts.
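
The nesting claim can be made concrete by selecting the most informative channels per speech mode and testing set inclusion. The mutual-information ranking and top-k cutoffs below are illustrative assumptions, and the random placeholder data will usually not show nesting:
```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)                 # speech vs. rest labels

def top_channels(X, y, k):
    scores = mutual_info_classif(X, y, random_state=0)
    return set(np.argsort(scores)[-k:])     # indices of the k best channels

# Channel-wise features per mode (placeholder data); fewer channels are
# assumed to remain relevant as behavioral output decreases.
sets = {mode: top_channels(rng.standard_normal((200, 64)), y, k)
        for mode, k in [("overt", 24), ("mouthed", 16), ("imagined", 8)]}
print(sets["imagined"] <= sets["mouthed"] <= sets["overt"])  # nested subsets?
```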

11. Guo Z, Chen F. Decoding lexical tones and vowels in imagined tonal monosyllables using fNIRS signals. J Neural Eng 2022; 19. PMID: 36317255; DOI: 10.1088/1741-2552/ac9e1d.
Abstract
Objective. Speech is a common way of communication. Decoding verbal intent could provide a naturalistic communication channel for people with severe motor disabilities. The active brain-computer interface (BCI) speller is one of the most commonly used speech BCIs. To reduce the spelling time of Chinese words, identifying the vowels and tones embedded in imagined Chinese words is essential. Functional near-infrared spectroscopy (fNIRS) has been widely used in BCIs because it is portable, non-invasive, safe, low cost, and has a relatively high spatial resolution. Approach. In this study, an active fNIRS-based BCI speller is presented in which participants covertly rehearsed tonal monosyllables composed of vowels (i.e., /a/, /i/, /o/, and /u/) and the four lexical tones of Mandarin Chinese (i.e., tones 1, 2, 3, and 4) for 10 s. Main results. fNIRS results showed significant differences in the right superior temporal gyrus between imagined vowels with tones 2/3/4 and those with tone 1 (i.e., more activations and stronger connections to other brain regions for imagined vowels with tones 2/3/4 than for those with tone 1). Speech-related areas for tone imagery (i.e., the right hemisphere) provided the majority of the information for identifying tones, while the left hemisphere had an advantage in vowel identification. Decoding both vowels and tones during the post-stimulus 15 s period, the average classification accuracies exceeded 40% and 70% in multiclass (i.e., four classes) and binary settings, respectively. To spell words more quickly, the decoding time window was reduced from 15 s to 2.5 s without a significant reduction in classification accuracy. Significance. For the first time, this work demonstrated the possibility of discriminating lexical tones and vowels in imagined tonal syllables simultaneously. In addition, the reduced decoding time window indicates that the spelling time of Chinese words can be significantly reduced in fNIRS-based BCIs.
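
The time-window analysis, decoding from progressively shorter post-stimulus windows, can be sketched like this; the sampling rate, mean-HbO features, and classifier are assumptions rather than the paper's exact settings:
```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

fs = 10                                     # assumed fNIRS sampling rate (Hz)
rng = np.random.default_rng(0)
hbo = rng.standard_normal((160, 20, 15 * fs))  # trials x channels x 15 s of HbO
y = rng.integers(0, 4, 160)                 # four tones (or four vowels)

for win_s in (15, 10, 5, 2.5):
    X = hbo[:, :, : int(win_s * fs)].mean(axis=2)   # mean HbO per channel
    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=5).mean()
    print(f"{win_s:>4} s window: {acc:.2f}")
```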
Affiliation(s)
- Zengzhi Guo
- School of Electronics and Information Engineering, Harbin Institute of Technology, Harbin, People's Republic of China
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, People's Republic of China
- Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen, People's Republic of China

12. Cooney C, Folli R, Coyle D. Opportunities, pitfalls and trade-offs in designing protocols for measuring the neural correlates of speech. Neurosci Biobehav Rev 2022; 140:104783. PMID: 35907491; DOI: 10.1016/j.neubiorev.2022.104783.
Abstract
Research on decoding speech and speech-related processes directly from the human brain has intensified in recent years, as such a decoder has the potential to positively impact people with limited communication capacity due to disease or injury. Additionally, it can enable entirely new forms of human-computer interaction and human-machine communication in general, and facilitate better neuroscientific understanding of speech processes. Here, we synthesize the literature on neural speech decoding with respect to how speech decoding experiments have been conducted, coalescing around a necessity for thoughtful experimental design aimed at specific research goals and robust procedures for evaluating speech decoding paradigms. We examine the use of different modalities for presenting stimuli to participants, methods for constructing paradigms including timings and speech rhythms, and possible linguistic considerations. In addition, novel methods for eliciting naturalistic speech and validating imagined speech task performance in experimental settings are presented based on recent research. We also describe the multitude of terms used to instruct participants on how to produce imagined speech during experiments and propose methods for investigating the effect of these terms on imagined speech decoding. We demonstrate that the range of experimental procedures used in neural speech decoding studies can have unintended consequences that impact the quality of the knowledge obtained. The review delineates the strengths and weaknesses of present approaches and proposes methodological advances that we anticipate will enhance experimental design and progress toward the optimal design of movement-independent direct speech brain-computer interfaces.
Affiliation(s)
- Ciaran Cooney
- Intelligent Systems Research Centre, Ulster University, Derry, UK
- Raffaella Folli
- Institute for Research in Social Sciences, Ulster University, Jordanstown, UK
- Damien Coyle
- Intelligent Systems Research Centre, Ulster University, Derry, UK

13. Jeong JH, Cho JH, Lee YE, Lee SH, Shin GH, Kweon YS, Millán JDR, Müller KR, Lee SW. 2020 International brain-computer interface competition: A review. Front Hum Neurosci 2022; 16:898300. PMID: 35937679; PMCID: PMC9354666; DOI: 10.3389/fnhum.2022.898300.
Abstract
The brain-computer interface (BCI) has been investigated as a form of communication tool between the brain and external devices, and BCIs have been extended beyond communication and control over the years. The 2020 international BCI competition aimed to provide high-quality, open-access neuroscientific data that could be used to evaluate the current degree of technical advances in BCI. Although a variety of challenges remain for future BCI advances, we discuss some of the more recent application directions: (i) few-shot EEG learning, (ii) micro-sleep detection, (iii) imagined speech decoding, (iv) cross-session classification, and (v) EEG (+ear-EEG) detection in an ambulatory environment. Not only scientists from the BCI field but also scholars with a broad variety of backgrounds and nationalities participated in the competition to address these challenges. Each dataset was prepared and split into three parts, released to the competitors as training and validation sets followed by a test set. Remarkable BCI advances were identified through the 2020 competition, indicating some trends of interest to BCI researchers.
Affiliation(s)
- Ji-Hoon Jeong
- School of Computer Science, Chungbuk National University, Cheongju, South Korea
- Jeong-Hyun Cho
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Eun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seo-Hyun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Gi-Hwan Shin
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Seok Kweon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- José del R. Millán
- Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX, United States
- Klaus-Robert Müller
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Machine Learning Group, Department of Computer Science, Berlin Institute of Technology, Berlin, Germany
- Max Planck Institute for Informatics, Saarbrucken, Germany
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Department of Artificial Intelligence, Korea University, Seoul, South Korea

14. Lee KW, Lee DH, Kim SJ, Lee SW. Decoding Neural Correlation of Language-Specific Imagined Speech using EEG Signals. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1977-1980. PMID: 36086641; DOI: 10.1109/embc48229.2022.9871721.
Abstract
Speech impairments due to cerebral lesions and degenerative disorders can be devastating. For humans with severe speech deficits, imagined speech in the brain-computer interface has been a promising hope for reconstructing the neural signals of speech production. However, studies in the EEG-based imagined speech domain still have limitations due to high variability in spatial and temporal information and a low signal-to-noise ratio. In this paper, we investigated the neural signals of two groups of native speakers performing imagination tasks in two different languages, English and Chinese. Our assumption was that English, a non-tonal and phonogram-based language, would show spectral differences in neural computation compared to Chinese, a tonal and ideogram-based language. The results showed a significant difference in relative power spectral density between English and Chinese in specific frequency band groups. Also, the spatial pattern of Chinese native speakers in the theta band was distinctive during the imagination task. Hence, this paper suggests key spectral and spatial information for word imagination in a given language when decoding the neural signals of speech. Clinical relevance: Imagined speech-related studies lead to the development of assistive communication technology, especially for patients with speech disorders such as aphasia due to brain damage. This study suggests significant spectral features by analyzing cross-language differences in EEG-based imagined speech using two widely used languages.
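
The spectral comparison rests on relative power spectral density per canonical band, which can be computed with Welch's method as in this sketch; band edges and the epoch shape are assumptions:
```python
import numpy as np
from scipy.signal import welch

fs = 250                                    # assumed EEG sampling rate (Hz)
bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def relative_band_power(epoch):
    """epoch: (n_channels, n_samples) -> (n_bands,) channel-averaged relative power."""
    f, pxx = welch(epoch, fs=fs, nperseg=fs)          # PSD per channel
    total = pxx[:, (f >= 1) & (f <= 45)].sum(axis=1)  # broadband reference
    rel = [pxx[:, (f >= lo) & (f < hi)].sum(axis=1) / total
           for lo, hi in bands.values()]
    return np.mean(rel, axis=1)             # average over channels

epoch = np.random.default_rng(0).standard_normal((64, 3 * fs))
print(dict(zip(bands, relative_band_power(epoch).round(3))))
```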

15. Liu Y, Gong A, Ding P, Zhao L, Qian Q, Zhou J, Su L, Fu Y. [Key technology of brain-computer interaction based on speech imagery]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi (Journal of Biomedical Engineering) 2022; 39:596-611. PMID: 35788530; PMCID: PMC10950764; DOI: 10.7507/1001-5515.202107018.
Abstract
Speech expression is an important high-level cognitive behavior of human beings, and its realization is closely related to brain activity. Both actual speech expression and speech imagination partly activate the same brain areas. Therefore, speech imagery has become a new paradigm of brain-computer interaction. Brain-computer interfaces (BCIs) based on speech imagery have the advantages of spontaneous generation, no training requirement, and friendliness to subjects, so they have attracted the attention of many scholars. However, this interaction technology is not yet mature in the design of experimental paradigms and the choice of imagination materials, and many issues remain to be urgently addressed. In response to these problems, this article first expounds the neural mechanism of speech imagery. Then, by reviewing previous BCI research on speech imagery, the mainstream methods and core technologies of experimental paradigms, imagination materials, data processing, and so on are systematically analyzed. Finally, the key problems and main challenges that restrict the development of this type of BCI are discussed, and the future development and application prospects of speech imagery BCI systems are outlined.
Affiliation(s)
- Yanpeng Liu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Anmin Gong
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Peng Ding
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Lei Zhao
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Qian Qian
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Jianhua Zhou
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Lei Su
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Yunfa Fu
- School of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, P. R. China
- Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, P. R. China
- College of Information Engineering, Engineering University of PAP, Xi'an 710000, P. R. China
- Faculty of Science, Kunming University of Science and Technology, Kunming 650500, P. R. China

16. Chandler JA, Van der Loos KI, Boehnke S, Beaudry JS, Buchman DZ, Illes J. Brain Computer Interfaces and Communication Disabilities: Ethical, Legal, and Social Aspects of Decoding Speech From the Brain. Front Hum Neurosci 2022; 16:841035. PMID: 35529778; PMCID: PMC9069963; DOI: 10.3389/fnhum.2022.841035.
Abstract
A brain-computer interface technology that can decode the neural signals associated with attempted but unarticulated speech could offer an efficient future means of communication for people with severe motor impairments. Recent demonstrations have validated this approach. Here, we assume that it will be possible in the future to decode imagined (i.e., attempted but unarticulated) speech in people with severe motor impairments, and we consider the characteristics that could maximize the social utility of a BCI for communication. As a social interaction, communication involves the needs and goals of both speaker and listener, particularly in contexts that have significant potential consequences. We explore three high-consequence legal situations in which neurally decoded speech could have implications: testimony, where decoded speech is used as evidence; consent and capacity, where it may be used as a means of agency and participation, such as consent to medical treatment; and harm, where such communications may be networked or may cause harm to others. We then illustrate how design choices might impact the social and legal acceptability of these technologies.
Affiliation(s)
- Jennifer A. Chandler
- Bertram Loeb Research Chair, Faculty of Law, University of Ottawa, Ottawa, ON, Canada
- Susan Boehnke
- Centre for Neuroscience Studies, Queen's University, Kingston, ON, Canada
- Jonas S. Beaudry
- Institute for Health and Social Policy (IHSP) and Faculty of Law, McGill University, Montreal, QC, Canada
- Daniel Z. Buchman
- Centre for Addiction and Mental Health, Dalla Lana School of Public Health, Krembil Research Institute, University of Toronto Joint Centre for Bioethics, Toronto, ON, Canada
- Judy Illes
- Division of Neurology, Department of Medicine, University of British Columbia, Vancouver, BC, Canada

17. Nieto N, Peterson V, Rufiner HL, Kamienkowski JE, Spies R. Thinking out loud, an open-access EEG-based BCI dataset for inner speech recognition. Sci Data 2022; 9:52. PMID: 35165308; PMCID: PMC8844234; DOI: 10.1038/s41597-022-01147-2.
Abstract
Surface electroencephalography is a standard and noninvasive way to measure electrical brain activity. Recent advances in artificial intelligence have led to significant improvements in the automatic detection of brain patterns, enabling increasingly fast, reliable, and accessible brain-computer interfaces. Different paradigms have been used to enable human-machine interaction, and the last few years have brought a marked increase in interest in interpreting and characterizing the "inner voice" phenomenon. This paradigm, called inner speech, raises the possibility of executing an order just by thinking about it, allowing a "natural" way of controlling external devices. Unfortunately, the lack of publicly available electroencephalography datasets restricts the development of new techniques for inner speech recognition. A ten-participant dataset acquired under this and two other related paradigms, recorded with a 136-channel acquisition system, is presented. The main purpose of this work is to provide the scientific community with an open-access, multiclass electroencephalography database of inner speech commands that could be used to better understand the related brain mechanisms.
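
Once downloaded, such a dataset might be used along these lines. The file name and the flattened-feature baseline are hypothetical, not the dataset's actual layout or an endorsed pipeline:
```python
import mne
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical epochs file exported from the dataset (path is an assumption).
epochs = mne.read_epochs("sub-01_inner-speech-epo.fif")
X = epochs.get_data()                       # (n_epochs, n_channels, n_times)
y = epochs.events[:, 2]                     # command labels

X_flat = X.reshape(len(X), -1)              # naive flattened-feature baseline
print(cross_val_score(LogisticRegression(max_iter=1000), X_flat, y, cv=5).mean())
```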
Affiliation(s)
- Nicolás Nieto
- Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, sinc(i), FICH-UNL/CONICET, Santa Fe, Argentina
- Instituto de Matemática Aplicada del Litoral, IMAL-UNL/CONICET, Santa Fe, Argentina
- Victoria Peterson
- Instituto de Matemática Aplicada del Litoral, IMAL-UNL/CONICET, Santa Fe, Argentina
- Hugo Leonardo Rufiner
- Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, sinc(i), FICH-UNL/CONICET, Santa Fe, Argentina
- Laboratorio de Cibernética, Universidad Nacional de Entre Ríos, FI-UNER, Oro Verde, Argentina
- Juan Esteban Kamienkowski
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación, Universidad de Buenos Aires - CONICET, Ciudad Autónoma de Buenos Aires, Argentina
- Ruben Spies
- Instituto de Matemática Aplicada del Litoral, IMAL-UNL/CONICET, Santa Fe, Argentina

18. Proix T, Delgado Saa J, Christen A, Martin S, Pasley BN, Knight RT, Tian X, Poeppel D, Doyle WK, Devinsky O, Arnal LH, Mégevand P, Giraud AL. Imagined speech can be decoded from low- and cross-frequency intracranial EEG features. Nat Commun 2022; 13:48. PMID: 35013268; PMCID: PMC8748882; DOI: 10.1038/s41467-021-27725-3.
Abstract
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to overt speech and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
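
The two feature families highlighted here, low-frequency power and local cross-frequency dynamics, can be sketched with Hilbert-transform estimates. The filter bands and the mean-vector-length coupling measure are assumptions, not the paper's exact features:
```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000                                   # assumed iEEG sampling rate (Hz)

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(x, phase_band=(4, 8), amp_band=(70, 150)):
    """Mean-vector-length phase-amplitude coupling on one electrode."""
    phase = np.angle(hilbert(bandpass(x, *phase_band)))
    amp = np.abs(hilbert(bandpass(x, *amp_band)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

x = np.random.default_rng(0).standard_normal(5 * fs)   # one electrode, 5 s
low_power = bandpass(x, 1, 8).var()                    # low-frequency power feature
print(low_power, pac_mvl(x))
```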
Affiliation(s)
- Timothée Proix
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Jaime Delgado Saa
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Andy Christen
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Stephanie Martin
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Brian N Pasley
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
- Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
- Department of Psychology, University of California, Berkeley, Berkeley, USA
- Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Werner K Doyle
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Orrin Devinsky
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Luc H Arnal
- Institut de l'Audition, Institut Pasteur, INSERM, F-75012, Paris, France
- Pierre Mégevand
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Neurology, Geneva University Hospitals, Geneva, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland

19. Kiroy V, Bakhtin O, Krivko E, Lazurenko D, Aslanyan E, Shaposhnikov D, Shcherban I. Spoken and Inner Speech-related EEG Connectivity in Different Spatial Direction. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103224.

20. Cooney C, Folli R, Coyle D. A bimodal deep learning architecture for EEG-fNIRS decoding of overt and imagined speech. IEEE Trans Biomed Eng 2021; 69:1983-1994. PMID: 34874850; DOI: 10.1109/tbme.2021.3132861.
Abstract
OBJECTIVE Brain-computer interface (BCI) studies are increasingly leveraging different attributes of multiple signal modalities simultaneously. Bimodal data acquisition protocols combining the temporal resolution of electroencephalography (EEG) with the spatial resolution of functional near-infrared spectroscopy (fNIRS) require novel approaches to decoding. METHODS We present an EEG-fNIRS hybrid BCI that employs a new bimodal deep neural network architecture consisting of two convolutional sub-networks (subnets) to decode overt and imagined speech. Features from each subnet are fused before further feature extraction and classification. Nineteen participants performed overt and imagined speech in a novel cue-based paradigm enabling investigation of stimulus and linguistic effects on decoding. RESULTS Using the hybrid approach, classification accuracies (46.31% and 34.29% for overt and imagined speech, respectively; chance: 25%) indicated a significant improvement over EEG used independently for imagined speech (p=0.020), while tending towards significance for overt speech (p=0.098). In comparison with fNIRS, significant improvements for both speech types were achieved with bimodal decoding (p<0.001). There was a mean difference of ~12.02% between overt and imagined speech, with accuracies as high as 87.18% and 53%. Deeper subnets enhanced performance, while the stimulus affected overt and imagined speech in significantly different ways. CONCLUSION The bimodal approach was a significant improvement over unimodal results for several tasks. Results indicate the potential of multimodal deep learning for enhancing neural signal decoding. SIGNIFICANCE This novel architecture can be used to enhance speech decoding from bimodal neural signals.
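
The core architectural idea, two convolutional subnets (one per modality) whose features are fused before classification, can be sketched as follows. Layer sizes, input shapes, and pooling are assumptions, not the paper's architecture:
```python
import torch
import torch.nn as nn

def subnet(in_ch):
    """One convolutional subnet over (batch, channels, time) input."""
    return nn.Sequential(
        nn.Conv1d(in_ch, 16, kernel_size=7, padding=3), nn.ReLU(),
        nn.AdaptiveAvgPool1d(8), nn.Flatten())       # -> (batch, 16*8)

class BimodalNet(nn.Module):
    def __init__(self, eeg_ch=64, fnirs_ch=16, n_classes=4):
        super().__init__()
        self.eeg_net, self.fnirs_net = subnet(eeg_ch), subnet(fnirs_ch)
        self.classifier = nn.Sequential(
            nn.Linear(2 * 16 * 8, 64), nn.ReLU(), nn.Linear(64, n_classes))

    def forward(self, eeg, fnirs):          # each: (batch, channels, time)
        fused = torch.cat([self.eeg_net(eeg), self.fnirs_net(fnirs)], dim=1)
        return self.classifier(fused)       # fusion before classification

model = BimodalNet()
logits = model(torch.randn(8, 64, 500), torch.randn(8, 16, 100))
print(logits.shape)                          # torch.Size([8, 4])
```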

21. Dash D, Ferrari P, Babajani-Feremi A, Borna A, Schwindt PDD, Wang J. Magnetometers vs Gradiometers for Neural Speech Decoding. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:6543-6546. PMID: 34892608; DOI: 10.1109/embc46164.2021.9630489.
Abstract
Neural speech decoding aims to provide natural-rate communication assistance to patients in a locked-in state (e.g., due to amyotrophic lateral sclerosis, ALS), in contrast to traditional brain-computer interface (BCI) spellers, which are slow. Recent studies have shown that magnetoencephalography (MEG) is a suitable neuroimaging modality for neural speech decoding, given its excellent temporal resolution, which can characterize the fast dynamics of speech. Gradiometers have been the preferred choice for sensor-space analysis with MEG due to their efficacy in noise suppression over magnetometers. However, recently developed wearable MEG devices based on optically pumped magnetometers (OPMs) have shown great potential for future BCI applications; yet no prior study has evaluated the performance of magnetometers in neural speech decoding. In this study, we decoded imagined and spoken speech from the MEG signals of seven healthy participants and compared the performance of magnetometers and gradiometers. Experimental results indicated that magnetometers also have potential for neural speech decoding, although their performance was significantly lower than that obtained with gradiometers. Further, we implemented a wavelet-based denoising strategy that significantly improved the performance of both magnetometers and gradiometers. These findings reconfirm that gradiometers are preferable for MEG-based decoding analysis but also point towards the possibility of using magnetometers (or OPMs) in the development of next-generation speech BCIs.
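A common form of wavelet denoising, and a plausible reading of the strategy mentioned above, is to decompose each sensor's time series, soft-threshold the detail coefficients, and reconstruct. The paper's exact wavelet family, decomposition level, and thresholding rule are not reproduced here, so those below are assumptions:

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Denoise a 1-D sensor time series by soft-thresholding
    wavelet detail coefficients (universal threshold)."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise sigma from the finest detail coefficients
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    # Soft-threshold all detail bands, keep the approximation untouched
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                              for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

# Example on a noisy synthetic trace standing in for one MEG channel
t = np.linspace(0, 1, 1000)
clean = np.sin(2 * np.pi * 7 * t)
noisy = clean + 0.4 * np.random.randn(t.size)
print(np.std(noisy - clean), np.std(wavelet_denoise(noisy) - clean))
```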
22
Si X, Li S, Xiang S, Yu J, Ming D. Imagined speech increases the hemodynamic response and functional connectivity of the dorsal motor cortex. J Neural Eng 2021; 18. [PMID: 34507311] [DOI: 10.1088/1741-2552/ac25d9] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0]
Abstract
Objective. Decoding imagined speech from brain signals could provide a more natural, user-friendly way to develop the next generation of brain-computer interfaces (BCIs). With the advantages of being non-invasive, portable, of relatively high spatial resolution, and insensitive to motion artifacts, functional near-infrared spectroscopy (fNIRS) shows great potential for developing a non-invasive speech BCI. However, fNIRS evidence uncovering the neural mechanism of imagined speech is lacking. Our goal was to investigate the specific brain regions and the corresponding cortico-cortical functional connectivity features during imagined speech with fNIRS. Approach. fNIRS signals were recorded from 13 subjects' bilateral motor and prefrontal cortices during overt and covert word repetition. Cortical activation was determined through mean oxy-hemoglobin concentration changes, and functional connectivity was calculated with Pearson's correlation coefficient. Main results. (a) The bilateral dorsal motor cortex was significantly activated during covert speech, whereas the bilateral ventral motor cortex was significantly activated during overt speech. (b) As a subregion of the motor cortex, the sensorimotor cortex (SMC) showed a dominant dorsal response to the covert speech condition and a dominant ventral response to the overt speech condition. (c) Broca's area was deactivated during covert speech but activated during overt speech. (d) Compared to overt speech, dorsal SMC (dSMC)-related functional connections were enhanced during covert speech. Significance. We provide fNIRS evidence for the involvement of the dSMC in speech imagery. The dSMC is the speech imagery network's key hub and is probably involved in sensorimotor information processing during covert speech. This study could inspire the BCI community to focus on the potential contribution of the dSMC during speech imagery.
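Channel-wise functional connectivity via Pearson's correlation, as used above, reduces to correlating each pair of channels' HbO time courses. A minimal sketch, in which the channel count and data are placeholders:

```python
import numpy as np

def fc_matrix(hbo):
    """Functional connectivity as pairwise Pearson correlation.
    hbo: (n_channels, n_samples) oxy-hemoglobin time courses."""
    return np.corrcoef(hbo)

# Example: 20 hypothetical fNIRS channels, 300 samples each
rng = np.random.default_rng(0)
hbo = rng.standard_normal((20, 300))
fc = fc_matrix(hbo)
print(fc.shape, fc[0, 0])  # (20, 20), 1.0 on the diagonal
```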
Affiliation(s)
- Xiaopeng Si
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Institute of Applied Psychology, Tianjin University, Tianjin 300350, People's Republic of China
- Sicheng Li
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Shaoxin Xiang
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Jiayue Yu
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
23
Sarmiento LC, Villamizar S, López O, Collazos AC, Sarmiento J, Rodríguez JB. Recognition of EEG Signals from Imagined Vowels Using Deep Learning Methods. Sensors (Basel) 2021; 21:6503. [PMID: 34640824] [PMCID: PMC8512781] [DOI: 10.3390/s21196503] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
Abstract
The use of imagined speech with electroencephalographic (EEG) signals is a promising field of brain-computer interfaces (BCIs) that seeks communication between language-related areas of the cerebral cortex and devices or machines. However, the complexity of this brain process makes the analysis and classification of such signals a relevant research topic. The goals of this study were: to develop a new Deep Learning (DL) algorithm, referred to as CNNeeg1-1, to recognize EEG signals in imagined vowel tasks; to create an imagined speech database of 50 subjects performing imagined Spanish vowels (/a/, /e/, /i/, /o/, /u/); and to contrast the performance of the CNNeeg1-1 algorithm with the Shallow CNN and EEGNet DL benchmark algorithms using an open-access database (BD1) and the newly developed database (BD2). A mixed-design analysis of variance was conducted to assess intra-subject and inter-subject training of the proposed algorithms. The results show that, for intra-subject training, the best performance among the Shallow CNN, EEGNet, and CNNeeg1-1 methods in classifying imagined vowels (/a/, /e/, /i/, /o/, /u/) was exhibited by CNNeeg1-1, with an accuracy of 65.62% on the BD1 database and 85.66% on the BD2 database.
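A compact convolutional classifier in the spirit of the Shallow CNN baseline is sketched below; the actual CNNeeg1-1 architecture is not reproduced here, and the channel count, epoch length, and layer sizes are illustrative assumptions:

```python
import torch
import torch.nn as nn

class VowelCNN(nn.Module):
    """Minimal temporal+spatial CNN for 5-class imagined-vowel EEG."""
    def __init__(self, n_channels=14, n_samples=256, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 20, kernel_size=(1, 25)),           # temporal filters
            nn.Conv2d(20, 20, kernel_size=(n_channels, 1)),  # spatial filters
            nn.BatchNorm2d(20),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 15), stride=(1, 5)),
            nn.Flatten(),
        )
        with torch.no_grad():  # infer the flattened size from a dummy input
            n_flat = self.features(torch.zeros(1, 1, n_channels, n_samples)).shape[1]
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x))

model = VowelCNN()
print(model(torch.randn(4, 1, 14, 256)).shape)  # torch.Size([4, 5])
```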
Affiliation(s)
- Luis Carlos Sarmiento
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia
- Sergio Villamizar
- Department of Electrical and Electronics Engineering, School of Engineering, Universidad Nacional de Colombia, Bogotá 111321, Colombia
- Omar López
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia
- Ana Claros Collazos
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia
- Jhon Sarmiento
- Departamento de Tecnología, Universidad Pedagógica Nacional, Bogotá 111321, Colombia
- Jan Bacca Rodríguez
- Department of Electrical and Electronics Engineering, School of Engineering, Universidad Nacional de Colombia, Bogotá 111321, Colombia
24
Lee DY, Lee M, Lee SW. Decoding Imagined Speech Based on Deep Metric Learning for Intuitive BCI Communication. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1363-1374. [PMID: 34255630] [DOI: 10.1109/tnsre.2021.3096874] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3]
Abstract
Imagined speech is a highly promising paradigm due to its intuitive application and multiclass scalability in the field of brain-computer interfaces. However, optimal feature extraction and classifiers have not yet been established. Furthermore, retraining still requires a large number of trials when new classes are added. The aims of this study were (i) to increase classification performance for imagined speech and (ii) to add a new class to a pretrained classifier using only a small number of trials. We propose a novel framework based on deep metric learning that learns a distance by comparing the similarity between samples. We also applied the instantaneous frequency and spectral entropy, features commonly used for speech signals, to electroencephalography signals recorded during imagined speech. The method was evaluated on two public datasets (6-class Coretto DB and 5-class BCI Competition DB). We achieved a 6-class accuracy of 45.00 ± 3.13% and a 5-class accuracy of 48.10 ± 3.68% using the proposed method, which significantly outperformed state-of-the-art methods. Additionally, we verified that a new class could be detected through incremental learning with a small number of trials. As a result, the average accuracy was 44.50 ± 0.26% for Coretto DB and 47.12 ± 0.27% for BCI Competition DB, similar to the baseline accuracy without incremental learning. Our results show that accuracy can be greatly improved, even with a small number of trials, by selecting appropriate features from imagined speech. The proposed framework could be used directly to help construct an extensible, intuitive communication system based on brain-computer interfaces.
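The two speech-derived features named above can be computed per channel with standard signal-processing tools. A sketch, under the assumption of a Hilbert-transform definition of instantaneous frequency and a Shannon entropy of the normalized power spectrum (the paper's exact parameterization is not reproduced):

```python
import numpy as np
from scipy.signal import hilbert, periodogram

def instantaneous_frequency(x, fs):
    """Instantaneous frequency (Hz) from the Hilbert analytic signal."""
    phase = np.unwrap(np.angle(hilbert(x)))
    return np.diff(phase) * fs / (2 * np.pi)

def spectral_entropy(x, fs):
    """Shannon entropy of the normalized power spectral density."""
    _, psd = periodogram(x, fs)
    p = psd / psd.sum()
    p = p[p > 0]              # avoid log(0)
    return -np.sum(p * np.log2(p))

fs = 256
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(t.size)
print(instantaneous_frequency(eeg, fs).mean(), spectral_entropy(eeg, fs))
```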
25
Gao X, Wang Y, Chen X, Gao S. Interface, interaction, and intelligence in generalized brain-computer interfaces. Trends Cogn Sci 2021; 25:671-684. [PMID: 34116918] [DOI: 10.1016/j.tics.2021.04.003] [Citation(s) in RCA: 52] [Impact Index Per Article: 17.3]
Abstract
A brain-computer interface (BCI) establishes a direct communication channel between a brain and an external device. With recent advances in neurotechnology and artificial intelligence (AI), the brain signals used in BCI communication have advanced from sensation and perception to higher-level cognitive activities. While the field of BCI has grown rapidly in the past decades, the core technologies and innovative ideas behind seemingly unrelated BCI systems have never been summarized from an evolutionary point of view. Here, we review various BCI paradigms and present an evolutionary model of generalized BCI technology comprising three stages: interface, interaction, and intelligence (I3). We also highlight challenges, opportunities, and future perspectives in the development of new BCI technology.
Affiliation(s)
- Xiaorong Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Yijun Wang
- Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Xiaogang Chen
- Institute of Biomedical Engineering, Chinese Academy of Medical Sciences, Tianjin, China
- Shangkai Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
26
Standardization-refinement domain adaptation method for cross-subject EEG-based classification in imagined speech recognition. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2020.11.013] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7]
27
Dash D, Wisler A, Ferrari P, Davenport EM, Maldjian J, Wang J. MEG Sensor Selection for Neural Speech Decoding. IEEE Access 2020; 8:182320-182337. [PMID: 33204579] [PMCID: PMC7668411] [DOI: 10.1109/access.2020.3028831] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3]
Abstract
Direct decoding of speech from the brain is a faster alternative to current electroencephalography (EEG) speller-based brain-computer interfaces (BCIs) for providing communication assistance to locked-in patients. Magnetoencephalography (MEG) has recently shown great potential as a non-invasive neuroimaging modality for neural speech decoding, owing in part to its spatial selectivity over other high-temporal-resolution devices. Standard MEG systems have a large number of cryogenically cooled channels/sensors (200-300) encapsulated within a fixed liquid-helium dewar, precluding their use as wearable BCI devices. Fortunately, recently developed optically pumped magnetometers (OPMs) do not require cryogens and have the potential to be wearable and movable, making them more suitable for BCI applications. Their design is also modular, allowing customized montages that include only the sensors necessary for a particular task. As the number of sensors heavily influences the cost, size, and weight of MEG systems, minimizing the number of sensors is critical for designing practical MEG-based BCIs. In this study, we sought to identify an optimal set of MEG channels for decoding imagined and spoken phrases from MEG signals. Using a forward selection algorithm with a support vector machine classifier, we found that nine optimally located MEG gradiometers provided higher decoding accuracy than using all channels. Additionally, the forward selection algorithm achieved performance similar to dimensionality reduction with a stacked sparse autoencoder. Analysis of the spatial dynamics of speech decoding suggested that sensors in both the left and right hemispheres contribute to speech decoding. Sensors located approximately near Broca's area were commonly among the higher-ranked sensors across all subjects.
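Greedy forward selection of sensors with an SVM, as described above, can be sketched with scikit-learn. The feature-matrix layout (trials × channels, one feature per channel here for brevity) and the stopping criterion are simplifying assumptions:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def forward_select_channels(X, y, n_keep=9, cv=5):
    """Greedily add the channel that most improves CV accuracy.
    X: (n_trials, n_channels) feature matrix, y: class labels."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < n_keep and remaining:
        scores = [(cross_val_score(SVC(kernel="linear"),
                                   X[:, selected + [c]], y, cv=cv).mean(), c)
                  for c in remaining]
        best_score, best_c = max(scores)
        selected.append(best_c)
        remaining.remove(best_c)
    return selected

# Toy example: 100 trials, 20 "channels", 2 classes
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 100)
X = rng.standard_normal((100, 20))
X[:, 3] += y            # make channel 3 informative
print(forward_select_channels(X, y, n_keep=3))
```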
Affiliation(s)
- Debadatta Dash
- Department of Electrical and Computer Engineering, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX 78712, USA
- Alan Wisler
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX 78712, USA
- Paul Ferrari
- MEG Laboratory, Dell Children's Medical Center, Austin, TX 78723, USA
- Department of Psychology, The University of Texas at Austin, Austin, TX 78712, USA
- Joseph Maldjian
- Department of Radiology, University of Texas at Southwestern, Dallas, TX 75390, USA
- Jun Wang
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX 78712, USA
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX 78712, USA
28
Cooney C, Korik A, Folli R, Coyle D. Evaluation of Hyperparameter Optimization in Machine and Deep Learning Methods for Decoding Imagined Speech EEG. Sensors (Basel) 2020; 20:4629. [PMID: 32824559] [PMCID: PMC7472624] [DOI: 10.3390/s20164629] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3]
Abstract
Classification of electroencephalography (EEG) signals corresponding to imagined speech production is important for the development of a direct-speech brain-computer interface (DS-BCI). Deep learning (DL) has been utilized with great success across several domains. However, it remains an open question whether DL methods provide significant advances over traditional machine learning (ML) approaches for the classification of imagined speech. Furthermore, hyperparameter (HP) optimization has been neglected in DL-EEG studies, leaving the significance of its effects uncertain. In this study, we aim to improve the classification of imagined speech EEG by employing DL methods while also statistically evaluating the impact of HP optimization on classifier performance. We trained three distinct convolutional neural networks (CNNs) on imagined speech EEG using a nested cross-validation approach to HP optimization. Each of the CNNs evaluated was designed specifically for EEG decoding. An imagined speech EEG dataset consisting of both words and vowels facilitated training on each set independently. CNN results were compared with three benchmark ML methods: Support Vector Machine, Random Forest, and regularized Linear Discriminant Analysis. Intra- and inter-subject methods of HP optimization were tested, and the effects of HPs were statistically analyzed. Accuracies obtained by the CNNs were significantly greater than those of the benchmark methods when trained on both datasets (words: 24.97%, p < 1 × 10-7, chance: 16.67%; vowels: 30.00%, p < 1 × 10-7, chance: 20%). The effects of varying HP values, and the interactions between HPs and the CNNs, were both statistically significant. These results demonstrate how critical HP optimization is when training CNNs to decode imagined speech.
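Nested cross-validation for HP optimization, as used above, wraps an inner hyperparameter search inside an outer evaluation loop so that test folds never inform HP choices. A generic scikit-learn sketch, where the estimator and HP grid merely stand in for the paper's CNNs:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

# Toy stand-in for epoch-level EEG features and labels
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 64))
y = rng.integers(0, 4, 120)

# Inner loop: 3-fold grid search over HPs; outer loop: 5-fold evaluation
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10],
                                        "gamma": ["scale", 0.01]}, cv=3)
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean(), outer_scores.std())
```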
Affiliation(s)
- Ciaran Cooney (correspondence)
- Intelligent Systems Research Centre, Ulster University, Londonderry BT48 7JL, UK
- Attila Korik
- Intelligent Systems Research Centre, Ulster University, Londonderry BT48 7JL, UK
- Raffaella Folli
- Institute for Research in Social Sciences, Ulster University, Jordanstown BT37 0QB, UK
- Damien Coyle
- Intelligent Systems Research Centre, Ulster University, Londonderry BT48 7JL, UK
29
Dash D, Ferrari P, Wang J. Decoding Imagined and Spoken Phrases From Non-invasive Neural (MEG) Signals. Front Neurosci 2020; 14:290. [PMID: 32317917] [PMCID: PMC7154084] [DOI: 10.3389/fnins.2020.00290] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8]
Abstract
Speech production is a hierarchical mechanism involving the synchronization of the brain and the oral articulators, through which linguistic intentions are transformed into meaningful sounds. Individuals with locked-in syndrome (fully paralyzed but aware) lose their motor abilities completely, including articulation and even eye movement. The neural pathway may be the only option for these patients to resume a certain level of communication. Current brain-computer interfaces (BCIs) rely on patients' visual and attentional correlates to build communication, resulting in a slow communication rate (a few words per minute). Direct decoding of imagined speech from neural signals (and then driving a speech synthesizer) has the potential for a higher communication rate. In this study, we investigated the decoding of five imagined and spoken phrases from single-trial, non-invasive magnetoencephalography (MEG) signals collected from eight adult subjects. Two machine learning algorithms were used. One was an artificial neural network (ANN) with statistical features as the baseline approach. The other was convolutional neural networks (CNNs) applied to spatial, spectral, and temporal features extracted from the MEG signals. Experimental results indicated the possibility of decoding imagined and spoken phrases directly from neuromagnetic signals. CNNs were found to be highly effective, with an average decoding accuracy of up to 93% for imagined and 96% for spoken phrases.
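The statistical-feature baseline mentioned above is commonly implemented as a handful of per-channel summary statistics fed into a small classifier. A sketch with typical choices (the paper's exact feature set is not reproduced; the sensor count and epoch length are placeholders):

```python
import numpy as np
from scipy.stats import skew, kurtosis

def statistical_features(epoch):
    """Per-channel summary statistics for one MEG epoch.
    epoch: (n_channels, n_samples) -> (n_channels * 5,) feature vector."""
    feats = [epoch.mean(axis=1),
             epoch.std(axis=1),
             np.median(epoch, axis=1),
             skew(epoch, axis=1),
             kurtosis(epoch, axis=1)]
    return np.concatenate(feats)

# Example: one hypothetical 204-gradiometer epoch of 500 samples
epoch = np.random.randn(204, 500)
print(statistical_features(epoch).shape)  # (1020,)
```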
Affiliation(s)
- Debadatta Dash
- Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX, United States
- Department of Neurology, Dell Medical School, University of Texas at Austin, Austin, TX, United States
- Paul Ferrari
- MEG Lab, Dell Children's Medical Center, Austin, TX, United States
- Department of Psychology, University of Texas at Austin, Austin, TX, United States
- Jun Wang
- Department of Neurology, Dell Medical School, University of Texas at Austin, Austin, TX, United States
- Department of Communication Sciences and Disorders, University of Texas at Austin, Austin, TX, United States