1
Li G, Zhong D, Zhang N, Dong J, Yan Y, Xu Q, Xu S, Yang L, Hao D, Li CSR. The inter-related effects of alcohol use severity and sleep deficiency on semantic processing in young adults. Neuroscience 2024; 555:116-124. PMID: 39059740. DOI: 10.1016/j.neuroscience.2024.07.025.
Abstract
BACKGROUND Both alcohol misuse and sleep deficiency are associated with deficits in semantic processing. However, the two conditions are frequently comorbid, and their inter-related effects on semantic processing, as well as the underlying neural mechanisms, remain to be investigated. METHODS We curated Human Connectome Project data from 973 young adults (508 women) to examine the neural correlates of semantic processing in relation to the severity of alcohol use and sleep deficiency. Alcohol use severity was quantified as the first principal component (PC1) of a principal component analysis of all drinking metrics, and sleep deficiency was assessed with the Pittsburgh Sleep Quality Index (PSQI). We employed path modeling to elucidate the interplay among clinical, behavioral, and neural variables. RESULTS Among women, we observed a significant negative correlation between left precentral gyrus (PCG) activation and PSQI scores. Mediation analysis revealed that left PCG activity fully mediated the relationship between PSQI scores and word comprehension in language tasks. Also in women alone, right middle frontal gyrus (MFG) activation exhibited a significant negative correlation with PC1. The best path model illustrated the associations among PC1, PSQI scores, PCG activity, and MFG activation during semantic processing in women. CONCLUSIONS Alcohol misuse may reduce MFG activation, while sleep deficiency may hinder semantic processing by suppressing PCG activity, in women. The pathway model underscores the influence of sleep quality and alcohol use severity on semantic processing in women and suggests that sex differences in these effects warrant further investigation.
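The two severity measures and the mediation logic described above can be illustrated in a short sketch (synthetic data only; the variable names, effect sizes, and sample size are invented for illustration and are not the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study variables: several correlated drinking
# metrics, a PSQI-like score, a brain response, and a behavioral outcome.
n = 500
drinking = rng.normal(size=(n, 4)) + rng.normal(size=(n, 1))  # shared factor
psqi = rng.normal(size=n)

# PC1 of the z-scored drinking metrics summarizes alcohol use severity.
z = (drinking - drinking.mean(0)) / drinking.std(0)
_, _, vt = np.linalg.svd(z, full_matrices=False)
pc1 = z @ vt[0]

# Simulate the mediation chain: PSQI -> PCG activity -> word comprehension.
pcg = -0.5 * psqi + rng.normal(scale=0.5, size=n)
comprehension = 0.8 * pcg + rng.normal(scale=0.5, size=n)

def ols(X, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(psqi, pcg)[1]                                     # path a: X -> M
b = ols(np.column_stack([psqi, pcg]), comprehension)[2]   # path b: M -> Y | X
c = ols(psqi, comprehension)[1]                           # total effect
indirect = a * b                                          # mediated effect

print(f"total={c:.2f} indirect={indirect:.2f} direct={c - indirect:.2f}")
```

With the simulated effects, nearly all of the total effect is carried by the indirect (mediated) path, mirroring the "full mediation" pattern the abstract reports.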
Affiliation(s)
- Guangfei Li
- Department of Biomedical Engineering, College of Chemistry and Life Science, Beijing University of Technology, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing, China
- Dandan Zhong
- Department of Biomedical Engineering, College of Chemistry and Life Science, Beijing University of Technology, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing, China
- Ning Zhang
- Department of Neuropsychiatry and Behavioral Neurology and Clinical Psychology, Sleep Center, Department of Neurology, China National Clinical Research Center of Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Jianyu Dong
- Department of Neuropsychiatry and Behavioral Neurology and Clinical Psychology, Sleep Center, Department of Neurology, China National Clinical Research Center of Neurological Diseases, Beijing Tiantan Hospital, Capital Medical University, Beijing, China
- Yan Yan
- The First Hospital of Hebei Medical University, Shijiazhuang, Hebei, China
- Qixiao Xu
- Physical Education Department, Beijing University of Technology, Beijing, China
- Shuchun Xu
- Traditional Chinese Medicine Department, the University Hospital of Beijing University of Technology, Beijing, China
- Lin Yang
- Department of Biomedical Engineering, College of Chemistry and Life Science, Beijing University of Technology, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing, China
- Dongmei Hao
- Department of Biomedical Engineering, College of Chemistry and Life Science, Beijing University of Technology, Beijing International Science and Technology Cooperation Base for Intelligent Physiological Measurement and Clinical Transformation, Beijing, China
- Chiang-Shan R Li
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA; Department of Neuroscience, Yale University School of Medicine, New Haven, CT, USA; Wu Tsai Institute, Yale University, New Haven, CT, USA
2
Yasuhara M, Uehara K, Oku T, Shiotani S, Nambu I, Furuya S. Robustness and adaptability of sensorimotor skills in expert piano performance. iScience 2024; 27:110400. PMID: 39156646. PMCID: PMC11326920. DOI: 10.1016/j.isci.2024.110400.
Abstract
Skillful sequential action requires a delicate balance in sensorimotor control between robustness and adaptability. However, it remains unknown whether both motor and neural responses triggered by sensory perturbation undergo plastic adaptation as a consequence of extensive sensorimotor experience. We assessed the effects of transiently delayed tone production on subsequent motor actions and event-related potentials (ERPs) during piano performance by comparing pianists and non-musicians. Following the perturbation, the inter-keystroke interval was abnormally prolonged in non-musicians but not in pianists. By contrast, keystroke velocity following the perturbation increased only in the pianists. A regression model demonstrated that the change in the inter-keystroke interval covaried with the ERPs, particularly at the frontal and parietal regions. The alteration in keystroke velocity was associated with the P300 component over the temporal region. These findings suggest that different neural mechanisms underlie robust and adaptive sensorimotor skills across proficiency levels.
Affiliation(s)
- Masaki Yasuhara
- Department of Science of Technology Innovation, Nagaoka University of Technology, Nagaoka 9402137, Japan
- Kazumasa Uehara
- Tokyo Research, Sony Computer Science Laboratories Inc, Tokyo 1410022, Japan
- Department of Computer Science and Engineering, Toyohashi University of Technology, Toyohashi 4418580, Japan
- Takanori Oku
- Tokyo Research, Sony Computer Science Laboratories Inc, Tokyo 1410022, Japan
- NeuroPiano Institute, Kyoto 6008086, Japan
- Sachiko Shiotani
- Tokyo Research, Sony Computer Science Laboratories Inc, Tokyo 1410022, Japan
- NeuroPiano Institute, Kyoto 6008086, Japan
- Isao Nambu
- Graduate School of Engineering, Nagaoka University of Technology, Nagaoka 9402137, Japan
- Shinichi Furuya
- Tokyo Research, Sony Computer Science Laboratories Inc, Tokyo 1410022, Japan
- NeuroPiano Institute, Kyoto 6008086, Japan
3
Zada Z, Goldstein A, Michelmann S, Simony E, Price A, Hasenfratz L, Barham E, Zadbood A, Doyle W, Friedman D, Dugan P, Melloni L, Devore S, Flinker A, Devinsky O, Nastase SA, Hasson U. A shared model-based linguistic space for transmitting our thoughts from brain to brain in natural conversations. Neuron 2024:S0896-6273(24)00460-4. PMID: 39096896. DOI: 10.1016/j.neuron.2024.06.025.
Abstract
Effective communication hinges on a mutual understanding of word meaning in different contexts. We recorded brain activity using electrocorticography during spontaneous, face-to-face conversations in five pairs of epilepsy patients. We developed a model-based coupling framework that aligns brain activity in both speaker and listener to a shared embedding space from a large language model (LLM). The context-sensitive LLM embeddings allow us to track the exchange of linguistic information, word by word, from one brain to another in natural conversations. Linguistic content emerges in the speaker's brain before word articulation and rapidly re-emerges in the listener's brain after word articulation. The contextual embeddings better capture word-by-word neural alignment between speaker and listener than syntactic and articulatory models. Our findings indicate that the contextual embeddings learned by LLMs can serve as an explicit numerical model of the shared, context-rich meaning space humans use to communicate their thoughts to one another.
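The model-based coupling idea above, aligning two brains through a shared embedding space rather than correlating raw signals, can be caricatured with synthetic data (random vectors stand in for LLM embeddings; all dimensions and noise levels are invented):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: one embedding vector per word and per-word neural
# responses for speaker and listener, driven by the same linguistic content.
n_words, dim = 400, 16
emb = rng.normal(size=(n_words, dim))
w = rng.normal(size=dim)

speaker = emb @ w + rng.normal(scale=4.0, size=n_words)
listener = emb @ w + rng.normal(scale=4.0, size=n_words)

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Model-based coupling: project the speaker's activity through the shared
# embedding space, then correlate the prediction with the listener's activity.
beta = ridge_fit(emb, speaker)
pred = emb @ beta
model_coupling = np.corrcoef(pred, listener)[0, 1]

# Direct signal correlation for comparison (no shared space).
direct = np.corrcoef(speaker, listener)[0, 1]
print(f"model-based={model_coupling:.2f} direct={direct:.2f}")
```

Because the embedding projection filters out response noise unrelated to linguistic content, the model-based estimate exceeds the direct signal correlation, which is the motivation for the framework.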
Affiliation(s)
- Zaid Zada
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
- Ariel Goldstein
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA; Department of Cognitive and Brain Sciences and Business School, Hebrew University, Jerusalem 9190501, Israel
- Sebastian Michelmann
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
- Erez Simony
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA; Faculty of Engineering, Holon Institute of Technology, Holon 5810201, Israel
- Amy Price
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
- Liat Hasenfratz
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
- Emily Barham
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
- Asieh Zadbood
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA; Department of Psychology, Columbia University, New York, NY 10027, USA
- Werner Doyle
- Grossman School of Medicine, New York University, New York, NY 10016, USA
- Daniel Friedman
- Grossman School of Medicine, New York University, New York, NY 10016, USA
- Patricia Dugan
- Grossman School of Medicine, New York University, New York, NY 10016, USA
- Lucia Melloni
- Grossman School of Medicine, New York University, New York, NY 10016, USA
- Sasha Devore
- Grossman School of Medicine, New York University, New York, NY 10016, USA
- Adeen Flinker
- Grossman School of Medicine, New York University, New York, NY 10016, USA; Tandon School of Engineering, New York University, New York, NY 10016, USA
- Orrin Devinsky
- Grossman School of Medicine, New York University, New York, NY 10016, USA
- Samuel A Nastase
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
- Uri Hasson
- Princeton Neuroscience Institute and Department of Psychology, Princeton University, Princeton, NJ 08544, USA
4
Vissani M, Bush A, Lipski WJ, Fischer P, Neudorfer C, Holt LL, Fiez JA, Turner RS, Richardson RM. Spike-phase coupling of subthalamic neurons to posterior opercular cortex predicts speech sound accuracy. bioRxiv 2024:2023.10.18.562969. PMID: 37905141. PMCID: PMC10614892. DOI: 10.1101/2023.10.18.562969.
Abstract
Speech provides a rich context for understanding how cortical interactions with the basal ganglia contribute to unique human behaviors, but opportunities for direct intracranial recordings across cortical-basal ganglia networks are rare. We recorded electrocorticographic signals from the cortex synchronously with single units in the basal ganglia during awake neurosurgeries while subjects repeated syllables. We discovered that individual subthalamic nucleus (STN) neurons have transient (~200 ms) spike-phase coupling (SPC) events with multiple cortical regions. The spike timing of STN neurons was coordinated with the phase of theta-alpha oscillations in the posterior supramarginal and superior temporal gyrus during speech planning and production. Speech sound errors occurred when this STN-cortical interaction was delayed. Our results suggest that the STN supports mechanisms of speech planning and auditory-sensorimotor integration during speech production that are required to achieve high fidelity of the phonological and articulatory representation of the target phoneme. These findings establish a framework for understanding cortical-basal ganglia interactions in other human behaviors, and additionally indicate that firing-rate-based models are insufficient to explain basal ganglia circuit behavior.
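Spike-phase coupling of the kind reported here is commonly quantified with a phase-locking value (PLV). A toy sketch with a simulated phase-modulated unit (sampling rate, oscillation frequency, and firing rates are invented):

```python
import numpy as np

rng = np.random.default_rng(2)

fs, dur = 1000, 10.0                     # Hz, seconds
t = np.arange(0, dur, 1 / fs)
freq = 6.0                               # a theta-band oscillation
phase = 2 * np.pi * freq * t             # instantaneous phase of the cortical signal

# Simulate a unit that prefers to fire near phase 0 of the oscillation,
# plus a phase-indifferent control unit at the same mean rate.
p_fire = 0.01 * (1 + np.cos(phase))      # phase-modulated firing probability
coupled_spikes = rng.random(t.size) < p_fire
uncoupled_spikes = rng.random(t.size) < 0.01

def plv(spike_mask, phase):
    """Phase-locking value: length of the mean unit phase vector at spike times."""
    ph = phase[spike_mask]
    return np.abs(np.mean(np.exp(1j * ph)))

print(f"coupled PLV={plv(coupled_spikes, phase):.2f}, "
      f"uncoupled PLV={plv(uncoupled_spikes, phase):.2f}")
```

A PLV near 1 means spikes cluster at a preferred phase; for the phase-indifferent unit the PLV shrinks toward zero as spike count grows.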
Affiliation(s)
- Matteo Vissani
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Harvard Medical School, Boston, MA, 02115, USA
- Alan Bush
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Harvard Medical School, Boston, MA, 02115, USA
- Witold J. Lipski
- Department of Neurobiology, Systems Neuroscience Center and Center for Neuroscience, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- Petra Fischer
- School of Physiology, Pharmacology & Neuroscience, University of Bristol, University Walk, BS8 1TD Bristol, United Kingdom
- Clemens Neudorfer
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Harvard Medical School, Boston, MA, 02115, USA
- Lori L. Holt
- Department of Psychology, The University of Texas at Austin, Austin, TX 78712, USA
- Julie A. Fiez
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Robert S. Turner
- Department of Neurobiology, Systems Neuroscience Center and Center for Neuroscience, University of Pittsburgh School of Medicine, Pittsburgh, PA, 15213, USA
- R. Mark Richardson
- Department of Neurosurgery, Massachusetts General Hospital, Boston, MA, 02114, USA
- Harvard Medical School, Boston, MA, 02115, USA
5
Silva AB, Littlejohn KT, Liu JR, Moses DA, Chang EF. The speech neuroprosthesis. Nat Rev Neurosci 2024; 25:473-492. PMID: 38745103. DOI: 10.1038/s41583-024-00819-9.
Abstract
Loss of speech after paralysis is devastating, but circumventing motor-pathway injury by directly decoding speech from intact cortical activity has the potential to restore natural communication and self-expression. Recent discoveries have defined how key features of speech production are facilitated by the coordinated activity of vocal-tract articulatory and motor-planning cortical representations. In this Review, we highlight such progress and how it has led to successful speech decoding, first in individuals implanted with intracranial electrodes for clinical epilepsy monitoring and subsequently in individuals with paralysis as part of early feasibility clinical trials to restore speech. We discuss high-spatiotemporal-resolution neural interfaces and the adaptation of state-of-the-art speech computational algorithms that have driven rapid and substantial progress in decoding neural activity into text, audible speech, and facial movements. Although restoring natural speech is a long-term goal, speech neuroprostheses already have performance levels that surpass communication rates offered by current assistive-communication technology. Given this accelerated rate of progress in the field, we propose key evaluation metrics for speed and accuracy, among others, to help standardize across studies. We finish by highlighting several directions to more fully explore the multidimensional feature space of speech and language, which will continue to accelerate progress towards a clinically viable speech neuroprosthesis.
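Two of the evaluation metrics the review proposes to standardize, decoding accuracy and communication speed, are typically computed as word error rate (WER) and words per minute. A minimal sketch (the example strings are invented):

```python
# WER is the word-level Levenshtein distance (substitutions, insertions,
# deletions) divided by the number of reference words.
def wer(reference: str, hypothesis: str) -> float:
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution/match
    return d[len(r)][len(h)] / len(r)

def words_per_minute(n_words: int, seconds: float) -> float:
    return 60.0 * n_words / seconds

ref = "the quick brown fox jumps over the lazy dog"
hyp = "the quick brown box jumps over lazy dog"   # one substitution, one deletion
print(f"WER={wer(ref, hyp):.2f}, WPM={words_per_minute(8, 6.0):.1f}")
```

Here the hypothesis has one substituted and one deleted word against a nine-word reference, giving WER = 2/9 ≈ 0.22.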
Affiliation(s)
- Alexander B Silva
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Jessie R Liu
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- David A Moses
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
6
Ozker M, Yu L, Dugan P, Doyle W, Friedman D, Devinsky O, Flinker A. Speech-induced suppression and vocal feedback sensitivity in human cortex. bioRxiv 2024:2023.12.08.570736. PMID: 38370843. PMCID: PMC10871232. DOI: 10.1101/2023.12.08.570736.
Abstract
Across the animal kingdom, neural responses in the auditory cortex are suppressed during vocalization, and humans are no exception. A common hypothesis is that suppression increases sensitivity to auditory feedback, enabling the detection of vocalization errors. This hypothesis has previously been confirmed in non-human primates; however, a direct link between auditory suppression and sensitivity in human speech monitoring remains elusive. To address this issue, we obtained intracranial electroencephalography (iEEG) recordings from 35 neurosurgical participants during speech production. We first characterized the detailed topography of auditory suppression, which varied across the superior temporal gyrus (STG). Next, we performed a delayed auditory feedback (DAF) task to determine whether the suppressed sites were also sensitive to auditory feedback alterations. Indeed, overlapping sites showed enhanced responses to feedback, indicating sensitivity. Importantly, there was a strong correlation between the degree of auditory suppression and feedback sensitivity, suggesting that suppression might be a key mechanism underlying speech monitoring. Further, we found that when participants produced speech with simultaneous auditory feedback, posterior STG was selectively activated if participants were engaged in a DAF paradigm, suggesting that increased attentional load can modulate auditory feedback sensitivity.
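The reported suppression-sensitivity relationship can be illustrated with a common suppression index, (listen − speak)/(listen + speak), on simulated electrodes (electrode counts, response ranges, and effect sizes are all invented):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-electrode responses (arbitrary units): listening vs.
# speaking, plus response enhancement under delayed auditory feedback (DAF).
n_elec = 60
listen = rng.uniform(1.0, 3.0, n_elec)
suppression_strength = rng.uniform(0.0, 0.8, n_elec)
speak = listen * (1 - suppression_strength)   # speaking response is reduced

# Suppression index: 0 = no suppression, -> 1 as speaking response vanishes.
si = (listen - speak) / (listen + speak)

# Simulate the reported relationship: more-suppressed sites respond more
# strongly to feedback perturbation.
daf_sensitivity = 2.0 * suppression_strength + rng.normal(scale=0.2, size=n_elec)

r = np.corrcoef(si, daf_sensitivity)[0, 1]
print(f"correlation(suppression index, feedback sensitivity) = {r:.2f}")
```

The positive correlation between the index and simulated DAF sensitivity mirrors the abstract's key finding; with real iEEG data the same index would be computed from high-gamma response amplitudes.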
Affiliation(s)
- Muge Ozker
- Neurology Department, New York University, New York, NY 10016, USA
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Leyao Yu
- Neurology Department, New York University, New York, NY 10016, USA
- Biomedical Engineering Department, New York University, Brooklyn, NY 11201, USA
- Patricia Dugan
- Neurology Department, New York University, New York, NY 10016, USA
- Werner Doyle
- Neurosurgery Department, New York University, New York, NY 10016, USA
- Daniel Friedman
- Neurology Department, New York University, New York, NY 10016, USA
- Orrin Devinsky
- Neurology Department, New York University, New York, NY 10016, USA
- Adeen Flinker
- Neurology Department, New York University, New York, NY 10016, USA
- Biomedical Engineering Department, New York University, Brooklyn, NY 11201, USA
7
Kurteff GL, Field AM, Asghar S, Tyler-Kabara EC, Clarke D, Weiner HL, Anderson AE, Watrous AJ, Buchanan RJ, Modur PN, Hamilton LS. Processing of auditory feedback in perisylvian and insular cortex. bioRxiv 2024:2024.05.14.593257. PMID: 38798574. PMCID: PMC11118286. DOI: 10.1101/2024.05.14.593257.
Abstract
When we speak, we not only make movements with our mouth, lips, and tongue, but we also hear the sound of our own voice. Thus, speech production in the brain involves not only controlling the movements we make but also processing auditory and sensory feedback. Auditory responses are typically suppressed during speech production compared to perception, but how this suppression manifests across space and time is unclear. Here we recorded intracranial EEG in seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy who performed a reading/listening task to investigate how auditory responses are modulated during speech production. We identified onset and sustained responses to speech in bilateral auditory cortex, with a selective suppression of onset responses during speech production. Onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, the posterior insula responded at sentence onset for both perception and production, suggesting a role in multisensory integration during feedback control.
Affiliation(s)
- Garret Lynn Kurteff
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Alyssa M. Field
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Saman Asghar
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Elizabeth C. Tyler-Kabara
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Dave Clarke
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Howard L. Weiner
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Anne E. Anderson
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Andrew J. Watrous
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Robert J. Buchanan
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Pradeep N. Modur
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Liberty S. Hamilton
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Lead contact
8
Puga TB, Dai HD, Wang Y, Theye E. Maternal Tobacco Use During Pregnancy and Child Neurocognitive Development. JAMA Netw Open 2024; 7:e2355952. PMID: 38349651. PMCID: PMC10865146. DOI: 10.1001/jamanetworkopen.2023.55952.
Abstract
Importance Maternal tobacco use during pregnancy (MTDP) persists across the globe. Longitudinal assessment of the association of MTDP with neurocognitive development of offspring at late childhood is limited. Objectives To examine whether MTDP is associated with child neurocognitive development at ages 9 to 12 years. Design, Setting, and Participants This cohort study included children aged 9 and 10 years at wave 1 (October 2016 to October 2018) and aged 11 to 12 years at a 2-year follow-up (wave 2, August 2018 to January 2021) across 21 US sites in the Adolescent Brain Cognitive Development (ABCD) Study. Data were analyzed from June 2022 to December 2023. Exposure MTDP. Main Outcomes and Measures Outcomes of interest were neurocognition, measured by the National Institutes of Health (NIH) Toolbox Cognition Battery, and morphometric brain measures through the region of interest (ROI) analysis from structural magnetic resonance imaging (sMRI). Results Among 11 448 children at wave 1 (mean [SD] age, 9.9 [0.6] years; 5990 [52.3%] male), 1607 children were identified with MTDP. In the NIH Toolbox Cognition Battery, children with MTDP (vs no MTDP) exhibited lower scores on the oral reading recognition (mean [SE] B = -1.2 [0.2]; P < .001), picture sequence memory (mean [SE] B = -2.3 [0.6]; P < .001), and picture vocabulary (mean [SE] B = -1.2 [0.3]; P < .001) tests and the crystallized cognition composite score (mean [SE] B = -1.3 [0.3]; P < .001) at wave 1. These differential patterns persisted at wave 2. 
In sMRI, children with MTDP (vs no MTDP) had smaller cortical areas in precentral (mean [SE] B = -104.2 [30.4] mm2; P = .001), inferior parietal (mean [SE] B = -153.9 [43.4] mm2; P < .001), and entorhinal (mean [SE] B = -25.1 [5.8] mm2; P < .001) regions and lower cortical volumes in precentral (mean [SE] B = -474.4 [98.2] mm3; P < .001), inferior parietal (mean [SE] B = -523.7 [136.7] mm3; P < .001), entorhinal (mean [SE] B = -94.1 [24.5] mm3; P < .001), and parahippocampal (mean [SE] B = -82.6 [18.7] mm3; P < .001) regions at wave 1. Distinct cortical volume patterns continued to be significant at wave 2. The frontal, parietal, and temporal lobes exhibited differential ROI effects, whereas there were no notable distinctions in the occipital lobe or insular cortex. Conclusions and Relevance In this cohort study, MTDP was associated with enduring deficits in childhood neurocognition. Continued research on the association of MTDP with cognitive performance and brain structure related to language processing skills and episodic memory is needed.
Affiliation(s)
- Troy B. Puga
- College of Public Health, University of Nebraska Medical Center, Omaha
- College of Osteopathic Medicine, Kansas City University, Kansas City, Missouri
- Yingying Wang
- Neuroimaging for Language, Literacy & Learning Laboratory, University of Nebraska at Lincoln, Lincoln
- Elijah Theye
- College of Public Health, University of Nebraska Medical Center, Omaha
9
Neef NE, Chang SE. Knowns and unknowns about the neurobiology of stuttering. PLoS Biol 2024; 22:e3002492. PMID: 38386639. PMCID: PMC10883586. DOI: 10.1371/journal.pbio.3002492.
Abstract
Stuttering occurs in early childhood during a dynamic phase of brain and behavioral development. The latest studies examining children at ages close to this critical developmental period have identified early brain alterations that are most likely linked to stuttering, while spontaneous recovery appears related to increased inter-area connectivity. By contrast, therapy-driven improvement in adults is associated with a functional reorganization within and beyond the speech network. The etiology of stuttering, however, remains enigmatic. This Unsolved Mystery highlights critical questions and points to neuroimaging findings that could inspire future research to uncover how genetics, interacting neural hierarchies, social context, and reward circuitry contribute to the many facets of stuttering.
Affiliation(s)
- Nicole E. Neef
- Institute for Diagnostic and Interventional Neuroradiology, University Medical Center Göttingen, Göttingen, Germany
- Soo-Eun Chang
- Department of Psychiatry, University of Michigan, Ann Arbor, Michigan, United States of America
- Department of Communication Disorders, Ewha Womans University, Seoul, Korea
10
Meier AM, Guenther FH. Neurocomputational modeling of speech motor development. J Child Lang 2023; 50:1318-1335. PMID: 37337871. PMCID: PMC10615680. DOI: 10.1017/S0305000923000260.
Abstract
This review describes a computational approach for modeling the development of speech motor control in infants. We address the development of two levels of control: articulation of individual speech sounds (defined here as phonemes, syllables, or words for which there is an optimized motor program) and production of sound sequences such as phrases or sentences. We describe the DIVA model of speech motor control and its application to the problem of learning individual sounds in the infant's native language. Then we describe the GODIVA model, an extension of DIVA, and how chunking of frequently produced phoneme sequences is implemented within it.
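The core DIVA-style idea sketched above, a feedforward motor program tuned across attempts by feedback-based corrections, can be caricatured in a few lines (the gains, target, and trivial one-dimensional "plant" are invented for illustration; this is not the published model):

```python
# Toy feedforward/feedback learning loop: each trial issues the current
# feedforward command, measures the error via feedback, and folds the
# feedback correction back into the feedforward command.
def learn_sound(target: float, trials: int = 30, fb_gain: float = 0.5,
                lr: float = 0.8) -> list:
    feedforward = 0.0                     # naive initial motor program
    errors = []
    for _ in range(trials):
        produced = feedforward            # toy plant: output equals command
        error = target - produced
        correction = fb_gain * error      # feedback-based correction
        feedforward += lr * correction    # tune the stored motor program
        errors.append(abs(error))
    return errors

errors = learn_sound(target=1.0)
print(f"first error={errors[0]:.2f}, last error={errors[-1]:.6f}")
```

The error shrinks geometrically (by a factor of 1 − lr × fb_gain per trial), capturing the sense in which practice shifts control from feedback correction toward an accurate feedforward command.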
Affiliation(s)
- Andrew M Meier
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215
- Frank H Guenther
- Department of Speech, Language and Hearing Sciences, Boston University, Boston, MA 02215
- Department of Biomedical Engineering, Boston University, Boston, MA 02215
11
Wang R, Chen X, Khalilian-Gourtani A, Yu L, Dugan P, Friedman D, Doyle W, Devinsky O, Wang Y, Flinker A. Distributed feedforward and feedback cortical processing supports human speech production. Proc Natl Acad Sci U S A 2023; 120:e2300255120. PMID: 37819985. PMCID: PMC10589651. DOI: 10.1073/pnas.2300255120.
Abstract
Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.
Affiliation(s)
- Ran Wang
- Electrical and Computer Engineering Department, New York University, New York, NY 11201
- Xupeng Chen
- Electrical and Computer Engineering Department, New York University, New York, NY 11201
- Leyao Yu
- Neurology Department, New York University, New York, NY 10016
- Biomedical Engineering Department, New York University, New York, NY 11201
- Patricia Dugan
- Neurology Department, New York University, New York, NY 10016
- Daniel Friedman
- Neurology Department, New York University, New York, NY 10016
- Werner Doyle
- Neurosurgery Department, New York University, New York, NY 10016
- Orrin Devinsky
- Neurology Department, New York University, New York, NY 10016
- Yao Wang
- Electrical and Computer Engineering Department, New York University, New York, NY 11201
- Biomedical Engineering Department, New York University, New York, NY 11201
- Adeen Flinker
- Neurology Department, New York University, New York, NY 10016
- Biomedical Engineering Department, New York University, New York, NY 11201
12
Kurteff GL, Lester-Smith RA, Martinez A, Currens N, Holder J, Villarreal C, Mercado VR, Truong C, Huber C, Pokharel P, Hamilton LS. Speaker-induced Suppression in EEG during a Naturalistic Reading and Listening Task. J Cogn Neurosci 2023; 35:1538-1556. [PMID: 37584593 DOI: 10.1162/jocn_a_02037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 08/17/2023]
Abstract
Speaking elicits a suppressed neural response when compared with listening to others' speech, a phenomenon known as speaker-induced suppression (SIS). Previous research has focused on investigating SIS at constrained levels of linguistic representation, such as the individual phoneme and word level. Here, we present scalp EEG data from a dual speech perception and production task where participants read sentences aloud then listened to playback of themselves reading those sentences. Playback was separated into immediate repetition of the previous trial and randomized repetition of a former trial to investigate if forward modeling of responses during passive listening suppresses the neural response. Concurrent EMG was recorded to control for movement artifact during speech production. In line with previous research, ERP analyses at the sentence level demonstrated suppression of early auditory components of the EEG for production compared with perception. To evaluate whether linguistic abstractions (in the form of phonological feature tuning) are suppressed during speech production alongside lower-level acoustic information, we fit linear encoding models that predicted scalp EEG based on phonological features, EMG activity, and task condition. We found that phonological features were encoded similarly between production and perception. However, this similarity was only observed when controlling for movement by using the EMG response as an additional regressor. Our results suggest that SIS operates at a sensory representational level and is dissociated from higher order cognitive and linguistic processing that takes place during speech perception and production. We also detail some important considerations when analyzing EEG during continuous speech production.
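The linear encoding approach described in this abstract — predicting scalp EEG from time-lagged phonological features while including EMG as a nuisance regressor — can be sketched as a minimal time-lagged ridge regression on simulated data. All names, shapes, lag counts, and the regularization value below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated inputs: feature time series (stand-ins for phonological
# features) plus one EMG nuisance regressor; sizes are arbitrary.
n_times, n_feats, n_lags = 2000, 5, 10
features = rng.standard_normal((n_times, n_feats))
emg = rng.standard_normal((n_times, 1))
X_full = np.hstack([features, emg])

# Generate a synthetic "EEG" channel from a known lagged filter so the
# recovered fit can be checked against ground truth.
true_w = rng.standard_normal((n_lags, n_feats + 1))
eeg = np.zeros(n_times)
for lag in range(n_lags):
    eeg[lag:] += X_full[: n_times - lag] @ true_w[lag]
eeg += 0.1 * rng.standard_normal(n_times)

def lagged_design(X, n_lags):
    """Expand X into a time-lagged design matrix (TRF-style)."""
    n, d = X.shape
    out = np.zeros((n, n_lags * d))
    for lag in range(n_lags):
        out[lag:, lag * d : (lag + 1) * d] = X[: n - lag]
    return out

X = lagged_design(X_full, n_lags)

# Ridge regression in closed form: w = (X'X + alpha*I)^-1 X'y.
alpha = 1.0
w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)

pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
print(f"encoding-model fit r = {r:.3f}")
```

Comparing such fits with and without the EMG column is one simple way to see, as the abstract notes, how movement artifact can masquerade as feature encoding during overt production.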
13
Meng K, Goodarzy F, Kim E, Park YJ, Kim JS, Cook MJ, Chung CK, Grayden DB. Continuous synthesis of artificial speech sounds from human cortical surface recordings during silent speech production. J Neural Eng 2023; 20:046019. [PMID: 37459853 DOI: 10.1088/1741-2552/ace7f6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Accepted: 07/17/2023] [Indexed: 07/28/2023]
Abstract
Objective. Brain-computer interfaces can restore various forms of communication in paralyzed patients who have lost their ability to articulate intelligible speech. This study aimed to demonstrate the feasibility of closed-loop synthesis of artificial speech sounds from human cortical surface recordings during silent speech production. Approach. Ten participants with intractable epilepsy were temporarily implanted with intracranial electrode arrays over cortical surfaces. A decoding model that predicted audible outputs directly from patient-specific neural feature inputs was trained during overt word reading and immediately tested with overt, mimed and imagined word reading. Predicted outputs were later assessed objectively against corresponding voice recordings and subjectively through human perceptual judgments. Main results. Artificial speech sounds were successfully synthesized during overt and mimed utterances by two participants with some coverage of the precentral gyrus. About a third of these sounds were correctly identified by naïve listeners in two-alternative forced-choice tasks. A similar outcome could not be achieved during imagined utterances by any of the participants. However, neural feature contribution analyses suggested the presence of exploitable activation patterns during imagined speech in the postcentral gyrus and the superior temporal gyrus. In future work, a more comprehensive coverage of cortical surfaces, including posterior parts of the middle frontal gyrus and the inferior frontal gyrus, could improve synthesis performance during imagined speech. Significance. As the field of speech neuroprostheses is rapidly moving toward clinical trials, this study addressed important considerations about task instructions and brain coverage when conducting research on silent speech with non-target participants.
Affiliation(s)
- Kevin Meng
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Farhad Goodarzy
- Department of Medicine, St Vincent's Hospital, The University of Melbourne, Melbourne, Australia
- EuiYoung Kim
- Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea
- Ye Jin Park
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea
- June Sic Kim
- Research Institute of Basic Sciences, Seoul National University, Seoul, Republic of Korea
- Mark J Cook
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Department of Medicine, St Vincent's Hospital, The University of Melbourne, Melbourne, Australia
- Chun Kee Chung
- Department of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea
- Department of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea
- David B Grayden
- Department of Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Graeme Clark Institute for Biomedical Engineering, The University of Melbourne, Melbourne, Australia
- Department of Medicine, St Vincent's Hospital, The University of Melbourne, Melbourne, Australia
14
Abbasi O, Steingräber N, Chalas N, Kluger DS, Gross J. Spatiotemporal dynamics characterise spectral connectivity profiles of continuous speaking and listening. PLoS Biol 2023; 21:e3002178. [PMID: 37478152 DOI: 10.1371/journal.pbio.3002178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Accepted: 05/31/2023] [Indexed: 07/23/2023] Open
Abstract
Speech production and perception are fundamental processes of human cognition that both rely on intricate processing mechanisms that are still poorly understood. Here, we study these processes by using magnetoencephalography (MEG) to comprehensively map connectivity of regional brain activity within the brain and to the speech envelope during continuous speaking and listening. Our results reveal not only a partly shared neural substrate for both processes but also a dissociation in space, delay, and frequency. Neural activity in motor and frontal areas is coupled to succeeding speech in delta band (1 to 3 Hz), whereas coupling in the theta range follows speech in temporal areas during speaking. Neural connectivity results showed a separation of bottom-up and top-down signalling in distinct frequency bands during speaking. Here, we show that frequency-specific connectivity channels for bottom-up and top-down signalling support continuous speaking and listening. These findings further shed light on the complex interplay between different brain regions involved in speech production and perception.
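The delay-specific coupling this abstract reports (neural activity in motor and frontal areas leading the speech envelope) can be illustrated with a toy lagged-correlation analysis on simulated signals. The sampling rate, the 100 ms lead, and the envelope construction below are assumptions for the sketch, not the paper's MEG coherence pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 200                      # Hz, illustrative sampling rate
n = fs * 60                   # one minute of toy data

# Crude stand-in for a speech amplitude envelope: smoothed noise.
env = np.convolve(rng.standard_normal(n), np.ones(20) / 20, mode="same")

# Simulated "neural" signal that leads speech by 100 ms, mimicking
# coupling of motor activity to succeeding speech.
delay = int(0.1 * fs)         # 20 samples = 100 ms
neural = np.roll(env, -delay) + 0.05 * rng.standard_normal(n)

def lag_corr(x, y, lag):
    """Correlate x[t] with y[t + lag] (positive lag: y follows x)."""
    if lag >= 0:
        a, b = x[: n - lag], y[lag:]
    else:
        a, b = x[-lag:], y[: n + lag]
    return np.corrcoef(a, b)[0, 1]

lags = np.arange(-40, 41)
corrs = [lag_corr(neural, env, int(l)) for l in lags]
best_lag = int(lags[int(np.argmax(corrs))])
print(f"peak coupling at {best_lag / fs * 1000:.0f} ms")
```

The actual study resolves this coupling per frequency band (delta vs. theta) rather than in the time domain, but the sign of the recovered lag is the same diagnostic: positive when neural activity precedes the envelope, negative when it follows.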
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nadine Steingräber
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
15
Hickok G, Venezia J, Teghipco A. Beyond Broca: neural architecture and evolution of a dual motor speech coordination system. Brain 2023; 146:1775-1790. [PMID: 36746488 PMCID: PMC10411947 DOI: 10.1093/brain/awac454] [Citation(s) in RCA: 18] [Impact Index Per Article: 18.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2022] [Revised: 11/04/2022] [Accepted: 11/19/2022] [Indexed: 02/08/2023] Open
Abstract
Classical neural architecture models of speech production propose a single system centred on Broca's area coordinating all the vocal articulators from lips to larynx. Modern evidence has challenged both the idea that Broca's area is involved in motor speech coordination and that there is only one coordination network. Drawing on a wide range of evidence, here we propose a dual speech coordination model in which laryngeal control of pitch-related aspects of prosody and song are coordinated by a hierarchically organized dorsolateral system while supralaryngeal articulation at the phonetic/syllabic level is coordinated by a more ventral system posterior to Broca's area. We argue further that these two speech production subsystems have distinguishable evolutionary histories and discuss the implications for models of language evolution.
Affiliation(s)
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, CA 92697, USA
- Department of Language Science, University of California, Irvine, CA 92697, USA
- Jonathan Venezia
- Auditory Research Laboratory, VA Loma Linda Healthcare System, Loma Linda, CA 92357, USA
- Department of Otolaryngology—Head and Neck Surgery, Loma Linda University School of Medicine, Loma Linda, CA 92350, USA
- Alex Teghipco
- Department of Psychology, University of South Carolina, Columbia, SC 29208, USA
16
Silva AB, Liu JR, Zhao L, Levy DF, Scott TL, Chang EF. A Neurosurgical Functional Dissection of the Middle Precentral Gyrus during Speech Production. J Neurosci 2022; 42:8416-8426. [PMID: 36351829 PMCID: PMC9665919 DOI: 10.1523/jneurosci.1614-22.2022] [Citation(s) in RCA: 27] [Impact Index Per Article: 13.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 08/30/2022] [Indexed: 11/17/2022] Open
Abstract
Classical models have traditionally focused on the left posterior inferior frontal gyrus (Broca's area) as a key region for motor planning of speech production. However, converging evidence suggests that it is not critical for either speech motor planning or execution. Alternative cortical areas supporting high-level speech motor planning have yet to be defined. In this review, we focus on the precentral gyrus, whose role in speech production is often thought to be limited to lower-level articulatory muscle control. In particular, we highlight neurosurgical investigations that have shed light on a cortical region anatomically located near the midpoint of the precentral gyrus, hence called the middle precentral gyrus (midPrCG). The midPrCG is functionally located between dorsal hand and ventral orofacial cortical representations and exhibits unique sensorimotor and multisensory functions relevant for speech processing. This includes motor control of the larynx, auditory processing, as well as a role in reading and writing. Furthermore, direct electrical stimulation of midPrCG can evoke complex movements, such as vocalization, and selective injury can cause deficits in verbal fluency, such as pure apraxia of speech. Based on these findings, we propose that midPrCG is essential to phonological-motoric aspects of speech production, especially syllabic-level speech sequencing, a role traditionally ascribed to Broca's area. The midPrCG is a cortical brain area that should be included in contemporary models of speech production with a unique role in speech motor planning and execution.
Affiliation(s)
- Alexander B Silva
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Medical Scientist Training Program, University of California, San Francisco, California, 94158
- Graduate Program in Bioengineering, University of California, Berkeley, California 94720, & University of California, San Francisco, California, 94158
- Jessie R Liu
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Graduate Program in Bioengineering, University of California, Berkeley, California 94720, & University of California, San Francisco, California, 94158
- Lingyun Zhao
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Deborah F Levy
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Terri L Scott
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Graduate Program in Bioengineering, University of California, Berkeley, California 94720, & University of California, San Francisco, California, 94158
17
Lin K, Jie B, Dong P, Ding X, Bian W, Liu M. Convolutional Recurrent Neural Network for Dynamic Functional MRI Analysis and Brain Disease Identification. Front Neurosci 2022; 16:933660. [PMID: 35873806 PMCID: PMC9298744 DOI: 10.3389/fnins.2022.933660] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2022] [Accepted: 06/13/2022] [Indexed: 12/12/2022] Open
Abstract
Dynamic functional connectivity (dFC) networks derived from resting-state functional magnetic resonance imaging (rs-fMRI) help us understand fundamental dynamic characteristics of human brains, thereby providing an efficient solution for automated identification of brain diseases, such as Alzheimer's disease (AD) and its prodromal stage. Existing studies have applied deep learning methods to dFC network analysis and achieved good performance compared with traditional machine learning methods. However, they seldom take advantage of sequential information conveyed in dFC networks that could be informative to improve the diagnosis performance. In this paper, we propose a convolutional recurrent neural network (CRNN) for automated brain disease classification with rs-fMRI data. Specifically, we first construct dFC networks from rs-fMRI data using a sliding window strategy. Then, we employ three convolutional layers and long short-term memory (LSTM) layer to extract high-level features of dFC networks and also preserve the sequential information of extracted features, followed by three fully connected layers for brain disease classification. Experimental results on 174 subjects with 563 rs-fMRI scans from the Alzheimer's Disease Neuroimaging Initiative (ADNI) demonstrate the effectiveness of our proposed method in binary and multi-category classification tasks.
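The first step of the pipeline this abstract describes — constructing dFC networks from rs-fMRI with a sliding window — can be sketched in a few lines. The window length, stride, and data shapes below are illustrative; the paper's settings may differ:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy rs-fMRI data: n_tp time points from n_rois regions of interest.
n_tp, n_rois = 120, 6
ts = rng.standard_normal((n_tp, n_rois))

# Sliding-window dFC: one ROI-by-ROI correlation matrix per window,
# yielding the matrix sequence a recurrent model (e.g., the CRNN's
# LSTM stage) can consume to exploit temporal ordering.
win, stride = 30, 10
starts = range(0, n_tp - win + 1, stride)
dfc = np.stack([np.corrcoef(ts[s : s + win].T) for s in starts])

print(dfc.shape)  # (n_windows, n_rois, n_rois)
```

Preserving the window order in `dfc` is exactly the sequential information the authors argue earlier methods discard when windows are treated as an unordered set.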
Affiliation(s)
- Kai Lin
- School of Computer and Information, Anhui Normal University, Wuhu, China
- Biao Jie
- School of Computer and Information, Anhui Normal University, Wuhu, China
- Peng Dong
- School of Computer and Information, Anhui Normal University, Wuhu, China
- Xintao Ding
- School of Computer and Information, Anhui Normal University, Wuhu, China
- Weixin Bian
- School of Computer and Information, Anhui Normal University, Wuhu, China
- Mingxia Liu
- Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
18
Mercier MR, Dubarry AS, Tadel F, Avanzini P, Axmacher N, Cellier D, Vecchio MD, Hamilton LS, Hermes D, Kahana MJ, Knight RT, Llorens A, Megevand P, Melloni L, Miller KJ, Piai V, Puce A, Ramsey NF, Schwiedrzik CM, Smith SE, Stolk A, Swann NC, Vansteensel MJ, Voytek B, Wang L, Lachaux JP, Oostenveld R. Advances in human intracranial electroencephalography research, guidelines and good practices. Neuroimage 2022; 260:119438. [PMID: 35792291 DOI: 10.1016/j.neuroimage.2022.119438] [Citation(s) in RCA: 47] [Impact Index Per Article: 23.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 05/23/2022] [Accepted: 06/30/2022] [Indexed: 12/11/2022] Open
Abstract
Since the second half of the twentieth century, intracranial electroencephalography (iEEG), including both electrocorticography (ECoG) and stereo-electroencephalography (sEEG), has provided an intimate view into the human brain. At the interface between fundamental research and the clinic, iEEG provides both high temporal resolution and high spatial specificity but comes with constraints, such as each individual's tailored, sparse electrode sampling. Over the years, researchers in neuroscience have developed their practices to make the most of the iEEG approach. Here we offer a critical review of iEEG research practices in a didactic framework for newcomers, as well as addressing issues encountered by proficient researchers. The scope is threefold: (i) review common practices in iEEG research, (ii) suggest potential guidelines for working with iEEG data and answer frequently asked questions based on the most widespread practices, and (iii) based on current neurophysiological knowledge and methodologies, pave the way to good practice standards in iEEG research. The organization of this paper follows the steps of iEEG data processing. The first section contextualizes iEEG data collection. The second section focuses on localization of intracranial electrodes. The third section highlights the main pre-processing steps. The fourth section presents iEEG signal analysis methods. The fifth section discusses statistical approaches. The sixth section draws some unique perspectives on iEEG research. Finally, to ensure a consistent nomenclature throughout the manuscript and to align with other guidelines, e.g., Brain Imaging Data Structure (BIDS) and the OHBM Committee on Best Practices in Data Analysis and Sharing (COBIDAS), we provide a glossary to disambiguate terms related to iEEG research.
19
Callan A, Callan DE. Understanding how the human brain tracks emitted speech sounds to execute fluent speech production. PLoS Biol 2022; 20:e3001533. [PMID: 35120143 PMCID: PMC8815871 DOI: 10.1371/journal.pbio.3001533] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022] Open
Abstract
Auditory feedback of one's own speech is used to monitor and adaptively control fluent speech production. A new study in PLOS Biology using electrocorticography (ECoG) in listeners whose speech was artificially delayed identifies regions involved in monitoring speech production.
Affiliation(s)
- Akiko Callan
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka, Japan
- Daniel E. Callan
- Neural Information Analysis Laboratories, Advanced Telecommunications Research Institute International, Kyoto, Japan