1
Silva AB, Littlejohn KT, Liu JR, Moses DA, Chang EF. The speech neuroprosthesis. Nat Rev Neurosci 2024; 25:473-492. PMID: 38745103. DOI: 10.1038/s41583-024-00819-9.
Abstract
Loss of speech after paralysis is devastating, but circumventing motor-pathway injury by directly decoding speech from intact cortical activity has the potential to restore natural communication and self-expression. Recent discoveries have defined how key features of speech production are facilitated by the coordinated activity of vocal-tract articulatory and motor-planning cortical representations. In this Review, we highlight such progress and how it has led to successful speech decoding, first in individuals implanted with intracranial electrodes for clinical epilepsy monitoring and subsequently in individuals with paralysis as part of early feasibility clinical trials to restore speech. We discuss high-spatiotemporal-resolution neural interfaces and the adaptation of state-of-the-art speech computational algorithms that have driven rapid and substantial progress in decoding neural activity into text, audible speech, and facial movements. Although restoring natural speech is a long-term goal, speech neuroprostheses already have performance levels that surpass communication rates offered by current assistive-communication technology. Given this accelerated rate of progress in the field, we propose key evaluation metrics for speed and accuracy, among others, to help standardize across studies. We finish by highlighting several directions to more fully explore the multidimensional feature space of speech and language, which will continue to accelerate progress towards a clinically viable speech neuroprosthesis.
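The Review's call for standardized speed and accuracy metrics maps onto two quantities routinely reported for speech decoders: word error rate (accuracy) and words per minute (speed). A minimal sketch of both (function names are illustrative, not from the paper):

```python
# Hedged sketch: two evaluation metrics commonly reported for speech
# neuroprostheses. Function names are illustrative, not from the paper.

def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with the standard Levenshtein edit distance over words."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between first i ref words and first j hyp words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

def words_per_minute(n_words: int, seconds: float) -> float:
    """Decoding speed: words produced per minute of attempted speech."""
    return 60.0 * n_words / seconds
```

Standardizing on metrics like these is what allows decoding studies with different tasks and vocabularies to be compared at all.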
Affiliation(s)
- Alexander B Silva
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Kaylo T Littlejohn
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, Berkeley, CA, USA
- Jessie R Liu
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- David A Moses
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA
- Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA
2
Castellucci GA, Kovach CK, Tabasi F, Christianson D, Greenlee JDW, Long MA. Stimulation of caudal inferior and middle frontal gyri disrupts planning during spoken interaction. Curr Biol 2024; 34:2719-2727.e5. PMID: 38823382. PMCID: PMC11187660. DOI: 10.1016/j.cub.2024.04.080.
Abstract
Turn-taking is a central feature of conversation across languages and cultures [1-4]. This key social behavior requires numerous sensorimotor and cognitive operations [1,5,6] that can be organized into three general phases: comprehension of a partner's turn, preparation of a speaker's own turn, and execution of that turn. Using intracranial electrocorticography, we recently demonstrated that neural activity related to these phases is functionally distinct during turn-taking [7]. In particular, networks active during the perceptual and articulatory stages of turn-taking consisted of structures known to be important for speech-related sensory and motor processing [8-17], while putative planning dynamics were most regularly observed in the caudal inferior frontal gyrus (cIFG) and the caudal middle frontal gyrus (cMFG). To test whether these structures are necessary for planning during spoken interaction, we used direct electrical stimulation (DES) to transiently perturb cortical function in neurosurgical patient-volunteers performing a question-answer task [7,18,19]. We found that stimulating the cIFG and cMFG led to various response errors [9,13,20,21] but not gross articulatory deficits, which instead resulted from DES of structures involved in motor control [8,13,20,22] (e.g., the precentral gyrus). Furthermore, perturbation of the cIFG and cMFG delayed inter-speaker timing, consistent with slowed planning, while faster responses could result from stimulation of sites located in other areas. Taken together, our findings suggest that the cIFG and cMFG contain critical preparatory circuits that are relevant for interactive language use.
Affiliation(s)
- Gregg A Castellucci
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
- Christopher K Kovach
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Farhad Tabasi
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- David Christianson
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Jeremy D W Greenlee
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Iowa Neuroscience Institute, Iowa City, IA 52242, USA
- Michael A Long
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY 10016, USA; Center for Neural Science, New York University, New York, NY 10003, USA
3
Zhou X, Wong PCM. Hyperscanning to explore social interaction among autistic minds. Neurosci Biobehav Rev 2024; 163:105773. PMID: 38889594. DOI: 10.1016/j.neubiorev.2024.105773.
Abstract
Hyperscanning, the monitoring of brain activity of two or more people simultaneously, has emerged as a popular tool for assessing neural features of social interaction. This perspective article focuses on hyperscanning studies that use functional near-infrared spectroscopy (fNIRS), a technique well suited to studies requiring naturalistic paradigms. In particular, we are interested in neural features related to social interaction deficits among individuals with autism spectrum disorder (ASD), a population that has received relatively little attention in neuroimaging hyperscanning research compared with neurotypical individuals. The article is organized as follows. First, we summarize findings about brain-behavior connections related to autism from previously published fNIRS hyperscanning studies. Then, we propose a preliminary theoretical framework of inter-brain coherence (IBC) with testable hypotheses concerning this population. Finally, we provide two examples of areas of inquiry in which studies could be particularly relevant for the social-emotional and behavioral development of autistic children, focusing on intergenerational relationships in family units and learning in mainstream classroom settings.
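As a rough illustration of what an inter-brain coherence (IBC) measure captures, the sketch below band-limits two participants' signals and correlates them. This is a simplified stand-in: published fNIRS hyperscanning work typically uses wavelet transform coherence, and every name and parameter here is hypothetical.

```python
import numpy as np

# Hedged sketch: a minimal proxy for inter-brain coherence between two fNIRS
# channels. Band-limit each signal with an FFT mask, then take the Pearson
# correlation of the filtered traces. Illustration only; real studies
# typically use wavelet transform coherence.

def bandpass_fft(x, fs, lo, hi):
    """Zero out FFT coefficients outside [lo, hi] Hz and invert."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec = np.fft.rfft(x)
    spec[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def ibc_proxy(sig_a, sig_b, fs=10.0, lo=0.02, hi=0.2):
    """Band-limited correlation between two participants' signals.
    Default band roughly matches slow hemodynamic fluctuations."""
    a = bandpass_fft(np.asarray(sig_a, float), fs, lo, hi)
    b = bandpass_fft(np.asarray(sig_b, float), fs, lo, hi)
    return float(np.corrcoef(a, b)[0, 1])
```

Two participants whose slow hemodynamics co-fluctuate would score near 1 on this proxy; unrelated signals hover near 0.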
Affiliation(s)
- Xin Zhou
- Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
- Patrick C M Wong
- Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China; Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong Special Administrative Region of China
4
Feng J, Lv M, Ma X, Li T, Xu M, Yang J, Su F, Hu R, Li J, Qiu Y, Liu Y, Shen Y, Xu W. Change of function and brain activity in patients of right spastic arm paralysis combined with aphasia after contralateral cervical seventh nerve transfer surgery. Eur J Neurosci 2024. PMID: 38830753. DOI: 10.1111/ejn.16436.
Abstract
Left hemisphere injury can cause right spastic arm paralysis and aphasia, and recovery of motor and language functions shares similar compensatory mechanisms and processes. Contralateral cervical seventh nerve cross transfer (CC7) surgery can promote motor recovery in spastic arm paralysis by triggering interhemispheric plasticity, and self-reports from patients indicate spontaneous improvement in language function, although this remains to be verified. To explore improvements in motor and language function after CC7 surgery, we performed this prospective observational cohort study. The Upper Extremity part of the Fugl-Meyer scale (UEFM) and the Modified Ashworth Scale were used to evaluate motor function, and the Aphasia Quotient of the Mandarin version of the Western Aphasia Battery (WAB-AQ; a larger score indicates better language function) was used to assess language function. In the 20 patients included, average UEFM scores increased by 0.40 and 3.70 points from baseline to 1 week and 6 months post-surgery, respectively. Spasticity of the elbow and fingers decreased significantly at 1 week post-surgery, although it partially recurred at the 6-month follow-up. Average WAB-AQ scores increased by 9.14 and 10.69 points at 1 week and 6 months post-surgery, respectively (P < 0.001 for both). Post-surgical fMRI scans revealed increased activity in bilateral hemispheric regions related to language, including the right precentral cortex and right gyrus rectus. These findings suggest that CC7 surgery not only enhances motor function but may also improve the aphasia quotient in patients with right arm paralysis and aphasia due to left hemisphere injury.
Affiliation(s)
- Juntao Feng
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Minzhi Lv
- Department of Biostatistics, School of Public Health, Fudan University, Shanghai, China
- Xingyi Ma
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Tie Li
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Miaomiao Xu
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Jingrui Yang
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Fan Su
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Ruiping Hu
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Jie Li
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Yanqun Qiu
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Ying Liu
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Yundong Shen
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Institute of Brain Science, State Key Laboratory of Medical Neurobiology and Collaborative Innovation Center for Brain Science, Fudan University, Shanghai, China
- Wendong Xu
- Department of Hand Surgery, Department of Rehabilitation, Jing'an District Central Hospital, branch of Huashan Hospital, the National Clinical Research Center for Aging and Medicine, Fudan University, Shanghai, China
- Department of Biostatistics, School of Public Health, Fudan University, Shanghai, China
- Research Unit of Synergistic Reconstruction of Upper and Lower Limbs After Brain Injury, Chinese Academy of Medical Sciences, Shanghai, China
5
Cai J, Hadjinicolaou AE, Paulk AC, Soper DJ, Xia T, Williams ZM, Cash SS. Natural language processing models reveal neural dynamics of human conversation. bioRxiv [Preprint] 2024:2023.03.10.531095. PMID: 36945468. PMCID: PMC10028965. DOI: 10.1101/2023.03.10.531095.
Abstract
Through conversation, humans relay complex information via the alternation of speech production and comprehension. The neural mechanisms that underlie these complementary processes, or through which information is precisely conveyed by language, remain poorly understood. Here, we used pretrained deep learning natural language processing models in combination with intracranial neuronal recordings to discover neural signals that reliably reflect speech production, comprehension, and their transitions during natural conversation between individuals. Our findings indicate that neural activity encoding linguistic information was broadly distributed throughout frontotemporal areas across multiple frequency bands. This activity was specific to the words and sentences being conveyed and depended on each word's context and order. Finally, we demonstrate that these neural patterns partially overlapped during language production and comprehension and that listener-speaker transitions were associated with specific, time-aligned changes in neural activity. Collectively, our findings reveal a dynamical organization of the neural activity that subserves language production and comprehension during natural conversation, and demonstrate the utility of deep learning models for understanding the neural mechanisms underlying human language.
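The encoding-model recipe implied here, relating pretrained-model representations to neural recordings, commonly reduces to regularized linear regression from embeddings to activity. A hedged sketch with simulated data standing in for both the embeddings and the recordings (in practice, embeddings would come from a model such as GPT-2 via its hidden states):

```python
import numpy as np

# Hedged sketch of a linear encoding model: regress one electrode's activity
# onto per-word embedding vectors. All data here are simulated.

def fit_ridge(X, y, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
n_words, dim = 500, 16
X = rng.standard_normal((n_words, dim))              # stand-in word embeddings
w_true = rng.standard_normal(dim)                    # ground-truth weights
y = X @ w_true + 0.1 * rng.standard_normal(n_words)  # one electrode's response
w_hat = fit_ridge(X, y, alpha=1.0)
# Encoding performance: correlation between predicted and observed activity
r = np.corrcoef(X @ w_hat, y)[0, 1]
```

In real analyses the correlation is computed on held-out words, and its magnitude across electrodes maps where linguistic information is encoded.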
Affiliation(s)
- Jing Cai
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Alex E. Hadjinicolaou
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Angelique C. Paulk
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Daniel J. Soper
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Tian Xia
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Ziv M. Williams
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Harvard-MIT Division of Health Sciences and Technology, Boston, MA
- Harvard Medical School, Program in Neuroscience, Boston, MA
- These authors contributed equally
- Sydney S. Cash
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA
- Harvard-MIT Division of Health Sciences and Technology, Boston, MA
- These authors contributed equally
6
Banerjee A, Chen F, Druckmann S, Long MA. Temporal scaling of motor cortical dynamics reveals hierarchical control of vocal production. Nat Neurosci 2024; 27:527-535. PMID: 38291282. DOI: 10.1038/s41593-023-01556-5.
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, the male Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (~100 ms), probably representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (~10 s). Using computational modeling, we demonstrate that such temporal scaling, acting through downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
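Temporal scaling of the kind described, a premotor pattern that stretches or compresses with song duration, can be illustrated by resampling one canonical rate template to different lengths. A toy sketch, not the authors' model:

```python
import numpy as np

# Hedged sketch: "temporal scaling" of a single premotor firing-rate template.
# One canonical profile is linearly resampled onto songs of different
# durations, so its shape is preserved while its timescale changes.

def scale_template(template, n_out):
    """Linearly interpolate a rate template onto n_out time bins."""
    t_in = np.linspace(0.0, 1.0, len(template))
    t_out = np.linspace(0.0, 1.0, n_out)
    return np.interp(t_out, t_in, template)

template = np.sin(np.linspace(0, np.pi, 100))  # canonical firing-rate profile
short_song = scale_template(template, 50)      # compressed (shorter song)
long_song = scale_template(template, 200)      # stretched (longer song)
```

Under this account, a neuron's peak stays at the same *fraction* of the song regardless of absolute duration, which is the signature the study looks for.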
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY, USA
- Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA, USA
- Shaul Druckmann
- Department of Neurobiology, Stanford University, Stanford, CA, USA
- Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY, USA
- Center for Neural Science, New York University, New York, NY, USA
7
Lorca-Puls DL, Gajardo-Vidal A, Mandelli ML, Illán-Gala I, Ezzes Z, Wauters LD, Battistella G, Bogley R, Ratnasiri B, Licata AE, Battista P, García AM, Tee BL, Lukic S, Boxer AL, Rosen HJ, Seeley WW, Grinberg LT, Spina S, Miller BL, Miller ZA, Henry ML, Dronkers NF, Gorno-Tempini ML. Neural basis of speech and grammar symptoms in non-fluent variant primary progressive aphasia spectrum. Brain 2024; 147:607-626. PMID: 37769652. PMCID: PMC10834255. DOI: 10.1093/brain/awad327.
Abstract
The non-fluent/agrammatic variant of primary progressive aphasia (nfvPPA) is a neurodegenerative syndrome primarily defined by the presence of apraxia of speech (AoS) and/or expressive agrammatism. In addition, many patients exhibit dysarthria and/or receptive agrammatism. This leads to substantial phenotypic variation within the speech-language domain across individuals and time, in terms of both the specific combination of symptoms as well as their severity. How to resolve such phenotypic heterogeneity in nfvPPA is a matter of debate. 'Splitting' views propose separate clinical entities: 'primary progressive apraxia of speech' when AoS occurs in the absence of expressive agrammatism, 'progressive agrammatic aphasia' (PAA) in the opposite case, and 'AOS + PAA' when mixed motor speech and language symptoms are clearly present. While therapeutic interventions typically vary depending on the predominant symptom (e.g. AoS versus expressive agrammatism), the existence of behavioural, anatomical and pathological overlap across these phenotypes argues against drawing such clear-cut boundaries. In the current study, we contribute to this debate by mapping behaviour to brain in a large, prospective cohort of well characterized patients with nfvPPA (n = 104). We sought to advance scientific understanding of nfvPPA and the neural basis of speech-language by uncovering where in the brain the degree of MRI-based atrophy is associated with inter-patient variability in the presence and severity of AoS, dysarthria, expressive agrammatism or receptive agrammatism. Our cross-sectional examination of brain-behaviour relationships revealed three main observations. First, we found that the neural correlates of AoS and expressive agrammatism in nfvPPA lie side by side in the left posterior inferior frontal lobe, explaining their behavioural dissociation/association in previous reports. 
Second, we identified a 'left-right' and 'ventral-dorsal' neuroanatomical distinction between AoS versus dysarthria, highlighting (i) that dysarthria, but not AoS, is significantly influenced by tissue loss in right-hemisphere motor-speech regions; and (ii) that, within the left hemisphere, dysarthria and AoS map onto dorsally versus ventrally located motor-speech regions, respectively. Third, we confirmed that, within the large-scale grammar network, left frontal tissue loss is preferentially involved in expressive agrammatism and left temporal tissue loss in receptive agrammatism. Our findings thus contribute to define the function and location of the epicentres within the large-scale neural networks vulnerable to neurodegenerative changes in nfvPPA. We propose that nfvPPA be redefined as an umbrella term subsuming a spectrum of speech and/or language phenotypes that are closely linked by the underlying neuroanatomy and neuropathology.
Affiliation(s)
- Diego L Lorca-Puls
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Sección de Neurología, Departamento de Especialidades, Facultad de Medicina, Universidad de Concepción, Concepción, 4070105, Chile
- Andrea Gajardo-Vidal
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Centro de Investigación en Complejidad Social (CICS), Facultad de Gobierno, Universidad del Desarrollo, Santiago, 7590943, Chile
- Dirección de Investigación y Doctorados, Vicerrectoría de Investigación y Doctorados, Universidad del Desarrollo, Concepción, 4070001, Chile
- Maria Luisa Mandelli
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Ignacio Illán-Gala
- Sant Pau Memory Unit, Department of Neurology, Biomedical Research Institute Sant Pau, Hospital de la Santa Creu i Sant Pau, Universitat Autònoma de Barcelona, Barcelona, 08025, Spain
- Centro de Investigación Biomédica en Red de Enfermedades Neurodegenerativas (CIBERNED), Madrid, 28029, Spain
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Zoe Ezzes
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Lisa D Wauters
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Speech, Language and Hearing Sciences, University of Texas, Austin, TX 78712-0114, USA
- Giovanni Battistella
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Otolaryngology, Head and Neck Surgery, Massachusetts Eye and Ear and Harvard Medical School, Boston, MA 02114, USA
- Rian Bogley
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Buddhika Ratnasiri
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Abigail E Licata
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Petronilla Battista
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Laboratory of Neuropsychology, Istituti Clinici Scientifici Maugeri IRCCS, Bari, 70124, Italy
- Adolfo M García
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Centro de Neurociencias Cognitivas, Universidad de San Andrés, Buenos Aires, B1644BID, Argentina
- Departamento de Lingüística y Literatura, Facultad de Humanidades, Universidad de Santiago de Chile, Santiago, 9160000, Chile
- Boon Lead Tee
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Sladjana Lukic
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Communication Sciences and Disorders, Ruth S. Ammon College of Education and Health Sciences, Adelphi University, Garden City, NY 11530-0701, USA
- Adam L Boxer
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Howard J Rosen
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- William W Seeley
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Department of Pathology, University of California San Francisco, San Francisco, CA 94143, USA
- Lea T Grinberg
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Department of Pathology, University of California San Francisco, San Francisco, CA 94143, USA
- Salvatore Spina
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Bruce L Miller
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Global Brain Health Institute, University of California, San Francisco, CA 94143, USA
- Zachary A Miller
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
- Maya L Henry
- Department of Speech, Language and Hearing Sciences, University of Texas, Austin, TX 78712-0114, USA
- Department of Neurology, Dell Medical School, University of Texas, Austin, TX 78712, USA
- Nina F Dronkers
- Department of Psychology, University of California, Berkeley, CA 94720, USA
- Department of Neurology, University of California, Davis, CA 95817, USA
- Maria Luisa Gorno-Tempini
- Memory and Aging Center, Department of Neurology, UCSF Weill Institute for Neurosciences, University of California, San Francisco, CA 94158, USA
8
Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. PMID: 38151889. DOI: 10.1111/ejn.16221.
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, owing to its role in transmitting information during communication. Speech is an inherently dynamic signal, and a recent line of research has focused on neural activity that follows its temporal structure. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow these dynamics to be compared with the temporal properties of human speech. We highlight properties and constraints shared by neural and speech dynamics, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.
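A common first-pass measure of neural tracking is the lag at which a neural signal best correlates with the speech amplitude envelope. A hedged sketch on simulated data (real studies use M/EEG and richer models such as temporal response functions):

```python
import numpy as np

# Hedged sketch: cross-correlate a speech envelope with a neural signal and
# read off the lag of peak correlation. All signals here are simulated.

def best_lag(envelope, neural, max_lag):
    """Return (lag, r): the shift of `neural` (in samples) that maximizes its
    Pearson correlation with `envelope`, searched over lags 0..max_lag."""
    best = (0, -2.0)
    for lag in range(max_lag + 1):
        n = len(envelope) - max_lag          # fixed window for fair comparison
        r = np.corrcoef(envelope[:n], neural[lag:lag + n])[0, 1]
        if r > best[1]:
            best = (lag, float(r))
    return best

rng = np.random.default_rng(1)
# Smoothed noise as a stand-in for an aperiodic speech envelope
env = np.convolve(rng.standard_normal(1000), np.ones(20) / 20, mode="same")
delay = 12                                   # samples (e.g., ~120 ms at 100 Hz)
neural = np.roll(env, delay) + 0.05 * rng.standard_normal(1000)
lag, r = best_lag(env, neural, max_lag=30)
```

The recovered lag is a simple stand-in for the response latencies that tracking studies interpret; whether such tracking is speech-specific is exactly the open question the review addresses.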
Affiliation(s)
- Benedikt Zoefel
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
- Anne Kösem
- Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
9
Lei VLC, Leong TI, Leong CT, Liu L, Choi CU, Sereno MI, Li D, Huang R. Phase-encoded fMRI tracks down brainstorms of natural language processing with subsecond precision. Hum Brain Mapp 2024; 45:e26617. PMID: 38339788. PMCID: PMC10858339. DOI: 10.1002/hbm.26617.
Abstract
Natural language processing unfolds information over time as spatially separated, multimodal, and interconnected neural processes. Existing noninvasive subtraction-based neuroimaging techniques cannot simultaneously achieve the spatial and temporal resolutions required to visualize ongoing information flows across the whole brain. Here we have developed rapid phase-encoded designs to fully exploit the temporal information latent in functional magnetic resonance imaging data, as well as to overcome scanner-noise and head-motion challenges during overt language tasks. We captured real-time information flows as coherent hemodynamic waves traveling over the cortical surface during listening, reading aloud, reciting, and oral cross-language interpreting tasks. We observed the timing, location, direction, and surge of traveling waves in all language tasks, visualized as "brainstorms" on brain "weather" maps. The paths of hemodynamic traveling waves provide direct evidence for dual-stream models of the visual and auditory systems, as well as for logistics models of crossmodal and cross-language processing. Specifically, we tracked the step-by-step processing of written or spoken sentences: first received and processed by the visual or auditory streams, then carried across language and domain-general cognitive regions, and finally delivered as overt speech monitored through the auditory cortex, giving a complete picture of information flows across the brain during natural language functioning.
PRACTITIONER POINTS:
- Phase-encoded fMRI enables imaging with simultaneously high spatial and temporal resolution, capturing the continuous spatiotemporal dynamics of the entire brain during real-time overt natural language tasks.
- Spatiotemporal traveling-wave patterns provide direct evidence for constructing comprehensive and explicit models of human information processing.
- This study unlocks the potential of applying rapid phase-encoded fMRI to indirectly track the underlying neural information flows of sequential sensory, motor, and high-order cognitive processes.
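In a phase-encoded design the task repeats at a fixed frequency, so a voxel's response timing can be read off as the phase of the Fourier component at that frequency. A minimal numpy sketch of that core idea (the function name and the simulated voxel are illustrative, not taken from the paper's pipeline):

```python
import numpy as np

def phase_at_task_freq(ts, n_cycles):
    """Phase and amplitude of a time series at the task-cycling frequency.

    ts: 1-D BOLD time series (one sample per volume).
    n_cycles: number of task cycles in the run; rfft bin `n_cycles`
    corresponds to exactly one response peak per cycle.
    """
    spec = np.fft.rfft(ts - ts.mean())
    coeff = spec[n_cycles]
    return np.angle(coeff), np.abs(coeff)

# Simulated voxel: 8 task cycles over 240 volumes, response delayed by a
# known phase of 1.0 rad (the recovered angle of the FFT bin is -1.0).
rng = np.random.default_rng(0)
t = np.arange(240)
ts = np.cos(2 * np.pi * 8 * t / 240 - 1.0) + 0.1 * rng.standard_normal(240)
phase, amp = phase_at_task_freq(ts, 8)
```

Mapping this phase voxel-by-voxel is what produces the traveling-wave "weather maps": cortex that responds earlier in each cycle carries a smaller delay.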
Affiliation(s)
- Victoria Lai Cheng Lei
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Teng Ieng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Cheok Teng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
- Lili Liu
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
- Chi Un Choi
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Martin I. Sereno
- Department of Psychology, San Diego State University, San Diego, California, USA
- Defeng Li
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Ruey-Song Huang
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China

10
Tsunada J, Eliades SJ. Frontal-Auditory Cortical Interactions and Sensory Prediction During Vocal Production in Marmoset Monkeys. bioRxiv [Preprint] 2024:2024.01.28.577656. [PMID: 38352422] [PMCID: PMC10862695] [DOI: 10.1101/2024.01.28.577656]
Abstract
The control of speech and vocal production involves the calculation of error between the intended vocal output and the resulting auditory feedback. Consistent with this model, recent evidence has demonstrated that the auditory cortex is suppressed immediately before and during vocal production, yet is still sensitive to differences between vocal output and altered auditory feedback. This suppression has been suggested to be the result of top-down signals containing information about the intended vocal output, potentially originating from motor or other frontal cortical areas. However, whether such frontal areas are the source of suppressive and predictive signaling to the auditory cortex during vocalization is unknown. Here, we simultaneously recorded neural activity from both the auditory and frontal cortices of marmoset monkeys while they produced self-initiated vocalizations. We found increases in neural activity in both brain areas preceding the onset of vocal production, notably changes in both multi-unit activity and local field potential theta-band power. Connectivity analysis using Granger causality demonstrated that frontal cortex sends directed signaling to the auditory cortex during this pre-vocal period. Importantly, this pre-vocal activity predicted both vocalization-induced suppression of the auditory cortex as well as the acoustics of subsequent vocalizations. These results suggest that frontal cortical areas communicate with the auditory cortex preceding vocal production, with frontal-auditory signals that may reflect the transmission of sensory prediction information. This interaction between frontal and auditory cortices may contribute to mechanisms that calculate errors between intended and actual vocal outputs during vocal communication.
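The directed-connectivity logic used here (Granger causality: frontal activity improves prediction of auditory activity beyond the auditory signal's own past) reduces to comparing two least-squares models with an F-test. A self-contained numpy sketch on toy data (the simulated signals and function names are illustrative only, not the paper's analysis):

```python
import numpy as np

def lagged(s, lag):
    """Matrix whose columns are s[t-1] ... s[t-lag] for t = lag .. len(s)-1."""
    n = len(s)
    return np.column_stack([s[lag - k: n - k] for k in range(1, lag + 1)])

def granger_f(x, y, lag):
    """F-statistic for 'x Granger-causes y': does adding lags of x
    reduce the residual error of an autoregressive model of y?"""
    n = len(y)
    target = y[lag:]
    ones = np.ones((n - lag, 1))
    restricted = np.hstack([ones, lagged(y, lag)])
    full = np.hstack([restricted, lagged(x, lag)])
    rss = lambda X: np.sum((target - X @ np.linalg.lstsq(X, target, rcond=None)[0]) ** 2)
    rss_r, rss_f = rss(restricted), rss(full)
    df_den = n - lag - full.shape[1]
    return ((rss_r - rss_f) / lag) / (rss_f / df_den)

# Toy data: 'auditory' is driven by 'frontal' two samples earlier.
rng = np.random.default_rng(0)
frontal = rng.standard_normal(600)
auditory = np.zeros(600)
for t in range(2, 600):
    auditory[t] = 0.6 * frontal[t - 2] + 0.2 * rng.standard_normal()

f_fwd = granger_f(frontal, auditory, lag=3)  # large: frontal -> auditory
f_rev = granger_f(auditory, frontal, lag=3)  # near 1: no reverse influence
```

The asymmetry between the forward and reverse F-statistics is what licenses the claim of *directed* frontal-to-auditory signaling, as opposed to mere correlation.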
Affiliation(s)
- Joji Tsunada
- Chinese Institute for Brain Research, Beijing, China
- Department of Veterinary Medicine, Faculty of Agriculture, Iwate University, Morioka, Iwate, Japan
- Steven J. Eliades
- Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA

11
Vitória MA, Fernandes FG, van den Boom M, Ramsey N, Raemaekers M. Decoding Single and Paired Phonemes Using 7T Functional MRI. Brain Topogr 2024. [PMID: 38261272] [DOI: 10.1007/s10548-024-01034-6]
Abstract
Several studies have shown that mouth movements related to the pronunciation of individual phonemes are represented in the sensorimotor cortex. In principle, this would allow brain-computer interfaces to decode continuous speech by training classifiers on sensorimotor cortex activity related to the production of individual phonemes. To address this, we investigated the decodability of trials with individual and paired phonemes (pronounced consecutively with a one-second interval) using activity in the sensorimotor cortex. Fifteen participants pronounced 3 different phonemes and 3 combinations of two of the same phonemes in a 7T functional MRI experiment. We confirmed that support vector machine (SVM) classification of single and paired phonemes was possible. Importantly, by combining classifiers trained on single phonemes, we were able to classify paired phonemes with an accuracy of 53% (33% chance level), demonstrating that the activity of isolated phonemes is present and distinguishable within combined phonemes. An SVM searchlight analysis showed that phoneme representations are widely distributed across the ventral sensorimotor cortex. These findings provide insights into the neural representations of single and paired phonemes and support the notion that a speech BCI may be feasible using machine learning algorithms trained on individual phonemes with intracranial electrode grids.
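The combination idea — scoring trials with a classifier trained only on single phonemes — can be sketched with scikit-learn on synthetic "activity patterns". All data, dimensions, and names below are invented for illustration; the paper's actual features are sensorimotor fMRI voxels:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_vox = 50
# Hypothetical prototype activity pattern for each of 3 phonemes.
prototypes = rng.standard_normal((3, n_vox))

def make_trials(label, n):
    """Noisy single-phoneme trials scattered around the phoneme's prototype."""
    return prototypes[label] + 0.5 * rng.standard_normal((n, n_vox))

# Train a linear SVM on single-phoneme trials only.
X = np.vstack([make_trials(k, 40) for k in range(3)])
y = np.repeat(np.arange(3), 40)
clf = SVC(kernel='linear', probability=True).fit(X, y)

# Held-out single-phoneme trials give the single-phoneme decoding accuracy.
X_test = np.vstack([make_trials(k, 20) for k in range(3)])
y_test = np.repeat(np.arange(3), 20)
acc = clf.score(X_test, y_test)

# A 'paired' trial mixes two phonemes' activity; the single-phoneme
# classifier's probabilities can then be combined to score the pair.
pair = prototypes[0] + prototypes[2] + 0.5 * rng.standard_normal(n_vox)
probs = clf.predict_proba(pair[None, :])[0]
```

With well-separated patterns the single-phoneme decoder is accurate, and the probability vector for a mixed trial concentrates on the constituent phonemes — the same logic behind the paper's above-chance paired-phoneme classification.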
Affiliation(s)
- Maria Araújo Vitória
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Francisco Guerreiro Fernandes
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Max van den Boom
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Department of Physiology and Biomedical Engineering, Mayo Clinic, Rochester, MN, USA
- Nick Ramsey
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Mathijs Raemaekers
- Brain Center Rudolf Magnus, Department of Neurology and Neurosurgery, University Medical Center Utrecht, Utrecht, The Netherlands

12
Pressman PS, Montembeault M, Matthewson G, Lemieux E, Brusilovsky J, Miller BL, Gorno-Tempini ML, Rankin K, Levenson RW. Conversational turn-taking in frontotemporal dementia and related disorders. J Neurol Neurosurg Psychiatry 2024; 95:197-198. [PMID: 37802638] [PMCID: PMC10843648] [DOI: 10.1136/jnnp-2023-331389]
Affiliation(s)
- Peter S Pressman
- Neurology, University of Colorado Anschutz Medical Campus School of Medicine, Aurora, Colorado, USA
- Maxime Montembeault
- Neurology, University of California Memory and Aging Center, San Francisco, California, USA
- Department of Psychology, McGill University, Montreal, Quebec, Canada
- Gordon Matthewson
- Neurology, University of Colorado Anschutz Medical Campus School of Medicine, Aurora, Colorado, USA
- Eric Lemieux
- Medicine, Baylor University Medical Center, Dallas, Texas, USA
- Jane Brusilovsky
- Neurology, University of Colorado Anschutz Medical Campus School of Medicine, Aurora, Colorado, USA
- Bruce L Miller
- Memory and Aging Center, University of California Memory and Aging Center, San Francisco, California, USA
- Katherine Rankin
- Neurology, University of California Memory and Aging Center, San Francisco, California, USA

13
Assaneo MF, Orpella J. Rhythms in Speech. Adv Exp Med Biol 2024; 1455:257-274. [PMID: 38918356] [DOI: 10.1007/978-3-031-60183-5_14]
Abstract
Speech can be defined as the human ability to communicate through a sequence of vocal sounds. Consequently, speech requires an emitter (the speaker) capable of generating the acoustic signal and a receiver (the listener) able to successfully decode it. Time plays a central role at both ends of this interaction. On the one hand, speech production requires precise and rapid coordination, typically on the order of milliseconds, of the upper vocal tract articulators (i.e., tongue, jaw, lips, and velum), their composite movements, and the activation of the vocal folds. On the other hand, the generated acoustic signal unfolds in time, carrying information at different timescales. This information must be parsed and integrated by the receiver for the correct transmission of meaning. This chapter describes the temporal patterns that characterize the speech signal and reviews research exploring the neural mechanisms underlying the generation of these patterns and the role they play in speech comprehension.
Affiliation(s)
- M Florencia Assaneo
- Instituto de Neurobiología, Universidad Autónoma de México, Santiago de Querétaro, Mexico
- Joan Orpella
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC, USA

14
Castellucci GA, Kovach CK, Tabasi F, Christianson D, Greenlee JD, Long MA. A frontal cortical network is critical for language planning during spoken interaction. bioRxiv [Preprint] 2023:2023.08.26.554639. [PMID: 37693383] [PMCID: PMC10491113] [DOI: 10.1101/2023.08.26.554639]
Abstract
Many brain areas exhibit activity correlated with language planning, but the impact of these dynamics on spoken interaction remains unclear. Here we use direct electrical stimulation to transiently perturb cortical function in neurosurgical patient-volunteers performing a question-answer task. Stimulating structures involved in speech motor function evoked diverse articulatory deficits, while perturbations of caudal inferior and middle frontal gyri - which exhibit preparatory activity during conversational turn-taking - led to response errors. Perturbation of the same planning-related frontal regions slowed inter-speaker timing, while faster responses could result from stimulation of sites located in other areas. Taken together, these findings further indicate that caudal inferior and middle frontal gyri constitute a critical planning network essential for interactive language use.
15
Zhang T, Zhou S, Bai X, Zhou F, Zhai Y, Long Y, Lu C. Neurocomputations on dual-brain signals underlie interpersonal prediction during a natural conversation. Neuroimage 2023; 282:120400. [PMID: 37783363] [DOI: 10.1016/j.neuroimage.2023.120400]
Abstract
Prediction of the partner's speech plays a key role in a smooth conversation. However, previous studies of this issue have mainly been conducted at the single-brain rather than the dual-brain level, leaving the interpersonal prediction hypothesis untested. To fill this gap, this study combined a neurocomputational modeling approach with a natural conversation paradigm in which two salespersons persuaded a customer to buy their product while their haemodynamic signals were collected using functional near-infrared spectroscopy hyperscanning. First, the results showed a cognitive hierarchy in a natural conversation, with the lower-level process (i.e., pragmatic representation of the persuasion) in the salesperson interacting with the higher-level process (i.e., value representation of the product) in the customer. Next, we found that the right dorsolateral prefrontal cortex (rdlPFC) and right temporoparietal junction (rTPJ) were associated with the representation of the product's value in the customer, while the right inferior frontal cortex (rIFC) was associated with the representation of the pragmatic processes in the salesperson. Finally, neurocomputational modeling results supported the prediction of the salesperson's lower-level brain activity from the customer's higher-level brain activity. Moreover, the updating weight of the prediction model based on the neural computation between the salesperson's rIFC and the customer's rTPJ was closely associated with the interaction context, whereas that based on the rIFC-rdlPFC connection was not. In summary, these findings provide initial support for the interpersonal prediction hypothesis at the dual-brain level and reveal a hierarchy in the interpersonal prediction process.
Affiliation(s)
- Tengfei Zhang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Siyuan Zhou
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Chengdu 610066, PR China
- Xialu Bai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Faxin Zhou
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Yu Zhai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Yuhang Long
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Chunming Lu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China

16
Zhao L, Wang X. Frontal cortex activity during the production of diverse social communication calls in marmoset monkeys. Nat Commun 2023; 14:6634. [PMID: 37857618] [PMCID: PMC10587070] [DOI: 10.1038/s41467-023-42052-5]
Abstract
Vocal communication is essential for social behaviors in humans and non-human primates. While the frontal cortex is crucial to human speech production, its role in vocal production in non-human primates has long been questioned. It is unclear whether activities in the frontal cortex represent diverse vocal signals used in non-human primate communication. Here we studied single neuron activities and local field potentials (LFP) in the frontal cortex of male marmoset monkeys while the animal engaged in vocal exchanges with conspecifics in a social environment. We found that both single neuron activities and LFP were modulated by the production of each of the four major call types. Moreover, neural activities showed distinct patterns for different call types and theta-band LFP oscillations showed phase-locking to the phrases of twitter calls, suggesting a neural representation of vocalization features. Our results suggest important functions of the marmoset frontal cortex in supporting the production of diverse vocalizations in communication.
Affiliation(s)
- Lingyun Zhao
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA
- Department of Neurological Surgery, University of California, San Francisco, CA 94158, USA
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, The Johns Hopkins University School of Medicine, Baltimore, MD 21205, USA

17
Meier A, Kuzdeba S, Jackson L, Daliri A, Tourville JA, Guenther FH, Greenlee JDW. Lateralization and Time-Course of Cortical Phonological Representations during Syllable Production. eNeuro 2023; 10:ENEURO.0474-22.2023. [PMID: 37739786] [PMCID: PMC10561542] [DOI: 10.1523/eneuro.0474-22.2023]
Abstract
Spoken language contains information at a broad range of timescales, from phonetic distinctions on the order of milliseconds to semantic contexts which shift over seconds to minutes. It is not well understood how the brain's speech production systems combine features at these timescales into a coherent vocal output. We investigated the spatial and temporal representations in cerebral cortex of three phonological units with different durations: consonants, vowels, and syllables. Electrocorticography (ECoG) recordings were obtained from five participants while speaking single syllables. We developed a novel clustering and Kalman filter-based trend analysis procedure to sort electrodes into temporal response profiles. A linear discriminant classifier was used to determine how strongly each electrode's response encoded phonological features. We found distinct time-courses of encoding phonological units depending on their duration: consonants were represented more during speech preparation, vowels were represented evenly throughout trials, and syllables during production. Locations of strongly speech-encoding electrodes (the top 30% of electrodes) likewise depended on phonological element duration, with consonant-encoding electrodes left-lateralized, vowel-encoding hemispherically balanced, and syllable-encoding right-lateralized. The lateralization of speech-encoding electrodes depended on onset time, with electrodes active before or after speech production favoring left hemisphere and those active during speech favoring the right. Single-electrode speech classification revealed cortical areas with preferential encoding of particular phonemic elements, including consonant encoding in the left precentral and postcentral gyri and syllable encoding in the right middle frontal gyrus. Our findings support neurolinguistic theories of left hemisphere specialization for processing short-timescale linguistic units and right hemisphere processing of longer-duration units.
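Per-electrode encoding strength of the kind described — a linear discriminant classifier scoring how well a single electrode's response separates phonological categories — can be sketched as cross-validated classification accuracy. The synthetic "electrodes" and thresholds below are illustrative, not the paper's clustering-plus-Kalman pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_times = 120, 30
labels = rng.integers(0, 3, n_trials)  # 3 phonological categories

# Electrode A: response amplitude shifts with the category (informative).
elec_a = rng.standard_normal((n_trials, n_times)) + labels[:, None]
# Electrode B: pure noise (uninformative).
elec_b = rng.standard_normal((n_trials, n_times))

def encoding_strength(elec):
    """Mean 5-fold accuracy of an LDA decoding category from one electrode."""
    return cross_val_score(LinearDiscriminantAnalysis(), elec, labels, cv=5).mean()

strength_a = encoding_strength(elec_a)  # well above the 1/3 chance level
strength_b = encoding_strength(elec_b)  # near chance
```

Ranking electrodes by such a score and keeping the top fraction is one way to operationalize "strongly speech-encoding electrodes" before examining their lateralization.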
Affiliation(s)
- Andrew Meier
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215
- Scott Kuzdeba
- Graduate Program for Neuroscience, Boston University, Boston, MA 02215
- Liam Jackson
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215
- Ayoub Daliri
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215
- College of Health Solutions, Arizona State University, Tempe, AZ 85004
- Jason A Tourville
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215
- Frank H Guenther
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, MA 02215
- Department of Biomedical Engineering, Boston University, Boston, MA 02215
- Department of Radiology, Massachusetts General Hospital, Boston, MA 02215
- Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA 02215
- Jeremy D W Greenlee
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242

18
Costalunga G, Carpena CS, Seltmann S, Benichov JI, Vallentin D. Wild nightingales flexibly match whistle pitch in real time. Curr Biol 2023; 33:3169-3178.e3. [PMID: 37453423] [PMCID: PMC10414052] [DOI: 10.1016/j.cub.2023.06.044]
Abstract
Interactive vocal communication, similar to a human conversation, requires flexible and real-time changes to vocal output in relation to preceding auditory stimuli. These vocal adjustments are essential to ensuring both the suitable timing and content of the interaction. Precise timing of dyadic vocal exchanges has been investigated in a variety of species, including humans. In contrast, the ability of non-human animals to accurately adjust specific spectral features of vocalization extemporaneously in response to incoming auditory information is less well studied. One spectral feature of acoustic signals is the fundamental frequency, which we perceive as pitch. Many animal species can discriminate between sound frequencies, but real-time detection and reproduction of an arbitrary pitch have only been observed in humans. Here, we show that nightingales in the wild can match the pitch of whistle songs while singing in response to conspecifics or pitch-controlled whistle playbacks. Nightingales matched whistles across their entire pitch production range indicating that they can flexibly tune their vocal output along a wide continuum. Prompt whistle pitch matches were more precise than delayed ones, suggesting the direct mapping of auditory information onto a motor command to achieve online vocal replication of a heard pitch. Although nightingales' songs follow annual cycles of crystallization and deterioration depending on breeding status, the observed pitch-matching behavior is present year-round, suggesting a stable neural circuit independent of seasonal changes in physiology. Our findings represent the first case of non-human instantaneous vocal imitation of pitch, highlighting a promising model for understanding sensorimotor transformation within an interactive context. VIDEO ABSTRACT.
Affiliation(s)
- Giacomo Costalunga
- Neural Circuits for Vocal Communication Research Group, Max Planck Institute for Biological Intelligence, Eberhard-Gwinner-Str., Seewiesen 82319, Germany
- Carolina Sánchez Carpena
- Neural Circuits for Vocal Communication Research Group, Max Planck Institute for Biological Intelligence, Eberhard-Gwinner-Str., Seewiesen 82319, Germany
- Susanne Seltmann
- Neural Circuits for Vocal Communication Research Group, Max Planck Institute for Biological Intelligence, Eberhard-Gwinner-Str., Seewiesen 82319, Germany
- Jonathan I Benichov
- Neural Circuits for Vocal Communication Research Group, Max Planck Institute for Biological Intelligence, Eberhard-Gwinner-Str., Seewiesen 82319, Germany
- Daniela Vallentin
- Neural Circuits for Vocal Communication Research Group, Max Planck Institute for Biological Intelligence, Eberhard-Gwinner-Str., Seewiesen 82319, Germany

19
Abbasi O, Steingräber N, Chalas N, Kluger DS, Gross J. Spatiotemporal dynamics characterise spectral connectivity profiles of continuous speaking and listening. PLoS Biol 2023; 21:e3002178. [PMID: 37478152] [DOI: 10.1371/journal.pbio.3002178]
Abstract
Speech production and perception are fundamental processes of human cognition that both rely on intricate processing mechanisms that are still poorly understood. Here, we study these processes by using magnetoencephalography (MEG) to comprehensively map connectivity of regional brain activity within the brain and to the speech envelope during continuous speaking and listening. Our results reveal not only a partly shared neural substrate for both processes but also a dissociation in space, delay, and frequency. Neural activity in motor and frontal areas is coupled to succeeding speech in delta band (1 to 3 Hz), whereas coupling in the theta range follows speech in temporal areas during speaking. Neural connectivity results showed a separation of bottom-up and top-down signalling in distinct frequency bands during speaking. Here, we show that frequency-specific connectivity channels for bottom-up and top-down signalling support continuous speaking and listening. These findings further shed light on the complex interplay between different brain regions involved in speech production and perception.
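Band-limited coupling between brain activity and the speech envelope, as measured here, is commonly quantified with spectral coherence. A scipy sketch on simulated signals (the 2 Hz "envelope" and the coupled "neural" trace are invented for illustration; the paper uses MEG source signals):

```python
import numpy as np
from scipy.signal import coherence

fs = 200.0
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / fs)  # 60 s of signal at 200 Hz

# Speech envelope with a dominant ~2 Hz (delta-band) rhythm.
envelope = np.sin(2 * np.pi * 2 * t) + 0.3 * rng.standard_normal(t.size)
# Simulated neural signal coupled to the envelope at a fixed phase lag.
neural = 0.5 * np.sin(2 * np.pi * 2 * t + 0.8) + rng.standard_normal(t.size)

# Welch-averaged magnitude-squared coherence across frequencies.
f, coh = coherence(envelope, neural, fs=fs, nperseg=1024)
delta_coh = coh[(f >= 1) & (f <= 3)].max()     # strong coupling in delta
gamma_coh = coh[(f >= 30) & (f <= 40)].mean()  # negligible elsewhere
```

Comparing coherence spectra for speech that *follows* versus *precedes* the neural signal (by shifting one series before computing coherence) is one simple way to capture the delay asymmetries the study reports.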
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nadine Steingräber
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany

20
Lei VLC, Leong TI, Leong CT, Liu L, Choi CU, Sereno MI, Li D, Huang RS. Phase-encoded fMRI tracks down brainstorms of natural language processing with sub-second precision. bioRxiv [Preprint] 2023:2023.05.29.542546. [PMID: 37398177] [PMCID: PMC10312422] [DOI: 10.1101/2023.05.29.542546]
Abstract
The human language system interacts with cognitive and sensorimotor regions during natural language processing. However, where, when, and how these processes occur remain unclear. Existing noninvasive subtraction-based neuroimaging techniques cannot simultaneously achieve the spatial and temporal resolutions required to visualize ongoing information flows across the whole brain. Here we have developed phase-encoded designs to fully exploit the temporal information latent in functional magnetic resonance imaging (fMRI) data, as well as overcoming scanner noise and head-motion challenges during overt language tasks. We captured neural information flows as coherent waves traveling over the cortical surface during listening, reciting, and oral cross-language interpreting. The timing, location, direction, and surge of traveling waves, visualized as 'brainstorms' on brain 'weather' maps, reveal the functional and effective connectivity of the brain in action. These maps uncover the functional neuroanatomy of language perception and production and motivate the construction of finer-grained models of human information processing.
Affiliation(s)
- Teng Ieng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China
- Cheok Teng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China
- Lili Liu
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China
- Chi Un Choi
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China
- Martin I. Sereno
- Department of Psychology, San Diego State University, San Diego, CA, United States
- Defeng Li
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China
- Ruey-Song Huang
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, Macau SAR, China

21
Chu Q, Ma O, Hang Y, Tian X. Dual-stream cortical pathways mediate sensory prediction. Cereb Cortex 2023. [PMID: 37197767] [DOI: 10.1093/cercor/bhad168]
Abstract
Predictions are constantly generated from diverse sources to optimize cognitive functions in the ever-changing environment. However, the neural origin and generation process of top-down induced prediction remain elusive. We hypothesized that motor-based and memory-based predictions are mediated by distinct descending networks from motor and memory systems to the sensory cortices. Using functional magnetic resonance imaging (fMRI) and a dual imagery paradigm, we found that motor and memory upstream systems activated the auditory cortex in a content-specific manner. Moreover, the inferior and posterior parts of the parietal lobe differentially relayed predictive signals in motor-to-sensory and memory-to-sensory networks. Dynamic causal modeling of directed connectivity revealed selective enabling and modulation of connections that mediate top-down sensory prediction and ground the distinctive neurocognitive basis of predictive processing.
Affiliation(s)
- Qian Chu
- Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Division of Arts and Sciences, New York University Shanghai, Shanghai 200126, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Max Planck-University of Toronto Centre for Neural Science and Technology, Toronto, ON M5S 2E4, Canada
- Ou Ma
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
- Yuqi Hang
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Department of Administration, Leadership, and Technology, Steinhardt School of Culture, Education, and Human Development, New York University, New York, NY 10003, United States
- Xing Tian
- Shanghai Frontiers Science Center of Artificial Intelligence and Deep Learning, Division of Arts and Sciences, New York University Shanghai, Shanghai 200126, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai 200062, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China

22
Kuhlen AK, Abdel Rahman R. Beyond speaking: neurocognitive perspectives on language production in social interaction. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210483. [PMID: 36871592] [PMCID: PMC9985974] [DOI: 10.1098/rstb.2021.0483]
Abstract
The human faculty of speech has evolved, so it has been argued, for communicating with others and engaging in social interaction. Hence the human cognitive system should be equipped to address the demands that social interaction places on the language production system. These demands include the need to coordinate speaking with listening, to integrate one's own (verbal) actions with the interlocutor's actions, and to adapt language flexibly to the interlocutor and the social context. To meet these demands, core processes of language production are supported by cognitive processes that enable interpersonal coordination and social cognition. To fully understand the cognitive architecture, and its neural implementation, that enables humans to speak in social interaction, our understanding of how humans produce language needs to be connected to our understanding of how humans gain insight into other people's mental states and coordinate in social interaction. This article reviews theories and neurocognitive experiments that make this connection and can thereby advance our understanding of speaking in social interaction. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.
Affiliation(s)
- Anna K. Kuhlen
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
- Rasha Abdel Rahman
- Department of Psychology, Humboldt-Universität zu Berlin, 12489 Berlin, Germany
23
Wiesman AI, Donhauser PW, Degroot C, Diab S, Kousaie S, Fon EA, Klein D, Baillet S. Aberrant neurophysiological signaling associated with speech impairments in Parkinson's disease. NPJ Parkinsons Dis 2023; 9:61. [PMID: 37059749 PMCID: PMC10104849 DOI: 10.1038/s41531-023-00495-z] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2022] [Accepted: 03/16/2023] [Indexed: 04/16/2023] Open
Abstract
Difficulty producing intelligible speech is a debilitating symptom of Parkinson's disease (PD). Yet, both the robust evaluation of speech impairments and the identification of the affected brain systems are challenging. Using task-free magnetoencephalography, we examine the spectral and spatial definitions of the functional neuropathology underlying reduced speech quality in patients with PD using a new approach to characterize speech impairments and a novel brain-imaging marker. We found that the interactive scoring of speech impairments in PD (N = 59) is reliable across non-expert raters, and better related to the hallmark motor and cognitive impairments of PD than automatically-extracted acoustical features. By relating these speech impairment ratings to neurophysiological deviations from healthy adults (N = 65), we show that articulation impairments in patients with PD are associated with aberrant activity in the left inferior frontal cortex, and that functional connectivity of this region with somatomotor cortices mediates the influence of cognitive decline on speech deficits.
Affiliation(s)
- Alex I Wiesman
- Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Peter W Donhauser
- Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Clotilde Degroot
- Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Sabrina Diab
- Department of Psychology, Université du Québec à Montréal, Montréal, QC, Canada
- Shanna Kousaie
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Edward A Fon
- Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Denise Klein
- Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
- Center for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Sylvain Baillet
- Montreal Neurological Institute, McGill University, 3801 Rue University, Montreal, QC, Canada
24
Pérez A, Davis MH. Speaking and listening to inter-brain relationships. Cortex 2023; 159:54-63. [PMID: 36608420 DOI: 10.1016/j.cortex.2022.12.002] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/22/2022] [Revised: 10/11/2022] [Accepted: 12/06/2022] [Indexed: 12/23/2022]
Abstract
Studies of inter-brain relationships are thriving, yet the scientific community has raised many reservations about the scope and interpretation of these phenomena. It is thus essential to establish common ground on methodological and conceptual definitions related to this topic and to open debate about any remaining points of uncertainty. Here we offer insights to improve the conceptual clarity and empirical standards of social-neuroscience studies of interpersonal interaction using hyperscanning, with a particular focus on verbal communication.
Affiliation(s)
- Alejandro Pérez
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
- Matthew H Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
25
Banerjee A, Chen F, Druckmann S, Long MA. Neural dynamics in the rodent motor cortex enables flexible control of vocal timing. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.01.23.525252. [PMID: 36747850 PMCID: PMC9900850 DOI: 10.1101/2023.01.23.525252] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
Neocortical activity is thought to mediate voluntary control over vocal production, but the underlying neural mechanisms remain unclear. In a highly vocal rodent, Alston's singing mouse, we investigate neural dynamics in the orofacial motor cortex (OMC), a structure critical for vocal behavior. We first describe neural activity that is modulated by component notes (approx. 100 ms), likely representing sensory feedback. At longer timescales, however, OMC neurons exhibit diverse and often persistent premotor firing patterns that stretch or compress with song duration (approx. 10 s). Using computational modeling, we demonstrate that such temporal scaling, acting via downstream motor production circuits, can enable vocal flexibility. These results provide a framework for studying hierarchical control circuits, a common design principle across many natural and artificial systems.
Affiliation(s)
- Arkarup Banerjee
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
- Feng Chen
- Department of Applied Physics, Stanford University, Stanford, CA 94305, USA
- Shaul Druckmann
- Department of Neuroscience, Stanford University, Stanford, CA 94304, USA
- Michael A Long
- NYU Neuroscience Institute, New York University Langone Health, New York, NY 10016, USA
- Department of Otolaryngology, New York University Langone Health, New York, NY 10016, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
- Cold Spring Harbor Laboratory, Cold Spring Harbor, NY 11724, USA
26
Bao Q, Zhang Z, Luo H, Tao X. Evaluating and Modeling the Degradation of PLA/PHB Fabrics in Marine Water. Polymers (Basel) 2022; 15:polym15010082. [PMID: 36616431 PMCID: PMC9823644 DOI: 10.3390/polym15010082] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2022] [Revised: 12/19/2022] [Accepted: 12/20/2022] [Indexed: 12/28/2022] Open
Abstract
Developing degradable bio-plastics is considered a feasible way to lessen marine plastic pollution. However, consensus is still lacking on the actual degradability of bio-plastics such as polylactide (PLA) and poly(hydroxybutyrate) (PHB). We therefore studied the degradability of fabrics made from PLA/PHB blends in marine seawater. The dry-mass percentage of the PLA/PHB fabrics decreased progressively from 100% to 85-90% after eight weeks of immersion. Two environmental aging parameters (UV irradiation and aeration) were also confirmed to accelerate the abiotic hydrolysis of the incubated fabrics. The changes in the molecular structure of the PLA/PHB polymers after degradation were investigated by electrospray ionization mass spectrometry (ESI-MS). By contrast, the hydrolytic degradability of the bulk PLA/PHB blends used to produce these fabrics was negligible under identical conditions: the solid PLA/PHB plastics showed no mass loss, only a decrease in tensile strength. Finally, a deep-learning artificial neural network model was proposed to model and predict the nonlinear abiotic hydrolysis behavior of the PLA/PHB fabrics. The degradability of PLA/PHB fabrics in marine water under the synergistic destructive effects of seawater, UV, and dissolved oxygen points toward more sustainable textile fibers and apparel products.
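The abstract mentions a deep-learning neural network fitted to nonlinear abiotic hydrolysis data. As a minimal sketch of that general idea only — not the authors' model — the snippet below trains a one-hidden-layer network to map immersion time plus two aging flags (UV, aeration) to residual dry mass. The architecture, training settings, and every data value are hypothetical placeholders.

```python
# Minimal sketch (NOT the paper's model): a one-hidden-layer neural network
# fit to hypothetical degradation data -- residual dry mass (%) of a fabric
# as a nonlinear function of immersion time and two aging flags (UV, aeration).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: [weeks immersed, UV on, aerated]; target: dry mass %.
X = np.array([[w, uv, air] for w in range(9) for uv in (0, 1) for air in (0, 1)],
             dtype=float)
# Assumed saturating mass-loss curve, faster with UV/aeration (illustrative).
y = 100.0 - 12.0 * (1 - np.exp(-0.3 * X[:, 0] * (1 + 0.4 * X[:, 1] + 0.3 * X[:, 2])))

# Standardize inputs; scale target to [0, 1] for stable training.
Xs = (X - X.mean(0)) / (X.std(0) + 1e-9)
ys = y / 100.0

H = 16  # hidden units
W1 = rng.normal(0, 0.5, (3, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);      b2 = 0.0

for _ in range(5000):              # full-batch gradient descent
    h = np.tanh(Xs @ W1 + b1)      # hidden activations
    pred = h @ W2 + b2             # predicted scaled dry mass
    err = pred - ys
    gW2 = h.T @ err / len(ys); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h**2)   # backprop through tanh
    gW1 = Xs.T @ gh / len(ys); gb1 = gh.mean(0)
    W1 -= 0.1 * gW1; b1 -= 0.1 * gb1
    W2 -= 0.1 * gW2; b2 -= 0.1 * gb2

mse = float(np.mean((pred * 100 - y) ** 2))
print(f"train MSE: {mse:.3f} (squared %-dry-mass units)")
```

The point of the sketch is simply that a small regression network can absorb the interaction between immersion time and the aging parameters without an explicit kinetic equation.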
Affiliation(s)
- Qi Bao
- Research Institute of Intelligent Wearable Systems, The Hong Kong Polytechnic University, Hong Kong 999077, China
- School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Ziheng Zhang
- Research Institute of Intelligent Wearable Systems, The Hong Kong Polytechnic University, Hong Kong 999077, China
- School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Heng Luo
- Research Institute of Intelligent Wearable Systems, The Hong Kong Polytechnic University, Hong Kong 999077, China
- School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Xiaoming Tao
- Research Institute of Intelligent Wearable Systems, The Hong Kong Polytechnic University, Hong Kong 999077, China
- School of Fashion and Textiles, The Hong Kong Polytechnic University, Hong Kong 999077, China
- Correspondence: ; Tel.: +852-2766-6470; Fax: +852-2766-6470
27
The role of the basal ganglia and cerebellum in adaptation to others' speech rate and rhythm: A study of patients with Parkinson's disease and cerebellar degeneration. Cortex 2022; 157:81-98. [PMID: 36274444 DOI: 10.1016/j.cortex.2022.08.012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Revised: 06/11/2022] [Accepted: 08/23/2022] [Indexed: 12/15/2022]
Abstract
BACKGROUND Spoken language is constantly undergoing change: Speakers within and across social and regional groups influence each other's speech, leading to the emergence and drifts of accents in a language. These processes are driven by mutual unintentional imitation of the phonetic details of others' speech in conversational interactions, suggesting that continuous auditory-motor adaptation takes place in interactive language use and plasticity of auditory-motor representations of speech persists across the lifespan. The brain mechanisms underlying this large-scale social-linguistic behavior are still poorly understood. RESEARCH AIM To investigate the role of cerebellar and basal ganglia dysfunctions in unintended adaptation to the speech rhythm and articulation rate of a second speaker. METHODS Twelve patients with spinocerebellar ataxia type 6 (SCA6), 15 patients with Parkinson's disease (PD), and 27 neurologically healthy controls (CTRL) participated in two interactive speech tasks, i.e., sentence repetition and "turn-taking" (i.e., dyadic interaction with sentences produced by a model speaker). Production of scripted sentences was used as a control task. Two types of sentence rhythm were distinguished, i.e., regular and irregular, and model speech rate was manipulated in 12 steps between 2.9 and 4.0 syllables per second. Acoustic analyses of the participants' utterances were performed to determine the extent to which participants adapted their speech rate and rhythm to the model. RESULTS Neurologically healthy speakers showed significant adaptation of rate in all conditions, and of rhythm in the repetition task and partly also the turn-taking task. Patients with PD showed a stronger propensity to adapt than the controls. In contrast, the patients with cerebellar degeneration were largely insensitive to the model speaker's rate and rhythm. Contrary to expectations, sentences with an irregular speech rhythm exerted a stronger adaptive attraction than regular sentences in the two patient groups. CONCLUSIONS Cerebellar degeneration inhibits the propensity to covertly adapt to others' speech. Striatal dysfunction in Parkinson's disease spares or even promotes the tendency to accommodate to other speakers' speech rate and rhythm.
28
Silva AB, Liu JR, Zhao L, Levy DF, Scott TL, Chang EF. A Neurosurgical Functional Dissection of the Middle Precentral Gyrus during Speech Production. J Neurosci 2022; 42:8416-8426. [PMID: 36351829 PMCID: PMC9665919 DOI: 10.1523/jneurosci.1614-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2022] [Accepted: 08/30/2022] [Indexed: 11/17/2022] Open
Abstract
Classical models have traditionally focused on the left posterior inferior frontal gyrus (Broca's area) as a key region for motor planning of speech production. However, converging evidence suggests that it is not critical for either speech motor planning or execution. Alternative cortical areas supporting high-level speech motor planning have yet to be defined. In this review, we focus on the precentral gyrus, whose role in speech production is often thought to be limited to lower-level articulatory muscle control. In particular, we highlight neurosurgical investigations that have shed light on a cortical region anatomically located near the midpoint of the precentral gyrus, hence called the middle precentral gyrus (midPrCG). The midPrCG is functionally located between dorsal hand and ventral orofacial cortical representations and exhibits unique sensorimotor and multisensory functions relevant for speech processing. This includes motor control of the larynx, auditory processing, as well as a role in reading and writing. Furthermore, direct electrical stimulation of midPrCG can evoke complex movements, such as vocalization, and selective injury can cause deficits in verbal fluency, such as pure apraxia of speech. Based on these findings, we propose that midPrCG is essential to phonological-motoric aspects of speech production, especially syllabic-level speech sequencing, a role traditionally ascribed to Broca's area. The midPrCG is a cortical brain area that should be included in contemporary models of speech production with a unique role in speech motor planning and execution.
Affiliation(s)
- Alexander B Silva
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Medical Scientist Training Program, University of California, San Francisco, California, 94158
- Graduate Program in Bioengineering, University of California, Berkeley, California 94720, & University of California, San Francisco, California, 94158
- Jessie R Liu
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Graduate Program in Bioengineering, University of California, Berkeley, California 94720, & University of California, San Francisco, California, 94158
- Lingyun Zhao
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Deborah F Levy
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Terri L Scott
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, California, 94158
- Weill Institute for Neurosciences, University of California, San Francisco, California, 94158
- Graduate Program in Bioengineering, University of California, Berkeley, California 94720, & University of California, San Francisco, California, 94158
29
Castellucci GA, Guenther FH, Long MA. A Theoretical Framework for Human and Nonhuman Vocal Interaction. Annu Rev Neurosci 2022; 45:295-316. [PMID: 35316612 PMCID: PMC9909589 DOI: 10.1146/annurev-neuro-111020-094807] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Vocal communication is a critical feature of social interaction across species; however, the relation between such behavior in humans and nonhumans remains unclear. To enable comparative investigation of this topic, we review the literature pertinent to interactive language use and identify the superset of cognitive operations involved in generating communicative action. We posit these functions comprise three intersecting multistep pathways: (a) the Content Pathway, which selects the movements constituting a response; (b) the Timing Pathway, which temporally structures responses; and (c) the Affect Pathway, which modulates response parameters according to internal state. These processing streams form the basis of the Convergent Pathways for Interaction framework, which provides a conceptual model for investigating the cognitive and neural computations underlying vocal communication across species.
Affiliation(s)
- Gregg A. Castellucci
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA
- Frank H. Guenther
- Departments of Speech, Language & Hearing Sciences and Biomedical Engineering, Boston University, Boston, MA, USA
- Michael A. Long
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, NY, USA
30
Binding LP, Dasgupta D, Giampiccolo D, Duncan JS, Vos SB. Structure and function of language networks in temporal lobe epilepsy. Epilepsia 2022; 63:1025-1040. [PMID: 35184291 PMCID: PMC9773900 DOI: 10.1111/epi.17204] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Revised: 02/09/2022] [Accepted: 02/16/2022] [Indexed: 12/30/2022]
Abstract
Individuals with temporal lobe epilepsy (TLE) may have significant language deficits. Language capabilities may further decline following temporal lobe resections. The language network, comprising dispersed gray matter regions interconnected with white matter fibers, may be atypical in individuals with TLE. This review explores the structural changes to the language network and the functional reorganization of language abilities in TLE. We discuss the importance of detailed reporting of patient characteristics, such as left- versus right-sided focal epilepsy and lesional versus nonlesional pathological subtypes. These factors can affect the healthy functioning of gray and/or white matter. White matter dysfunction and displacement of gray matter function could jointly impair language ability, in turn producing an interactive effect on typical language organization and function. Surgical intervention can result in impairment of function if the resection includes parts of this structure-function network that are critical to language. In addition, impairment may occur if language function has been reorganized and is included in a resection. Conversely, resection of an epileptogenic zone may be associated with recovery of cortical function and thus improvement in language function. We explore the abnormality of functional regions in a clinically applicable framework and highlight the differences in the underlying language network. Avoidance of language decline following surgical intervention may depend on tailored resections that spare critical areas of gray matter and their white matter connections. Further work is required to elucidate the plasticity of the language network in TLE and to identify subtypes of language representation, both of which will be useful in planning surgery to spare language function.
Affiliation(s)
- Lawrence P. Binding
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, UK
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, UK
- Debayan Dasgupta
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Davide Giampiccolo
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, UK
- Victor Horsley Department of Neurosurgery, National Hospital for Neurology and Neurosurgery, London, UK
- Institute of Neuroscience, Cleveland Clinic London, London, UK
- Department of Neurosurgery, Verona University Hospital, University of Verona, Verona, Italy
- John S. Duncan
- Department of Clinical and Experimental Epilepsy, UCL Queen Square Institute of Neurology, University College London, London, UK
- Sjoerd B. Vos
- Department of Computer Science, Centre for Medical Image Computing, University College London, London, UK
- Neuroradiological Academic Unit, UCL Queen Square Institute of Neurology, University College London, London, UK
- Centre for Microscopy, Characterisation, and Analysis, The University of Western Australia, Nedlands, Western Australia, Australia
31
Banerjee A, Vallentin D. Convergent behavioral strategies and neural computations during vocal turn-taking across diverse species. Curr Opin Neurobiol 2022; 73:102529. [DOI: 10.1016/j.conb.2022.102529] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2021] [Revised: 02/21/2022] [Accepted: 03/02/2022] [Indexed: 01/20/2023]