1. Kurteff GL, Field AM, Asghar S, Tyler-Kabara EC, Clarke D, Weiner HL, Anderson AE, Watrous AJ, Buchanan RJ, Modur PN, Hamilton LS. Processing of auditory feedback in perisylvian and insular cortex. bioRxiv 2024:2024.05.14.593257. PMID: 38798574; PMCID: PMC11118286; DOI: 10.1101/2024.05.14.593257
Abstract
When we speak, we not only make movements with our mouth, lips, and tongue, but we also hear the sound of our own voice. Thus, speech production in the brain involves not only controlling the movements we make, but also processing auditory and sensory feedback. Auditory responses are typically suppressed during speech production compared to perception, but how this suppression manifests across space and time is unclear. Here we recorded intracranial EEG in seventeen pediatric, adolescent, and adult patients with medication-resistant epilepsy who performed a reading/listening task to investigate how auditory responses are modulated during speech production. We identified onset and sustained responses to speech in bilateral auditory cortex, with a selective suppression of onset responses during speech production. Onset responses provide a temporal landmark during speech perception that is redundant with forward prediction during speech production. Phonological feature tuning in these "onset suppression" electrodes remained stable between perception and production. Notably, the posterior insula responded at sentence onset for both perception and production, suggesting a role in multisensory integration during feedback control.
Affiliation(s)
- Garret Lynn Kurteff
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Alyssa M. Field
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Saman Asghar
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Elizabeth C. Tyler-Kabara
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Dave Clarke
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Pediatrics, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Howard L. Weiner
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Anne E. Anderson
- Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA
- Andrew J. Watrous
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, USA
- Robert J. Buchanan
- Department of Neurosurgery, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Pradeep N. Modur
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Liberty S. Hamilton
- Department of Speech, Language, and Hearing Sciences, Moody College of Communication, The University of Texas at Austin, Austin, TX, USA
- Department of Neurology, Dell Medical School, The University of Texas at Austin, Austin, TX, USA
- Lead contact
2. Papadatou-Pastou M, Papadopoulou AK, Samsouris C, Mundorf A, Valtou MM, Ocklenburg S. Hand Preference in Stuttering: Meta-Analyses. Neuropsychol Rev 2023. PMID: 37796428; DOI: 10.1007/s11065-023-09617-z
Abstract
Reduced hemispheric asymmetries, as well as their behavioral manifestation in the form of atypical handedness (i.e., non-right-, left-, or mixed-handedness), are linked to neurodevelopmental disorders, such as autism spectrum disorder, and several psychiatric disorders, such as schizophrenia. One neurodevelopmental disorder that is associated with reduced hemispheric asymmetries, but for which findings on behavioral laterality are conflicting, is stuttering. Here, we report a series of meta-analyses of studies that report handedness (assessed as hand preference) levels in individuals who stutter (otherwise healthy) compared to controls. For this purpose, articles were identified via a search in PubMed, Scopus, and PsycInfo (13 June 2023). On the basis of k = 52 identified studies totaling n = 2590 individuals who stutter and n = 17,148 controls, five random-effects meta-analyses were conducted: four using the odds ratio as the effect size (left-handers [forced choice], left-handers [extreme], mixed-handers, and non-right-handers, each compared against the total) and one using the standardized difference in means. We did not find evidence of a difference in left- (extreme) or mixed-handedness or in mean handedness scores, but evidence of a difference did emerge for left-handedness (forced choice) and, inconclusively, for non-right-handedness. Risk-of-bias analysis was not deemed necessary in the context of these meta-analyses. Differences in hand skill or strength of handedness could not be assessed, as no pertinent studies were located. Severity of stuttering could not be used as a moderator, as too few studies broke down their data according to severity. Our findings do not allow for firm conclusions to be drawn on whether stuttering is associated with reduced hemispheric asymmetries, at least when it comes to their behavioral manifestation.
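The abstract names its effect sizes but not its computational pipeline. For readers who want to see what the pooling step involves, here is a minimal sketch of a random-effects (DerSimonian-Laird) odds-ratio meta-analysis in pure Python; the function name, the 0.5 continuity correction, and the 95% normal-approximation interval are illustrative assumptions, not details taken from the paper.

```python
import math

def dersimonian_laird_or(studies):
    """Pool 2x2 study counts into a random-effects odds ratio.

    studies: list of (a, b, c, d) tuples, where a = atypical-handed
    cases, b = typically handed cases, c = atypical-handed controls,
    d = typically handed controls. A 0.5 continuity correction is
    applied to tables containing a zero cell.
    Returns (pooled OR, (95% CI low, high), tau^2).
    """
    y, w = [], []  # per-study log odds ratios and inverse-variance weights
    for a, b, c, d in studies:
        if 0 in (a, b, c, d):
            a, b, c, d = a + 0.5, b + 0.5, c + 0.5, d + 0.5
        y.append(math.log((a * d) / (b * c)))
        w.append(1.0 / (1 / a + 1 / b + 1 / c + 1 / d))
    k = len(y)
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    # DerSimonian-Laird moment estimate of between-study variance tau^2
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    c_ = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c_)
    # random-effects weights fold tau^2 into each study's variance
    w_re = [1.0 / (1 / wi + tau2) for wi in w]
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
    return math.exp(pooled), (math.exp(lo), math.exp(hi)), tau2
```

With homogeneous studies the between-study variance estimate collapses to zero and the random-effects result coincides with the fixed-effect one, which is a quick sanity check on any implementation.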
Affiliation(s)
- Marietta Papadatou-Pastou
- National and Kapodistrian University of Athens, Athens, Greece
- Biomedical Research Foundation, Academy of Athens, Athens, Greece
- Christos Samsouris
- National and Kapodistrian University of Athens, Athens, Greece
- Biomedical Research Foundation, Academy of Athens, Athens, Greece
- Annakarina Mundorf
- Institute for Systems Medicine and Department of Human Medicine, MSH Medical School Hamburg, Hamburg, Germany
- Sebastian Ocklenburg
- Department of Psychology, Medical School Hamburg, Hamburg, Germany
- ICAN Institute for Cognitive and Affective Neuroscience, Medical School Hamburg, Hamburg, Germany
- Institute of Cognitive Neuroscience, Biopsychology, Department of Psychology, Ruhr-University Bochum, Bochum, Germany
3. Cuadros J, Z-Rivera L, Castro C, Whitaker G, Otero M, Weinstein A, Martínez-Montes E, Prado P, Zañartu M. DIVA Meets EEG: Model Validation Using Formant-Shift Reflex. Appl Sci (Basel) 2023;13:7512. PMID: 38435340; PMCID: PMC10906992; DOI: 10.3390/app13137512
Abstract
The neurocomputational model 'Directions into Velocities of Articulators' (DIVA) was developed to account for various aspects of normal and disordered speech production and acquisition. The neural substrates of DIVA were established through functional magnetic resonance imaging (fMRI), providing physiological validation of the model. This study introduces DIVA_EEG, an extension of DIVA that uses electroencephalography (EEG) to leverage its high temporal resolution and broader availability relative to fMRI. For the development of DIVA_EEG, EEG-like signals were derived from the original equations describing the activity of the different DIVA maps. Synthetic EEG associated with the utterance of syllables was generated when both unperturbed and perturbed auditory feedback (first-formant perturbations) were simulated. The cortical activation maps derived from synthetic EEG closely resembled those of the original DIVA model. To validate DIVA_EEG, the EEG of individuals with typical voices (N = 30) was acquired during an altered auditory feedback paradigm. The resulting empirical brain activity maps significantly overlapped with those predicted by DIVA_EEG. In conjunction with other recent model extensions, DIVA_EEG lays the foundations for a complete neurocomputational framework for vocal and speech disorders, one that can guide model-driven personalized interventions.
Affiliation(s)
- Jhosmary Cuadros
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Grupo de Bioingeniería, Decanato de Investigación, Universidad Nacional Experimental del Táchira, San Cristóbal 5001, Venezuela
- Lucía Z-Rivera
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Christian Castro
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Grace Whitaker
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Mónica Otero
- Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, Santiago 8420524, Chile
- Centro Basal Ciencia & Vida, Universidad San Sebastián, Santiago 8580000, Chile
- Alejandro Weinstein
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Pavel Prado
- Escuela de Fonoaudiología, Facultad de Odontología y Ciencias de la Rehabilitación, Universidad San Sebastián, Santiago 7510602, Chile
- Matías Zañartu
- Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
4. Terband H, van Brenk F. Modeling Responses to Auditory Feedback Perturbations in Adults, Children, and Children With Complex Speech Sound Disorders: Evidence for Impaired Auditory Self-Monitoring? J Speech Lang Hear Res 2023;66:1563-1587. PMID: 37071803; DOI: 10.1044/2023_jslhr-22-00379
Abstract
Purpose: Previous studies have found that typically developing (TD) children were able to compensate for and adapt to auditory feedback perturbations to a similar or larger degree compared to young adults, while children with speech sound disorder (SSD) were found to produce predominantly following responses. However, large individual differences lie underneath the group-level results. This study investigates possible mechanisms in responses to formant shifts by modeling parameters of feedback and feedforward control of speech production based on behavioral data.
Method: SimpleDIVA was used to model an existing dataset of compensation/adaptation behavior to auditory feedback perturbations collected from three groups of Dutch speakers: 50 young adults, twenty-three 4- to 8-year-old children with TD speech, and seven 4- to 8-year-old children with SSD. Between-groups and individual within-group differences in model outcome measures representing auditory and somatosensory feedback control gain and feedforward learning rate were assessed.
Results: Notable between-groups and within-group variation was found for all outcome measures. Data modeled for individual speakers yielded model fits with varying reliability. Auditory feedback control gain was negative in children with SSD and positive in both other groups. Somatosensory feedback control gain was negative for both groups of children and marginally negative for adults. Feedforward learning rate measures were highest in the children with TD speech, followed by children with SSD, compared to adults.
Conclusions: The SimpleDIVA model was able to account for responses to the perturbation of auditory feedback other than corrective, as negative auditory feedback control gains were associated with following responses to vowel shifts. These preliminary findings are suggestive of impaired auditory self-monitoring in children with complex SSD. Possible mechanisms underlying the nature of following responses are discussed.
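The abstract names SimpleDIVA's three fitted parameters (auditory feedback control gain, somatosensory feedback control gain, feedforward learning rate) but does not reproduce the model's equations. The sketch below is a schematic three-parameter adaptation loop in the spirit of such a model, not the published SimpleDIVA equations; every name and numeric choice is illustrative. It demonstrates the abstract's key observation: a positive auditory gain yields compensation (output opposes the shift), while a negative gain yields a following response.

```python
def simulate_adaptation(pert, aud_gain, som_gain, rate, trials=40, target=500.0):
    """Schematic feedback/feedforward adaptation loop (illustrative only).

    pert: sustained shift (Hz) added to auditory feedback.
    aud_gain, som_gain: feedback control gains; rate: feedforward learning rate.
    Returns the produced first formant (Hz) on each trial.
    """
    ff = target  # feedforward command starts on target
    produced_per_trial = []
    for _ in range(trials):
        heard = ff + pert                 # perturbed auditory feedback
        e_aud = target - heard            # auditory error
        e_som = target - ff               # somatosensory error (felt vs. target)
        correction = aud_gain * e_aud + som_gain * e_som
        produced_per_trial.append(ff + correction)  # within-trial compensation
        ff += rate * correction           # slow feedforward relearning (adaptation)
    return produced_per_trial
```

Running it with a +100 Hz shift, a positive auditory gain pushes productions below the 500 Hz target (compensation that deepens across trials), whereas flipping the sign of the gain pushes productions above it, mimicking the following responses reported for the children with SSD.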
Affiliation(s)
- Hayo Terband
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Frits van Brenk
- Faculty of Humanities, Department of Languages, Literature and Communication & Institute for Language Sciences, Utrecht University, the Netherlands
- Department of Communicative Disorders and Sciences, University at Buffalo, NY
5. Pérez A, Davis MH, Ince RAA, Zhang H, Fu Z, Lamarca M, Lambon Ralph MA, Monahan PJ. Timing of brain entrainment to the speech envelope during speaking, listening and self-listening. Cognition 2022;224:105051. PMID: 35219954; PMCID: PMC9112165; DOI: 10.1016/j.cognition.2022.105051
Abstract
This study investigates the dynamics of speech envelope tracking during speech production, listening and self-listening. We use a paradigm in which participants listen to natural speech (Listening), produce natural speech (Speech Production), and listen to the playback of their own speech (Self-Listening), all while their neural activity is recorded with EEG. After time-locking the EEG recordings to the auditory recording and playback, we used a Gaussian copula mutual information measure to estimate the relationship between the information content of the EEG and auditory signals. In the 2-10 Hz frequency range, we identified different latencies for maximal speech envelope tracking during speech production and speech perception. Maximal speech tracking takes place approximately 110 ms after auditory presentation during perception and 25 ms before vocalisation during speech production. These results describe a specific timeline for speech tracking in speakers and listeners, in line with the idea of a speech chain and, hence, with the delays inherent in communication.
Affiliation(s)
- Alejandro Pérez
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK; Department of Language Studies, University of Toronto Scarborough, Canada; Department of Psychology, University of Toronto Scarborough, Canada
- Matthew H Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, UK
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, UK
- Hanna Zhang
- Department of Language Studies, University of Toronto Scarborough, Canada; Department of Linguistics, University of Toronto, Canada
- Zhanao Fu
- Department of Language Studies, University of Toronto Scarborough, Canada; Department of Linguistics, University of Toronto, Canada
- Melanie Lamarca
- Department of Language Studies, University of Toronto Scarborough, Canada
- Philip J Monahan
- Department of Language Studies, University of Toronto Scarborough, Canada; Department of Psychology, University of Toronto Scarborough, Canada
6. Ozker M, Doyle W, Devinsky O, Flinker A. A cortical network processes auditory error signals during human speech production to maintain fluency. PLoS Biol 2022;20:e3001493. PMID: 35113857; PMCID: PMC8812883; DOI: 10.1371/journal.pbio.3001493
Abstract
Hearing one’s own voice is critical for fluent speech production as it allows for the detection and correction of vocalization errors in real time. This behavior, known as the auditory feedback control of speech, is impaired in various neurological disorders ranging from stuttering to aphasia; however, the underlying neural mechanisms are still poorly understood. Computational models of speech motor control suggest that, during speech production, the brain uses an efference copy of the motor command to generate an internal estimate of the speech output. When actual feedback differs from this internal estimate, an error signal is generated to correct the internal estimate and update the motor commands necessary to produce the intended speech. We localized the auditory error signal using electrocorticographic recordings from neurosurgical participants during a delayed auditory feedback (DAF) paradigm. In this task, participants heard their own voice with a time delay as they produced words and sentences (similar to an echo on a conference call), a manipulation well known to disrupt fluency by causing slow and stutter-like speech in humans. We observed a significant response enhancement in auditory cortex that scaled with the duration of the feedback delay, indicating an auditory speech error signal. Immediately following auditory cortex, dorsal precentral gyrus (dPreCG), a region that had not previously been implicated in auditory feedback processing, exhibited a markedly similar response enhancement, suggesting a tight coupling between the 2 regions. Critically, response enhancement in dPreCG occurred only during articulation of long utterances, due to a continuous mismatch between produced speech and reafferent feedback. These results suggest that dPreCG plays an essential role in processing auditory error signals during speech production to maintain fluency.
Hearing one’s own voice is critical for fluent speech production, allowing detection and correction of vocalization errors in real time. This study shows that the dorsal precentral gyrus is a critical component of a cortical network that monitors auditory feedback to produce fluent speech; this region is engaged specifically when speech production is effortful during articulation of long utterances.
Affiliation(s)
- Muge Ozker
- Department of Neurology, New York University School of Medicine, New York, New York, United States of America
- Werner Doyle
- Department of Neurosurgery, New York University School of Medicine, New York, New York, United States of America
- Orrin Devinsky
- Department of Neurology, New York University School of Medicine, New York, New York, United States of America
- Adeen Flinker
- Department of Neurology, New York University School of Medicine, New York, New York, United States of America
- Department of Biomedical Engineering, New York University School of Engineering, New York, New York, United States of America
7. Chon H, Jackson ES, Kraft SJ, Ambrose NG, Loucks TM. Deficit or Difference? Effects of Altered Auditory Feedback on Speech Fluency and Kinematic Variability in Adults Who Stutter. J Speech Lang Hear Res 2021;64:2539-2556. PMID: 34153192; PMCID: PMC8632509; DOI: 10.1044/2021_jslhr-20-00606
Abstract
Purpose: The purpose of this study was to test whether adults who stutter (AWS) display a different range of sensitivity to delayed auditory feedback (DAF). Two experiments were conducted to assess the fluency of AWS under long-latency DAF and to test the effect of short-latency DAF on speech kinematic variability in AWS.
Method: In Experiment 1, 15 AWS performed a conversational speaking task under nonaltered auditory feedback and 250-ms DAF. The rates of stuttering-like disfluencies, other disfluencies, and speech errors, as well as articulation rate, were compared. In Experiment 2, 13 AWS and 15 adults who do not stutter (AWNS) read three utterances under four auditory feedback conditions: nonaltered auditory feedback, amplified auditory feedback, 25-ms DAF, and 50-ms DAF. Across-utterance kinematic variability (spatiotemporal index) and within-utterance variability (percent determinism and stability) were compared between groups.
Results: In Experiment 1, under 250-ms DAF, the rate of stuttering-like disfluencies and speech errors increased significantly, while articulation rate decreased significantly in AWS. In Experiment 2, AWS exhibited higher kinematic variability than AWNS across the feedback conditions. Under 25-ms DAF, the spatiotemporal index of AWS decreased significantly compared to the other feedback conditions. AWS showed lower overall percent determinism than AWNS, but their percent determinism increased under 50-ms DAF to approximate that of AWNS.
Conclusions: Auditory feedback manipulations can alter speech fluency and kinematic variability in AWS. Longer-latency auditory feedback delays induce speech disruptions, while subtle auditory feedback manipulations potentially benefit speech motor control. Both AWS and AWNS are susceptible to auditory feedback during speech production, but AWS appear to exhibit a distinct continuum of sensitivity.
Affiliation(s)
- HeeCheong Chon
- Department of Speech-Language Pathology, Chosun University, Gwangju, South Korea
- Eric S. Jackson
- Department of Communicative Sciences and Disorders, New York University, NY
- Shelly Jo Kraft
- Department of Communication Sciences and Disorders, Wayne State University, Detroit, MI
- Nicoline G. Ambrose
- Department of Speech and Hearing Science, University of Illinois at Urbana–Champaign
- Torrey M. Loucks
- Department of Communication Sciences and Disorders, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, Canada