1
Alemi R, Wolfe J, Neumann S, Manning J, Hanna L, Towler W, Wilson C, Bien A, Miller S, Schafer E, Gemignani J, Koirala N, Gracco VL, Deroche M. Motor Processing in Children With Cochlear Implants as Assessed by Functional Near-Infrared Spectroscopy. Percept Mot Skills 2024; 131:74-105. [PMID: 37977135] [PMCID: PMC10863375] [DOI: 10.1177/00315125231213167]
Abstract
Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing, but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activity within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and we sought to understand whether these responses were related to language development. Participants were 75 school-aged children, including 50 with CI (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain as children squeezed the back triggers of a joystick that either did or did not vibrate with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (supposedly an irrelevant region) was deactivated in this task, particularly for children with CI who had good language skills compared to those with CI who had language delays. The presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS to examine cognitive functions related to language in children with CI.
Affiliation(s)
- Razieh Alemi
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Jace Wolfe
- Oberkotter Foundation, Oklahoma City, OK, USA
- Sara Neumann
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Jacy Manning
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Will Towler
- Hearts for Hearing Foundation, Oklahoma City, OK, USA
- Caleb Wilson
- Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Alexander Bien
- Department of Otolaryngology-Head & Neck Surgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Sharon Miller
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Erin Schafer
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton, TX, USA
- Jessica Gemignani
- Department of Developmental and Social Psychology, University of Padua, Padova, Italy
- Mickael Deroche
- Department of Psychology, Concordia University, Montreal, QC, Canada
2
Alemi R, Wolfe J, Neumann S, Manning J, Towler W, Koirala N, Gracco VL, Deroche M. Audiovisual integration in children with cochlear implants revealed through EEG and fNIRS. Brain Res Bull 2023; 205:110817. [PMID: 37989460] [DOI: 10.1016/j.brainresbull.2023.110817]
Abstract
Sensory deprivation can offset the balance of audio versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL) comparable to those of NH children, while 24 others had low language abilities (LL). In EEG data, visual-evoked potentials were captured in occipital regions, in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory-evoked potentials were captured in response to A and AV stimuli and reflected a differential treatment of the two syllables but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In fNIRS data, each modality induced a corresponding activity in visual or auditory regions, but no group difference was observed in A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CI. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition to exhibit good language and literacy.
Affiliation(s)
- Razieh Alemi
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada.
- Jace Wolfe
- Oberkotter Foundation, Oklahoma City, OK, USA
- Sara Neumann
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Will Towler
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Nabin Koirala
- Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Mickael Deroche
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
3
Koirala N, Deroche MLD, Wolfe J, Neumann S, Bien AG, Doan D, Goldbeck M, Muthuraman M, Gracco VL. Dynamic networks differentiate the language ability of children with cochlear implants. Front Neurosci 2023; 17:1141886. [PMID: 37409105] [PMCID: PMC10318154] [DOI: 10.3389/fnins.2023.1141886]
Abstract
Background Cochlear implantation (CI) in prelingually deafened children has been shown to be an effective intervention for developing language and reading skills. However, a substantial proportion of the children receiving CIs struggle with language and reading. The current study, one of the first to implement electrical source imaging in a CI population, was designed to identify the neural underpinnings in two groups of CI children with good and poor language and reading skills. Methods High-density electroencephalography (EEG) data were obtained under a resting-state condition from 75 children: 50 with CIs, having either good (HL) or poor (LL) language skills, and 25 normal-hearing (NH) children. We identified coherent sources using dynamic imaging of coherent sources (DICS) and estimated their effective connectivity through time-frequency causality estimation based on temporal partial directed coherence (TPDC) in the two CI groups compared to a cohort of age- and gender-matched NH children. Findings Sources with higher coherence amplitude were observed in three frequency bands (alpha, beta, and gamma) for the CI groups when compared to normal-hearing children. The two groups of CI children with good (HL) and poor (LL) language ability exhibited not only different cortical and subcortical source profiles but also distinct effective connectivity between them. Additionally, a support vector machine (SVM) algorithm using these sources and their connectivity patterns for each CI group across the three frequency bands was able to predict language and reading scores with high accuracy. Interpretation The increased coherence in the CI groups suggests that oscillatory activity in some brain areas became more strongly coupled compared to the NH group. Moreover, the different sources, their connectivity patterns, and their association with language and reading skill in both groups suggest a compensatory adaptation that either facilitated or impeded language and reading development. The neural differences in the two groups of CI children may reflect potential biomarkers for predicting outcome success in CI children.
Affiliation(s)
- Nabin Koirala
- Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, United States
- Jace Wolfe
- Hearts for Hearing Foundation, Oklahoma City, OK, United States
- Sara Neumann
- Hearts for Hearing Foundation, Oklahoma City, OK, United States
- Alexander G. Bien
- Department of Otolaryngology – Head and Neck Surgery, University of Oklahoma Medical Center, Oklahoma City, OK, United States
- Derek Doan
- University of Oklahoma College of Medicine, Oklahoma City, OK, United States
- Michael Goldbeck
- University of Oklahoma College of Medicine, Oklahoma City, OK, United States
- Muthuraman Muthuraman
- Department of Neurology, Neural Engineering with Signal Analytics and Artificial Intelligence (NESA-AI), Universitätsklinikum Würzburg, Würzburg, Germany
- Vincent L. Gracco
- Child Study Center, Yale School of Medicine, Yale University, New Haven, CT, United States
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
4
Deroche MLD, Wolfe J, Neumann S, Manning J, Towler W, Alemi R, Bien AG, Koirala N, Hanna L, Henry L, Gracco VL. Auditory evoked response to an oddball paradigm in children wearing cochlear implants. Clin Neurophysiol 2023; 149:133-145. [PMID: 36965466] [DOI: 10.1016/j.clinph.2023.02.179]
Abstract
OBJECTIVE Although children with cochlear implants (CI) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs who had age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
Affiliation(s)
- Mickael L D Deroche
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada.
- Jace Wolfe
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Sara Neumann
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- William Towler
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Razieh Alemi
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Alexander G Bien
- University of Oklahoma College of Medicine, Otolaryngology, 800 Stanton L Young Blvd., Oklahoma City, OK 73117, USA
- Nabin Koirala
- Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Lauren Henry
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
5
Heffner CC, Myers EB, Gracco VL. Impaired perceptual phonetic plasticity in Parkinson's disease. J Acoust Soc Am 2022; 152:511. [PMID: 35931533] [PMCID: PMC9299957] [DOI: 10.1121/10.0012884]
Abstract
Parkinson's disease (PD) is a neurodegenerative condition primarily associated with its motor consequences. Although much of the work within the speech domain has focused on PD's consequences for production, people with PD have been shown to differ from age-matched controls in the perception of emotional prosody, loudness, and speech rate. The current study targeted the effect of PD on perceptual phonetic plasticity, defined as the ability to learn and adjust to novel phonetic input, in both second language and native language contexts. People with PD were compared to age-matched controls (and, for three of the studies, a younger control population) in tasks of explicit non-native speech learning and adaptation to variation in native speech (compressed rate, accent, and the use of timing information within a sentence to parse ambiguities). The participants with PD showed significantly worse performance on the compressed-rate task and used the duration of an ambiguous fricative to segment speech to a lesser degree than age-matched controls, indicating impaired speech perceptual abilities. Exploratory comparisons also showed that people with PD who were on medication performed significantly worse than their peers off medication on those two tasks and the task of explicit non-native learning.
Affiliation(s)
- Christopher C Heffner
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut 06269, USA
- Emily B Myers
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, Connecticut 06269, USA
6
Gracco VL, Sares AG, Koirala N. Structural brain network topological alterations in stuttering adults. Brain Commun 2022; 4:fcac058. [PMID: 35368614] [PMCID: PMC8971894] [DOI: 10.1093/braincomms/fcac058]
Abstract
Persistent developmental stuttering is a speech disorder that primarily affects normal speech fluency but encompasses a complex set of symptoms ranging from reduced sensorimotor integration to socioemotional challenges. Here, we investigated the whole-brain structural connectome and its topological alterations in adults who stutter. Diffusion-weighted imaging data from 33 subjects (13 adults who stutter and 20 fluent speakers) were obtained along with a stuttering severity evaluation. The structural brain network properties were analyzed using network-based statistics and graph theoretical measures, particularly focusing on community structure, network hubs, and controllability. Bayesian power estimation was used to assess the reliability of the structural connectivity differences by examining the effect size. The analysis revealed reliable and widespread decreases in connectivity for adults who stutter in regions associated with sensorimotor, cognitive, emotional, and memory-related functions. The community detection algorithms revealed different subnetworks for fluent speakers and adults who stutter, indicating considerable network adaptation in adults who stutter. Average and modal controllability differed between groups in a subnetwork encompassing frontal brain regions and parts of the basal ganglia.
The results revealed extensive structural network alterations and substantial adaptation in neural architecture in adults who stutter, well beyond the sensorimotor network. These findings highlight the impact of the neurodevelopmental effects of persistent stuttering on neural organization and the importance of examining the full structural connectome and the network alterations that underscore the behavioral phenotype.
Affiliation(s)
- Vincent L. Gracco
- Haskins Laboratories, New Haven, CT, USA
- School of Communication Sciences & Disorders, McGill University, Montreal, Canada
7
Gilbert AC, Lee JG, Coulter K, Wolpert MA, Kousaie S, Gracco VL, Klein D, Titone D, Phillips NA, Baum SR. Spoken Word Segmentation in First and Second Language: When ERP and Behavioral Measures Diverge. Front Psychol 2021; 12:705668. [PMID: 34603133] [PMCID: PMC8485064] [DOI: 10.3389/fpsyg.2021.705668]
Abstract
Previous studies of word segmentation in a second language have yielded equivocal results. This is not surprising given the differences in the bilingual experience and proficiency of the participants and the varied experimental designs that have been used. The present study tried to account for a number of relevant variables to determine whether bilingual listeners are able to use native-like word segmentation strategies. Here, 61 French-English bilingual adults who varied in L1 (French or English) and language dominance took part in an audiovisual integration task while event-related brain potentials (ERPs) were recorded. Participants listened to sentences built around ambiguous syllable strings (which could be disambiguated based on different word segmentation patterns), during which an illustration was presented on screen. Participants were asked to determine whether or not the illustration was related to the heard utterance. Each participant listened to both English and French utterances, providing segmentation patterns that included both their native language (used as reference) and their L2. Interestingly, different patterns were observed in the event-related potential (online) and behavioral (offline) measures, suggesting that L2 participants showed signs of being able to adapt their segmentation strategies to the specifics of the L2 (online ERP results), but that the extent of the adaptation varied as a function of listeners' language experience (offline behavioral results).
Affiliation(s)
- Annie C Gilbert
- School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Jasmine G Lee
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, QC, Canada
- Kristina Coulter
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Department of Psychology, Concordia University, Montréal, QC, Canada
- Center for Research in Human Development, Montréal, QC, Canada
- Max A Wolpert
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Integrated Program in Neuroscience, McGill University, Montréal, QC, Canada
- Shanna Kousaie
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Vincent L Gracco
- School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada
- Haskins Laboratories, Yale University, New Haven, CT, United States
- Denise Klein
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- Debra Titone
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Department of Psychology, McGill University, Montréal, QC, Canada
- Natalie A Phillips
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
- Department of Psychology, Concordia University, Montréal, QC, Canada
- Center for Research in Human Development, Montréal, QC, Canada
- Shari R Baum
- School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada
- Center for Research on Brain, Language and Music, Montréal, QC, Canada
8
Abstract
Recent studies have demonstrated that a listener's auditory speech perception can be modulated by somatosensory input applied to the facial skin, suggesting that perception is an embodied process. However, speech perception is a multisensory process involving both the auditory and visual modalities. It is unknown whether, and to what extent, somatosensory stimulation of the facial skin modulates audio-visual speech perception. If speech perception is an embodied process, then somatosensory stimulation applied to the perceiver should influence audio-visual speech processing. Using the McGurk effect (the perceptual illusion that occurs when a sound is paired with the visual representation of a different sound, resulting in the perception of a third sound), we tested this prediction with a simple behavioral paradigm and at the neural level using event-related potentials (ERPs) and their cortical sources. We recorded ERPs from 64 scalp sites in response to congruent and incongruent audio-visual speech, randomly presented with and without somatosensory stimulation associated with facial skin deformation. Subjects judged whether the production was /ba/ or not under all stimulus conditions. In the congruent audio-visual condition, subjects identified the sound as /ba/, but not in the incongruent condition, consistent with the McGurk effect. Concurrent somatosensory stimulation improved participants' ability to correctly identify the production as /ba/ relative to the non-somatosensory condition, in both the congruent and incongruent conditions. The ERP in response to the somatosensory stimulation for the incongruent condition reliably diverged 220 ms after stimulation onset. Cortical sources were estimated around the left anterior temporal gyrus, the right middle temporal gyrus, the right posterior superior temporal lobe, and the right occipital region.
The results demonstrate a clear multisensory convergence of somatosensory and audio-visual processing in both behavioral and neural measures, consistent with the perspective that speech perception is a self-referenced, sensorimotor process.
Affiliation(s)
- Takayuki Ito
- University Grenoble-Alpes, CNRS, Grenoble-INP, GIPSA-Lab, Saint Martin D'heres Cedex, France; Haskins Laboratories, New Haven, CT, USA.
- Vincent L Gracco
- Haskins Laboratories, New Haven, CT, USA; McGill University, Montréal, QC, Canada
9
Ito T, Ohashi H, Gracco VL. Changes of orofacial somatosensory attenuation during speech production. Neurosci Lett 2020; 730:135045. [PMID: 32413541] [DOI: 10.1016/j.neulet.2020.135045]
Abstract
Modulation of auditory activity occurs before and during voluntary speech movement. However, it is unknown whether orofacial somatosensory input is modulated in the same manner. The current study examined whether somatosensory event-related potentials (ERPs) in response to facial skin stretch are changed during speech and nonspeech production tasks. Specifically, we compared ERP changes to somatosensory stimulation across different orofacial postures and speech utterances. Participants produced three different vowel sounds (voicing) or performed non-speech oral tasks in which they maintained a similar posture without voicing. ERPs were recorded from 64 scalp sites in response to the somatosensory stimulation under six task conditions (three vowels × voicing/posture) and compared to a resting baseline condition. The first negative peak for the vowel /u/ was reliably reduced from baseline in both the voicing and posturing tasks, but the other conditions did not differ. The second positive peak was reduced for all voicing tasks compared to the posturing tasks. The results suggest that the sensitivity of somatosensory ERPs to facial skin deformation is modulated by the task and that somatosensory processing during speaking may be modulated differently depending on phonetic identity.
Affiliation(s)
- Takayuki Ito
- Grenoble Alpes University, CNRS, Grenoble INP, GIPSA-lab, 11 rue des Mathématiques, Grenoble Campus BP46, F-38402 Saint Martin D'heres Cedex France; Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA.
- Hiroki Ohashi
- Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA
- Vincent L Gracco
- Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA; McGill University, 2001 Avenue McGill College, Montréal, QC H3A 1G1, Canada
10
Sares AG, Deroche MLD, Ohashi H, Shiller DM, Gracco VL. Neural Correlates of Vocal Pitch Compensation in Individuals Who Stutter. Front Hum Neurosci 2020; 14:18. [PMID: 32161525] [PMCID: PMC7053555] [DOI: 10.3389/fnhum.2020.00018]
Abstract
Stuttering is a disorder that impacts the smooth flow of speech production and is associated with a deficit in sensorimotor integration. In a previous experiment, individuals who stutter were able to vocally compensate for pitch shifts in their auditory feedback, but they exhibited more variability in the timing of their corrective responses. In the current study, we focused on the neural correlates of the task using functional MRI. Participants produced a vowel sound in the scanner while hearing their own voice in real time through headphones. On some trials, the audio was shifted up or down in pitch, eliciting a corrective vocal response. Contrasting pitch-shifted vs. unshifted trials revealed bilateral superior temporal activation over all the participants. However, the groups differed in the activation of middle temporal gyrus and superior frontal gyrus [Brodmann area 10 (BA 10)], with individuals who stutter displaying deactivation while controls displayed activation. In addition to the standard univariate general linear modeling approach, we employed a data-driven technique (independent component analysis, or ICA) to separate task activity into functional networks. Among the networks most correlated with the experimental time course, there was a combined auditory-motor network in controls, but the two networks remained separable for individuals who stuttered. The decoupling of these networks may account for temporal variability in pitch compensation reported in our previous work, and supports the idea that neural network coherence is disturbed in the stuttering brain.
Affiliation(s)
- Anastasia G Sares
- Speech Motor Control Lab, Integrated Program in Neuroscience and School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Centre for Research on Brain, Language, and Music, Montreal, QC, Canada
- Mickael L D Deroche
- Centre for Research on Brain, Language, and Music, Montreal, QC, Canada
- Laboratory for Hearing and Cognition, Department of Psychology, Concordia University, Montreal, QC, Canada
- Douglas M Shiller
- Centre for Research on Brain, Language, and Music, Montreal, QC, Canada
- École d'orthophonie et d'audiologie, Université de Montréal, Montreal, QC, Canada
- Vincent L Gracco
- Speech Motor Control Lab, Integrated Program in Neuroscience and School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Centre for Research on Brain, Language, and Music, Montreal, QC, Canada
- Haskins Laboratories, New Haven, CT, United States
11
Mollaei F, Shiller DM, Baum SR, Gracco VL. The Relationship Between Speech Perceptual Discrimination and Speech Production in Parkinson's Disease. J Speech Lang Hear Res 2019; 62:4256-4268. [PMID: 31738857] [DOI: 10.1044/2019_jslhr-s-18-0425]
Abstract
Purpose We recently demonstrated that individuals with Parkinson's disease (PD) respond differentially to specific altered auditory feedback parameters during speech production. Participants with PD respond more robustly to pitch and less robustly to formant manipulations compared to control participants. In this study, we investigated whether differences in perceptual processing may in part underlie these compensatory differences in speech production. Methods Pitch and formant feedback manipulations were presented under 2 conditions: production and listening. In the production condition, 15 participants with PD and 15 age- and gender-matched healthy control participants judged whether their own speech output was manipulated in real time. During the listening task, participants judged whether paired tokens of their previously recorded speech samples were the same or different. Results Under listening, 1st formant manipulation discrimination was significantly reduced for the PD group compared to the control group. There was a trend toward better discrimination of pitch in the PD group, but the group difference was not significant. Under the production condition, the ability of participants with PD to identify pitch manipulations was greater than that of the controls. Conclusion The findings suggest perceptual processing differences associated with acoustic parameters of fundamental frequency and 1st formant perturbations in PD. These findings extend our previous results, indicating that different patterns of compensation to pitch and 1st formant shifts may reflect a combination of sensory and motor mechanisms that are differentially influenced by basal ganglia dysfunction.
Affiliation(s)
- Fatemeh Mollaei
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada
- Douglas M Shiller
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Quebec, Canada
- Shari R Baum
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada
- Vincent L Gracco
- Centre for Research on Brain, Language and Music, Montréal, Quebec, Canada
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada
- Haskins Laboratories, New Haven, CT
12
Sengupta R, Yaruss JS, Loucks TM, Gracco VL, Pelczarski K, Nasir SM. Theta Modulated Neural Phase Coherence Facilitates Speech Fluency in Adults Who Stutter. Front Hum Neurosci 2019; 13:394. [PMID: 31798431] [PMCID: PMC6878001] [DOI: 10.3389/fnhum.2019.00394]
Abstract
Adults who stutter (AWS) display altered patterns of neural phase coherence within the speech motor system preceding disfluencies. These altered patterns may distinguish fluent speech episodes from disfluent ones. Phase coherence is relevant to the study of stuttering because it reflects neural communication within brain networks. In this follow-up study, the oscillatory cortical dynamics preceding fluent speech in AWS and adults who do not stutter (AWNS) were examined during a single-word delayed reading task using electroencephalographic (EEG) techniques. Compared to AWNS, fluent speech preparation in AWS was characterized by a decrease in theta-gamma phase coherence and a corresponding increase in theta-beta coherence. Higher spectral power in the beta and gamma bands was also observed preceding fluent utterances by AWS. Overall, neural communication during speech planning was altered in AWS, providing novel evidence for atypical allocation of feedforward control even before fluent utterances.
Affiliation(s)
- Ranit Sengupta
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- J Scott Yaruss
- Department of Communicative Sciences and Disorders, Michigan State University, East Lansing, MI, United States
- Torrey M Loucks
- Department of Communication Sciences and Disorders, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada; Institute for Stuttering Treatment and Research, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Kristin Pelczarski
- School of Family Studies and Human Services, Kansas State University, Manhattan, KS, United States
- Sazzad M Nasir
- Haskins Laboratories, New Haven, CT, United States; Indiana Academy, Ball State University, Muncie, IN, United States
13
Sares AG, Deroche MLD, Shiller DM, Gracco VL. Adults who stutter and metronome synchronization: evidence for a nonspeech timing deficit. Ann N Y Acad Sci 2019; 1449:56-69. [PMID: 31144336] [DOI: 10.1111/nyas.14117]
Abstract
Speech timing deficits have been proposed as a causal factor in the disorder of stuttering. Whether individuals who stutter have deficits in nonspeech timing is a question that has been revisited often, with conflicting results. Here, we uncover subtle differences between adults who stutter and fluent speakers in a manual metronome synchronization task that included tempo changes. We used sensitive circular statistics to examine both asynchrony and consistency in motor production. While both groups displayed a classic negative mean asynchrony (tapping before the beat), individuals who stutter anticipated the beat even more than their fluent peers, and their consistency was particularly affected at slow tempi. Surprisingly, individuals who stutter did not have problems with interval correction at tempo changes. We also examined the influence of music experience on synchronization behavior in both groups. While music perception and training were related to synchronization behavior in fluent participants, these correlations were absent in the stuttering group; however, one measure of stuttering severity (self-rated severity) was negatively correlated with music training. Overall, we found subtle differences in paced auditory-motor synchronization in individuals who stutter, consistent with a timing problem that extends to nonspeech behavior.
Affiliation(s)
- Anastasia G Sares
- Integrated Program in Neuroscience, McGill University, Montréal, Quebec, Canada; Centre for Research on Brain, Language and Music, McGill University, Montréal, Quebec, Canada
- Mickael L D Deroche
- School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada; Centre for Research on Brain, Language and Music, McGill University, Montréal, Quebec, Canada
- Douglas M Shiller
- Centre for Research on Brain, Language and Music, McGill University, Montréal, Quebec, Canada; École d'orthophonie et d'audiologie, Université de Montréal, Montréal, Quebec, Canada
- Vincent L Gracco
- Integrated Program in Neuroscience, McGill University, Montréal, Quebec, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Quebec, Canada; Centre for Research on Brain, Language and Music, McGill University, Montréal, Quebec, Canada; Haskins Laboratories, New Haven, Connecticut
14
Bourguignon NJ, Gracco VL. A dual architecture for the cognitive control of language: Evidence from functional imaging and language production. Neuroimage 2019; 192:26-37. [PMID: 30831311] [DOI: 10.1016/j.neuroimage.2019.02.043]
Abstract
The relation between language processing and the cognitive control of thought and action is a widely debated issue in cognitive neuroscience. While recent research suggests a modular separation between a 'language system' for meaningful linguistic processing and a 'multiple-demand system' for cognitive control, other findings point to more integrated perspectives in which controlled language processing emerges from a division of labor between (parts of) the language system and (parts of) the multiple-demand system. We test here a dual approach to the cognitive control of language predicated on the notion of cognitive control as the combined contribution of a semantic control network (SCN) and a working memory network (WMN) supporting top-down manipulation of (lexico-)semantic information and the monitoring of information in verbal working memory, respectively. We reveal these networks in a large-scale coordinate-based meta-analysis contrasting functional imaging studies of verbal working memory vs. active judgments on (lexico-)semantic information and show the extent of their overlap with the multiple-demand system and the language system. Testing these networks' involvement in a functional imaging study of object naming and verb generation, we then show that SCN specializes in top-down retrieval and selection of (lexico-)semantic representations amongst competing alternatives, while WMN intervenes at a more general level of control modulated in part by the amount of competing responses available for selection. These results have implications in conceptualizing the neurocognitive architecture of language and cognitive control.
Affiliation(s)
- Nicolas J Bourguignon
- Psychological Sciences Department, University of Connecticut, Storrs, USA; Haskins Laboratories, New Haven, CT, USA
- Vincent L Gracco
- Haskins Laboratories, New Haven, CT, USA; School of Communication Sciences and Disorders, McGill University, Montreal, Canada
15
Deroche MLD, Gracco VL. Segregation of voices with single or double fundamental frequencies. J Acoust Soc Am 2019; 145:847. [PMID: 30823786] [DOI: 10.1121/1.5090107]
Abstract
In cocktail-party situations, listeners can use the fundamental frequency (F0) of a voice to segregate it from competitors, but other cues in speech could help, such as co-modulation of envelopes across frequency or more complex cues related to the semantic/syntactic content of the utterances. For simplicity, this (non-pitch) form of grouping is referred to as "articulatory." By creating a new type of speech with two steady F0s, we examined how these two forms of segregation compete: articulatory grouping would bind the partials of a double-F0 source together, whereas harmonic segregation would tend to split them into two subsets. In experiment 1, maskers were two same-male sentences. Speech reception thresholds were high in this task (in the vicinity of 0 dB), and harmonic segregation behaved as though double-F0 stimuli were two independent sources. This was not the case in experiment 2, where maskers were speech-shaped complexes (buzzes). First, double-F0 targets were immune to the masking of a single-F0 buzz matching one of the two target F0s. Second, double-F0 buzzes were particularly effective at masking a single-F0 target matching one of the two buzz F0s. In conclusion, the strength of F0 segregation appears to depend on whether the masker is speech or not.
Affiliation(s)
- Mickael L D Deroche
- Centre for Research on Brain, Language and Music, McGill University, 3640 rue de la Montagne, Montreal, H3G 2A8, Canada
- Vincent L Gracco
- Haskins Laboratories, 300 George Street, New Haven, Connecticut 06511, USA
16
Sares AG, Deroche MLD, Shiller DM, Gracco VL. Timing variability of sensorimotor integration during vocalization in individuals who stutter. Sci Rep 2018; 8:16340. [PMID: 30397215] [PMCID: PMC6218511] [DOI: 10.1038/s41598-018-34517-1]
Abstract
Persistent developmental stuttering affects close to 1% of adults and is thought to be a problem of sensorimotor integration. Previous research has demonstrated that individuals who stutter respond differently to changes in their auditory feedback while speaking. Here we explore a number of changes that accompany alterations in the feedback of pitch during vocal production. Participants sustained the vowel /a/ while hearing on-line feedback of their own voice through headphones. In some trials, feedback was briefly shifted up or down by 100 cents to simulate a vocal production error. As previously shown, participants compensated for the auditory pitch change by altering their vocal production in the opposite direction of the shift. The average compensatory response was smaller for adults who stuttered than for adult controls. Detailed analyses revealed that adults who stuttered had fewer trials with a robust corrective response, and that within the trials showing compensation, the timing of their responses was more variable. These results support the idea that dysfunctional sensorimotor integration in stuttering is characterized by timing variability, reflecting reduced coupling of the auditory and speech motor systems.
Affiliation(s)
- Anastasia G Sares
- Integrated Program in Neuroscience and School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada
- Centre for Research on Brain, Language, and Music, McGill University, Montréal, QC, Canada
- Mickael L D Deroche
- Integrated Program in Neuroscience and School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada
- Centre for Research on Brain, Language, and Music, McGill University, Montréal, QC, Canada
- Douglas M Shiller
- École d'orthophonie et d'audiologie, Université de Montréal, Montréal, QC, Canada
- Centre for Research on Brain, Language, and Music, McGill University, Montréal, QC, Canada
- Vincent L Gracco
- Integrated Program in Neuroscience and School of Communication Sciences and Disorders, McGill University, Montréal, QC, Canada
- Haskins Laboratories, New Haven, CT, USA
- Centre for Research on Brain, Language, and Music, McGill University, Montréal, QC, Canada
17
Bourguignon NJ, Ohashi H, Nguyen D, Gracco VL. The neural dynamics of competition resolution for language production in the prefrontal cortex. Hum Brain Mapp 2018; 39:1391-1402. [PMID: 29265695] [PMCID: PMC5807142] [DOI: 10.1002/hbm.23927]
Abstract
Previous research suggests a pivotal role of the prefrontal cortex (PFC) in word selection during tasks of confrontation naming (CN) and verb generation (VG), both of which feature varying degrees of competition between candidate responses. However, discrepancies in prefrontal activity have also been reported between the two tasks, in particular more widespread and intense activation in VG extending into (left) ventrolateral PFC, the functional significance of which remains unclear. We propose that these variations reflect differences in competition resolution processes tied to distinct underlying lexico-semantic operations: Although CN involves selecting lexical entries out of limited sets of alternatives, VG requires exploration of possible semantic relations not readily evident from the object itself, requiring prefrontal areas previously shown to be recruited in top-down retrieval of information from lexico-semantic memory. We tested this hypothesis through combined independent component analysis of functional imaging data and information-theoretic measurements of variations in selection competition associated with participants' performance in overt CN and VG tasks. Selection competition during CN engaged the anterior insula and surrounding opercular tissue, while competition during VG recruited additional activity of left ventrolateral PFC. These patterns remained after controlling for participants' speech onset latencies indicative of possible task differences in mental effort. These findings have implications for understanding the neural-computational dynamics of cognitive control in language production and how it relates to the functional architecture of adaptive behavior.
Affiliation(s)
- Don Nguyen
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Vincent L Gracco
- Haskins Laboratories, New Haven, Connecticut
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- School of Communication Sciences and Disorders, McGill University, Montreal, Canada
18
Misaghi E, Zhang Z, Gracco VL, De Nil LF, Beal DS. White matter tractography of the neural network for speech-motor control in children who stutter. Neurosci Lett 2018; 668:37-42. [PMID: 29309858] [DOI: 10.1016/j.neulet.2018.01.009]
Abstract
Stuttering is a neurodevelopmental speech disorder with a phenotype characterized by speech sound repetitions, prolongations and silent blocks during speech production. Developmental stuttering affects 1% of the population and 5% of children. Neuroanatomical abnormalities in the major white matter tracts, including the arcuate fasciculus, corpus callosum, corticospinal, and frontal aslant tracts (FAT), are associated with the disorder in adults who stutter but are less well studied in children who stutter (CWS). We used deterministic tractography to assess the structural connectivity of the neural network for speech production in CWS and controls. CWS had higher fractional anisotropy and axial diffusivity in the right FAT than controls. Our findings support the involvement of the corticostriatal network early in persistent developmental stuttering.
Affiliation(s)
- Ehsan Misaghi
- Neuroscience and Mental Health Institute, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, AB, Canada; Institute for Stuttering Treatment and Research, Department of Communication Sciences and Disorders, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Zhaoran Zhang
- College of Life Sciences, Sichuan University, Chengdu, Sichuan, China
- Vincent L Gracco
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada; Haskins Laboratories, New Haven, CT, USA
- Luc F De Nil
- Department of Speech-Language Pathology, Faculty of Medicine, University of Toronto, Toronto, ON, Canada
- Deryk S Beal
- Department of Speech-Language Pathology, Faculty of Medicine, University of Toronto, Toronto, ON, Canada; Bloorview Research Institute, Holland Bloorview Kids Rehabilitation Hospital, Toronto, ON, Canada
19
Deroche MLD, Nguyen DL, Gracco VL. Modulation of Speech Motor Learning with Transcranial Direct Current Stimulation of the Inferior Parietal Lobe. Front Integr Neurosci 2017; 11:35. [PMID: 29326563] [PMCID: PMC5737029] [DOI: 10.3389/fnint.2017.00035]
Abstract
The inferior parietal lobe (IPL) is a region of the cortex believed to participate in speech motor learning. In this study, we investigated whether transcranial direct current stimulation (tDCS) of the IPL could influence the extent to which healthy adults (1) adapted to a sensory alteration of their own auditory feedback, and (2) changed their perceptual representation. Seventy subjects completed three tasks: a baseline perceptual task that located the phonetic boundary between the vowels /e/ and /a/; a sensorimotor adaptation task in which subjects produced the word "head" under conditions of altered or unaltered feedback; and a post-adaptation perceptual task identical to the first. Subjects were allocated to four groups that differed in current polarity and feedback manipulation. Subjects who received anodal tDCS to their IPL (presumably increasing cortical excitability) lowered their first formant frequency (F1) by 10% in opposition to the upward shift in F1 in their auditory feedback. Subjects who received the same stimulation with unaltered feedback did not change their production. Subjects who received cathodal tDCS to their IPL (presumably decreasing cortical excitability) showed a 5% adaptation to the F1 alteration, similar to subjects who received sham tDCS. A subset of subjects returned a few days later to repeat the same protocol without tDCS, enabling assessment of any facilitatory effects of the previous stimulation; all subjects exhibited a 5% adaptation effect. In addition, across all subjects and both recording sessions, the phonetic boundary shifted toward the repeated vowel /e/, consistent with the selective adaptation effect, and a correlation between perception and production suggested that anodal tDCS had enhanced this perceptual shift. In conclusion, we demonstrated that anodal tDCS could (1) enhance motor adaptation to a sensory alteration and (2) potentially affect the perceptual representation of those sounds, but we failed to demonstrate the reverse effect with the cathodal configuration. Overall, tDCS of the left IPL can be used to enhance speech performance, but only under conditions in which new or adaptive learning is required.
Affiliation(s)
- Mickael L D Deroche
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Don L Nguyen
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Vincent L Gracco
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada; Haskins Laboratories, New Haven, CT, United States
20
Deroche MLD, Limb CJ, Chatterjee M, Gracco VL. Similar abilities of musicians and non-musicians to segregate voices by fundamental frequency. J Acoust Soc Am 2017; 142:1739. [PMID: 29092612] [PMCID: PMC5626570] [DOI: 10.1121/1.5005496]
Abstract
Musicians can sometimes achieve better speech recognition in noisy backgrounds than non-musicians, a phenomenon referred to as the "musician advantage effect." In addition, musicians are known to possess a finer sense of pitch than non-musicians. The present study examined the hypothesis that the latter fact could explain the former. Four experiments measured speech reception threshold for a target voice against speech or non-speech maskers. Although differences in fundamental frequency (ΔF0s) were shown to be beneficial even when presented to opposite ears (experiment 1), the authors' attempt to maximize their use by directing the listener's attention to the target F0 led to unexpected impairments (experiment 2) and the authors' attempt to hinder their use by generating uncertainty about the competing F0s led to practically negligible effects (experiments 3 and 4). The benefits drawn from ΔF0s showed surprisingly little malleability for a cue that can be used in the complete absence of energetic masking. In half of the experiments, musicians obtained better thresholds than non-musicians, particularly in speech-on-speech conditions, but they did not reliably obtain larger ΔF0 benefits. Thus, the data do not support the hypothesis that the musician advantage effect is based on greater ability to exploit ΔF0s.
Affiliation(s)
- Mickael L D Deroche
- Centre for Research on Brain, Language and Music, McGill University, 3640 rue de la Montagne, Montreal, H3G 2A8, Canada
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, 2233 Post Street, San Francisco, California 94115, USA
- Monita Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Vincent L Gracco
- Haskins Laboratories, 300 George Street, New Haven, Connecticut 06511, USA
21
van de Vorst R, Gracco VL. Atypical non-verbal sensorimotor synchronization in adults who stutter may be modulated by auditory feedback. J Fluency Disord 2017; 53:14-25. [PMID: 28870331] [DOI: 10.1016/j.jfludis.2017.05.004]
Abstract
PURPOSE To investigate whether non-verbal sensorimotor synchronization abilities in adult individuals who stutter (IWS) differ from those of non-stuttering controls (NS) under various performance conditions (tempo, auditory feedback, use of hands [single/both], and rhythm). METHODS Participants were 11 IWS (5 males, 6 females; mean age = 25.8, SD = 8.7) and 11 age- and gender-matched controls (mean age = 24.4, SD = 8.4). During the experiment, participants prepared three melodies and subsequently performed them with a metronome at different rates and under two auditory feedback modalities (non-altered and suppressed). For each task and condition, we tracked timing asynchrony relative to the steady metronome beat. RESULTS AND CONCLUSIONS Overall, IWS displayed significantly higher timing asynchrony. Of all conditions, auditory feedback distinguished IWS from NS most strongly, with a subgroup of IWS benefitting significantly from the absence of auditory feedback. In addition, IWS showed a non-significant trend toward higher negative mean asynchrony (NMA), were more affected by the slower rate and increased rhythmic complexity, and occasionally showed poorer beat perception. These results suggest aberrant timing of sensorimotor network interaction associated with the origin of developmental stuttering.
Affiliation(s)
- Robert van de Vorst
- Centre for Research on Brain, Language and Music, 3640 rue de la Montagne, Montreal, H3G 2A8, Canada; School of Communication Sciences and Disorders, McGill University, 2001 McGill College Avenue, Montreal, Quebec, H3A 1G1, Canada
- Vincent L Gracco
- Centre for Research on Brain, Language and Music, 3640 rue de la Montagne, Montreal, H3G 2A8, Canada; School of Communication Sciences and Disorders, McGill University, 2001 McGill College Avenue, Montreal, Quebec, H3A 1G1, Canada; Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA
22
van den Bunt MR, Groen MA, Ito T, Francisco AA, Gracco VL, Pugh KR, Verhoeven L. Increased Response to Altered Auditory Feedback in Dyslexia: A Weaker Sensorimotor Magnet Implied in the Phonological Deficit. J Speech Lang Hear Res 2017; 60:654-667. [PMID: 28257585] [PMCID: PMC5544192] [DOI: 10.1044/2016_jslhr-l-16-0201]
Abstract
PURPOSE The purpose of this study was to examine whether developmental dyslexia (DD) is characterized by deficiencies in speech sensory and motor feedforward and feedback mechanisms, which are involved in the modulation of phonological representations. METHOD A total of 42 adult native speakers of Dutch (22 adults with DD; 20 participants who were typically reading controls) were asked to produce /bep/ while the first formant (F1) of the /e/ was not altered (baseline), increased (ramp), held at maximal perturbation (hold), and not altered again (after-effect). The F1 of the produced utterance was measured for each trial and used for statistical analyses. The measured F1s produced during each phase were entered in a linear mixed-effects model. RESULTS Participants with DD adapted more strongly during the ramp phase and returned to baseline to a lesser extent when feedback was back to normal (after-effect phase) when compared with the typically reading group. In this study, a faster deviation from baseline during the ramp phase, a stronger adaptation response during the hold phase, and a slower return to baseline during the after-effect phase were associated with poorer reading and phonological abilities. CONCLUSION The data of the current study are consistent with the notion that the phonological deficit in DD is associated with a weaker sensorimotor magnet for phonological representations.
Affiliation(s)
- Mark R van den Bunt
- Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
- Margriet A Groen
- Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
- Takayuki Ito
- Haskins Laboratories, Yale University, New Haven, CT; Université Grenoble Alpes, GIPSA-Lab, Grenoble, France; Centre National de la Recherche Scientifique (CNRS), Grenoble Images Parole Signal Automatique (GIPSA) Lab, Grenoble, France
- Ana A Francisco
- Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
- Vincent L Gracco
- Haskins Laboratories, Yale University, New Haven, CT; Centre for Research on Brain, Language & Music, McGill University, Montréal, Canada
- Ken R Pugh
- Haskins Laboratories, Yale University, New Haven, CT
- Ludo Verhoeven
- Behavioural Science Institute, Radboud University, Nijmegen, the Netherlands
23
Brajot FX, Shiller DM, Gracco VL. Autophonic loudness perception in Parkinson's disease. J Acoust Soc Am 2016; 139:1364-1371. [PMID: 27036273] [PMCID: PMC4818272] [DOI: 10.1121/1.4944569]
Abstract
The relationship between the intensity and loudness of self-generated (autophonic) speech remains invariant despite changes in auditory feedback, indicating that non-auditory processes contribute to this form of perception. The aim of the current study was to determine if the speech perception deficit associated with Parkinson's disease may be linked to deficits in such processes. Loudness magnitude estimates were obtained from parkinsonian and non-parkinsonian subjects across four separate conditions: self-produced speech under normal, perturbed, and masked auditory feedback, as well as auditory presentation of pre-recorded speech (passive listening). Slopes and intercepts of loudness curves were compared across groups and conditions. A significant difference in slope was found between autophonic and passive-listening conditions for both groups. Unlike control subjects, parkinsonian subjects' magnitude estimates under auditory masking increased in variability and did not show as strong a shift in intercept values. These results suggest that individuals with Parkinson's disease rely on auditory feedback to compensate for underlying deficits in sensorimotor integration important in establishing and regulating autophonic loudness.
Affiliation(s)
- François-Xavier Brajot
- School of Communication Sciences and Disorders, McGill University, 1266 Pine Avenue West, Montréal, Québec H3G 1A8, Canada
- Douglas M Shiller
- École d'orthophonie et d'audiologie, Université de Montréal, 7077 avenue du Parc, Montréal, Québec H3N 1X7, Canada
- Vincent L Gracco
- School of Communication Sciences and Disorders, McGill University, 1266 Pine Avenue West, Montréal, Québec H3G 1A8, Canada
24
Berken JA, Gracco VL, Chen JK, Watkins KE, Baum S, Callahan M, Klein D. Corrigendum to "Neural activation in speech production and reading aloud in native and non-native languages" [Neuroimage 112 (2015) 208-217]. Neuroimage 2016; 125:1175. [PMID: 28800682] [DOI: 10.1016/j.neuroimage.2015.11.010]
Affiliation(s)
- Jonathan A Berken
- Cognitive Neuroscience Unit, Montreal Neurological Institute, Montreal, QC, Canada; Centre for Research on Brain, Language, and Music, McGill University, Montreal, QC, Canada
- Vincent L Gracco
- Centre for Research on Brain, Language, and Music, McGill University, Montreal, QC, Canada; Haskins Laboratories, New Haven, CT, USA
- Jen-Kai Chen
- Cognitive Neuroscience Unit, Montreal Neurological Institute, Montreal, QC, Canada
- Kate E Watkins
- Department of Experimental Psychology, University of Oxford, Oxford, UK
- Shari Baum
- Centre for Research on Brain, Language, and Music, McGill University, Montreal, QC, Canada
- Megan Callahan
- Cognitive Neuroscience Unit, Montreal Neurological Institute, Montreal, QC, Canada; Centre for Research on Brain, Language, and Music, McGill University, Montreal, QC, Canada
- Denise Klein
- Cognitive Neuroscience Unit, Montreal Neurological Institute, Montreal, QC, Canada; Centre for Research on Brain, Language, and Music, McGill University, Montreal, QC, Canada
25
Abstract
Cortical processing associated with orofacial somatosensory function in speech has received limited experimental attention due to the difficulty of providing precise and controlled stimulation. This article introduces a technique for recording somatosensory event-related potentials (ERP) that uses a novel mechanical stimulation method involving skin deformation using a robotic device. Controlled deformation of the facial skin is used to modulate kinesthetic inputs through excitation of cutaneous mechanoreceptors. By combining somatosensory stimulation with electroencephalographic recording, somatosensory evoked responses can be successfully measured at the level of the cortex. Somatosensory stimulation can be combined with the stimulation of other sensory modalities to assess multisensory interactions. For speech, orofacial stimulation is combined with speech sound stimulation to assess the contribution of multi-sensory processing including the effects of timing differences. The ability to precisely control orofacial somatosensory stimulation during speech perception and speech production with ERP recording is an important tool that provides new insight into the neural organization and neural representations for speech.
Affiliation(s)
- Takayuki Ito
- Haskins Laboratories; Speech and Cognition Department, Gipsa-lab, CNRS; Univ. Grenoble-Alpes
- David J Ostry
- Haskins Laboratories; Department of Psychology, McGill University
- Vincent L Gracco
- Haskins Laboratories; School of Communication Science and Disorders, McGill University
26
Deschamps I, Baum SR, Gracco VL. Phonological processing in speech perception: What do sonority differences tell us? Brain Lang 2015; 149:77-83. [PMID: 26186232] [DOI: 10.1016/j.bandl.2015.06.008]
Abstract
Previous research has associated the inferior frontal and posterior temporal brain regions with a number of phonological processes. In order to identify how these specific brain regions contribute to phonological processing, we manipulated subsyllabic phonological complexity and stimulus modality during speech perception using fMRI. Subjects passively attended to visual or auditory pseudowords. Similar to previous studies, a bilateral network of cortical regions was recruited during the presentation of visual and auditory stimuli. Moreover, pseudowords recruited a similar network of regions as words and letters. Few regions in the whole-brain results revealed neural processing differences associated with phonological complexity independent of modality of presentation. In an ROI analysis, the only region sensitive to phonological complexity was the posterior part of the inferior frontal gyrus (IFGpo), with the complexity effect only present for print. In sum, the sensitivity of phonological brain areas depends on the modality of stimulus presentation and task demands.
Affiliation(s)
- Isabelle Deschamps
- Centre for Research on Brain, Language and Music, Rabinovitch House, McGill University, 3640 rue de la Montagne, Montreal, Quebec H3G 2A8, Canada; Rehabilitation Department, Laval University, Quebec, QC, Canada; Centre de Recherche de l'Institut Universitaire en santé mentale de Québec, Quebec, QC, Canada
- Shari R Baum
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, 1266 Avenue des Pins, Montreal, Quebec H3G 1A8, Canada; Centre for Research on Brain, Language and Music, Rabinovitch House, McGill University, 3640 rue de la Montagne, Montreal, Quebec H3G 2A8, Canada
- Vincent L Gracco
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, 1266 Avenue des Pins, Montreal, Quebec H3G 1A8, Canada; Centre for Research on Brain, Language and Music, Rabinovitch House, McGill University, 3640 rue de la Montagne, Montreal, Quebec H3G 2A8, Canada; Haskins Laboratories, 300 George St., Suite 900, New Haven, CT 06511, USA
27
Beal DS, Lerch JP, Cameron B, Henderson R, Gracco VL, De Nil LF. The trajectory of gray matter development in Broca's area is abnormal in people who stutter. Front Hum Neurosci 2015; 9:89. [PMID: 25784869] [PMCID: PMC4347452] [DOI: 10.3389/fnhum.2015.00089]
Abstract
The acquisition and mastery of speech-motor control requires years of practice spanning the course of development. People who stutter often perform poorly on speech-motor tasks thereby calling into question their ability to establish the stable neural motor programs required for masterful speech-motor control. There is evidence to support the assertion that these neural motor programs are represented in the posterior part of Broca’s area, specifically the left pars opercularis. Consequently, various theories of stuttering causation posit that the disorder is related to a breakdown in the formation of the neural motor programs for speech early in development and that this breakdown is maintained throughout life. To date, no study has examined the potential neurodevelopmental signatures of the disorder across pediatric and adult populations. The current study aimed to fill this gap in our knowledge. We hypothesized that the developmental trajectory of cortical thickness in people who stutter would differ across the lifespan in the left pars opercularis relative to a group of control participants. We collected structural magnetic resonance images from 116 males (55 people who stutter) ranging in age from 6 to 48 years old. Differences in cortical thickness across ages and between patients and controls were investigated in 30 brain regions previously implicated in speech-motor control. An interaction between age and group was found for the left pars opercularis only. In people who stutter, the pars opercularis did not demonstrate the typical maturational pattern of gradual gray matter thinning with age across the lifespan that we observed in control participants. In contrast, the developmental trajectory of gray matter thickness in other regions of interest within the neural network for speech-motor control was similar for both groups. Our findings indicate that the developmental trajectory of gray matter in left pars opercularis is abnormal in people who stutter.
Affiliation(s)
- Deryk S Beal
- Department of Communication Sciences and Disorders and the Institute for Stuttering Treatment and Research, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada; Neuroscience and Mental Health Institute, University of Alberta, Edmonton, AB, Canada
- Jason P Lerch
- Program in Neuroscience and Mental Health, The Hospital for Sick Children, Toronto, ON, Canada; Department of Medical Biophysics, University of Toronto, Toronto, ON, Canada
- Brodie Cameron
- Department of Communication Sciences and Disorders and the Institute for Stuttering Treatment and Research, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Rhaeling Henderson
- Department of Communication Sciences and Disorders and the Institute for Stuttering Treatment and Research, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Vincent L Gracco
- Haskins Laboratories, New Haven, CT, USA; Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Luc F De Nil
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
28
Ito T, Gracco VL, Ostry DJ. Temporal factors affecting somatosensory-auditory interactions in speech processing. Front Psychol 2014; 5:1198. [PMID: 25452733] [PMCID: PMC4233986] [DOI: 10.3389/fpsyg.2014.01198]
Abstract
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study, we further examined the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation, compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech processing is dependent on the specific temporal order of sensory inputs in speech production.
Affiliation(s)
- Vincent L Gracco
- Haskins Laboratories, New Haven, CT, USA; McGill University, Montréal, QC, Canada
- David J Ostry
- Haskins Laboratories, New Haven, CT, USA; McGill University, Montréal, QC, Canada
29
Klepousniotou E, Gracco VL, Pike GB. Pathways to lexical ambiguity: fMRI evidence for bilateral fronto-parietal involvement in language processing. Brain Lang 2014; 131:56-64. [PMID: 24183467] [DOI: 10.1016/j.bandl.2013.06.002]
Abstract
Numerous functional neuroimaging studies reported increased activity in the pars opercularis and the pars triangularis (Brodmann's areas 44 and 45) of the left hemisphere during the performance of linguistic tasks. The role of these areas in the right hemisphere in language processing is not understood and, although there is evidence from lesion studies that the right hemisphere is involved in the appreciation of semantic relations, no specific anatomical substrate has yet been identified. This event-related functional magnetic resonance imaging study compared brain activity during the performance of language processing trials in which either dominant or subordinate meaning activation of ambiguous words was required. The results show that the ventral part of the pars opercularis both in the left and the right hemisphere is centrally involved in language processing. In addition, they highlight the bilateral co-activation of this region with the supramarginal gyrus of the inferior parietal lobule during the processing of this type of linguistic material. This study, thus, provides the first evidence of co-activation of Broca's region and the inferior parietal lobule, succeeding in further specifying the relative contribution of these cortical areas to language processing.
Affiliation(s)
- Ekaterini Klepousniotou
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, QC H3A 2B4, Canada; Centre for Research on Language, Mind, and Brain, McGill University, 3640 de la Montagne, Montreal, QC H3G 2A8, Canada
- Vincent L Gracco
- Centre for Research on Language, Mind, and Brain, McGill University, 3640 de la Montagne, Montreal, QC H3G 2A8, Canada; School of Communication Sciences and Disorders, McGill University, 1266 Pine Avenue West, Montreal, QC H3G 1A8, Canada
- G Bruce Pike
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, QC H3A 2B4, Canada; Centre for Research on Language, Mind, and Brain, McGill University, 3640 de la Montagne, Montreal, QC H3G 2A8, Canada
30
Deschamps I, Baum SR, Gracco VL. On the role of the supramarginal gyrus in phonological processing and verbal working memory: Evidence from rTMS studies. Neuropsychologia 2014; 53:39-46. [DOI: 10.1016/j.neuropsychologia.2013.10.015]
31
Mollaei F, Shiller DM, Gracco VL. Sensorimotor adaptation of speech in Parkinson's disease. Mov Disord 2013; 28:1668-74. [PMID: 23861349] [DOI: 10.1002/mds.25588]
Abstract
The basal ganglia are involved in establishing motor plans for a wide range of behaviors. Parkinson's disease (PD) is a manifestation of basal ganglia dysfunction associated with a deficit in sensorimotor integration and difficulty in acquiring new motor sequences, thereby affecting motor learning. Previous studies of sensorimotor integration and sensorimotor adaptation in PD have focused on limb movements using visual and force-field alterations. Here, we report the results from a sensorimotor adaptation experiment investigating the ability of PD patients to make speech motor adjustments to a constant and predictable auditory feedback manipulation. Participants produced speech while their auditory feedback was altered and maintained in a manner consistent with a change in tongue position. The degree of adaptation was associated with the severity of motor symptoms. The patients with PD exhibited adaptation to the induced sensory error; however, the degree of adaptation was reduced compared with healthy, age-matched control participants. The reduced capacity to adapt to a change in auditory feedback is consistent with reduced gain in the sensorimotor system for speech and with previous studies demonstrating limitations in the adaptation of limb movements after changes in visual feedback among patients with PD.
Affiliation(s)
- Fatemeh Mollaei
- Centre for Research on Brain, Language and Music, Montreal, Quebec, Canada; School of Communication Sciences and Disorders, McGill University, Montreal, Quebec, Canada
32
Arnaud L, Sato M, Ménard L, Gracco VL. Repetition suppression for speech processing in the associative occipital and parietal cortex of congenitally blind adults. PLoS One 2013; 8:e64553. [PMID: 23717628] [PMCID: PMC3661538] [DOI: 10.1371/journal.pone.0064553]
Abstract
In the congenitally blind (CB), sensory deprivation results in cross-modal plasticity, with visual cortical activity observed for various auditory tasks. This reorganization has been associated with enhanced auditory abilities and the recruitment of visual brain areas during sound and language processing. The questions we addressed are whether visual cortical activity might also be observed in CB during passive listening to auditory speech and whether cross-modal plasticity is associated with adaptive differences in neuronal populations compared to sighted individuals (SI). We focused on the neural substrate of vowel processing in CB and SI adults using a repetition suppression (RS) paradigm. RS has been associated with enhanced or accelerated neural processing efficiency and synchronous activity between interacting brain regions. We evaluated whether cortical areas in CB were sensitive to RS during repeated vowel processing and whether there were differences across the two groups. In accordance with previous studies, both groups displayed a RS effect in the posterior temporal cortex. In the blind, however, additional occipital, temporal and parietal cortical regions were associated with predictive processing of repeated vowel sounds. The findings suggest a more expanded role for cross-modal compensatory effects in blind persons during sound and speech processing and a functional transfer of specific adaptive properties across neural regions as a consequence of sensory deprivation at birth.
Affiliation(s)
- Laureline Arnaud
- Centre for Research on Brain, Language and Music, McGill University, Montréal, Canada
- Marc Sato
- Centre for Research on Brain, Language and Music, and GIPSA-lab, Centre national de la recherche scientifique and Grenoble Université, Grenoble, France
- Lucie Ménard
- Centre for Research on Brain, Language and Music, McGill University, Montréal, Canada; Département de Linguistique, Université du Québec à Montréal, Montréal, Canada
- Vincent L. Gracco
- Centre for Research on Brain, Language and Music, McGill University, Montréal, Canada; School of Communication Sciences and Disorders, McGill University, Montréal, Canada; Haskins Laboratories, New Haven, Connecticut, United States of America
33
Grabski K, Tremblay P, Gracco VL, Girin L, Sato M. A mediating role of the auditory dorsal pathway in selective adaptation to speech: a state-dependent transcranial magnetic stimulation study. Brain Res 2013; 1515:55-65. [PMID: 23542585] [DOI: 10.1016/j.brainres.2013.03.024]
Abstract
In addition to sensory processing, recent neurobiological models of speech perception postulate the existence of a left auditory dorsal processing stream, linking auditory speech representations in the auditory cortex with articulatory representations in the motor system, through sensorimotor interaction interfaced in the supramarginal gyrus and/or the posterior part of the superior temporal gyrus. The present state-dependent transcranial magnetic stimulation study aimed to determine whether speech recognition is indeed mediated by the auditory dorsal pathway, by examining the causal contribution of the left ventral premotor cortex, supramarginal gyrus and posterior part of the superior temporal gyrus during an auditory syllable identification/categorization task. To this aim, participants listened to a sequence of /ba/ syllables before undergoing a two-alternative forced-choice auditory syllable decision task on ambiguous syllables (ranging across the categorical boundary between /ba/ and /da/). Consistent with previous studies on selective adaptation to speech, following adaptation to /ba/, participants' responses were biased towards /da/. In contrast, in a control condition without prior auditory adaptation, no such bias was observed. Crucially, compared to the results observed without stimulation, single-pulse transcranial magnetic stimulation delivered at the onset of each target stimulus interacted with the initial state of each stimulated brain area by enhancing the adaptation effect. These results demonstrate that the auditory dorsal pathway contributes to auditory speech adaptation.
Affiliation(s)
- Krystyna Grabski
- GIPSA-lab, Département Parole & Cognition, CNRS & Grenoble Université, France
34
Abstract
Sensorimotor integration is important for motor learning. The inferior parietal lobe, through its connections with the frontal lobe and cerebellum, has been associated with multisensory integration and sensorimotor adaptation for motor behaviors other than speech. In the present study, the contribution of the inferior parietal cortex to speech motor learning was evaluated using repetitive transcranial magnetic stimulation (rTMS) prior to a speech motor adaptation task. Subjects' auditory feedback was altered in a manner consistent with the auditory consequences of an unintended change in tongue position during speech production, and adaptation performance was used to evaluate sensorimotor plasticity and short-term learning. Prior to the feedback alteration, rTMS or sham stimulation was applied over the left supramarginal gyrus (SMG). Subjects who underwent the sham stimulation exhibited a robust adaptive response to the feedback alteration whereas subjects who underwent rTMS exhibited a diminished adaptive response. The results suggest that the inferior parietal region, in and around SMG, plays a role in sensorimotor adaptation for speech. The interconnections of the inferior parietal cortex with inferior frontal cortex, cerebellum and primary sensory areas suggest that this region may be an important component in learning and adapting sensorimotor patterns for speech.
Affiliation(s)
- Mamie Shum
- Neuroscience Major Program, McGill University, Montreal, Quebec, Canada
35
Ito T, Gracco VL, Ostry DJ. Event-Related Potentials Reflect Speech-Relevant Somatosensory-Auditory Interactions. Iperception 2011. [DOI: 10.1068/ic803]
36
Tremblay P, Deschamps I, Gracco VL. Regional heterogeneity in the processing and the production of speech in the human planum temporale. Cortex 2011; 49:143-57. [PMID: 22019203] [DOI: 10.1016/j.cortex.2011.09.004]
Abstract
Introduction: The role of the left planum temporale (PT) in auditory language processing has been a central theme in cognitive neuroscience since the first descriptions of its leftward neuroanatomical asymmetry. While it is clear that PT contributes to auditory language processing, there is still some uncertainty about its role in spoken language production.
Methods: Here we examine activation patterns of the PT for speech production, speech perception and single word reading to address potential hemispheric and regional functional specialization in the human PT. To this aim, we manually segmented the left and right PT into three non-overlapping regions (medial, lateral and caudal PT) and examined, in two complementary experiments, the contribution of exogenous and endogenous auditory input on PT activation under different speech processing and production conditions.
Results: Our results demonstrate that different speech tasks are associated with different regional functional activation patterns of the medial, lateral and caudal PT. These patterns are similar across hemispheres, suggesting bilateral processing of the auditory signal for speech at the level of the PT.
Conclusions: Results of the present studies stress the importance of considering the anatomical complexity of the PT in interpreting fMRI data.
Affiliation(s)
- Pascale Tremblay
- Center for Mind & Brain Sciences (CIMeC), The University of Trento, Italy
37
Abstract
We investigated auditory and somatosensory feedback contributions to the neural control of speech. In task I, sensorimotor adaptation was studied by perturbing one of these sensory modalities or both modalities simultaneously. The first formant (F1) frequency in the auditory feedback was shifted up by a real-time processor and/or the extent of jaw opening was increased or decreased with a force field applied by a robotic device. All eight subjects lowered F1 to compensate for the up-shifted F1 in the feedback signal regardless of whether or not the jaw was perturbed. Adaptive changes in subjects' acoustic output resulted from adjustments in articulatory movements of the jaw or tongue. Adaptation in jaw opening extent in response to the mechanical perturbation occurred only when no auditory feedback perturbation was applied or when the direction of adaptation to the force was compatible with the direction of adaptation to a simultaneous acoustic perturbation. In tasks II and III, subjects' auditory and somatosensory precision and accuracy were estimated. Correlation analyses showed that the relationships 1) between F1 adaptation extent and auditory acuity for F1 and 2) between jaw position adaptation extent and somatosensory acuity for jaw position were weak and statistically not significant. Taken together, the combined findings from this work suggest that, in speech production, sensorimotor adaptation updates the underlying control mechanisms in such a way that the planning of vowel-related articulatory movements takes into account a complex integration of error signals from previous trials but likely with a dominant role for the auditory modality.
Affiliation(s)
- Yongqiang Feng
- Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing, China
38
Abstract
Background: Hearing ability is essential for normal speech development; however, the precise mechanisms linking auditory input and the improvement of speaking ability remain poorly understood. Auditory feedback during speech production is believed to play a critical role by providing the nervous system with information about speech outcomes that is used to learn and subsequently fine-tune speech motor output. Surprisingly, few studies have directly investigated such auditory-motor learning in the speech production of typically developing children.
Methodology/Principal Findings: In the present study, we manipulated auditory feedback during speech production in a group of 9–11-year-old children, as well as in adults. Following a period of speech practice under conditions of altered auditory feedback, compensatory changes in speech production and perception were examined. Consistent with prior studies, the adults exhibited compensatory changes in both their speech motor output and their perceptual representations of speech sound categories. The children exhibited compensatory changes in the motor domain, with a change in speech output that was similar in magnitude to that of the adults; however, the children showed no reliable compensatory effect on their perceptual representations.
Conclusions: The results indicate that 9–11-year-old children, whose speech motor and perceptual abilities are still not fully developed, are nonetheless capable of auditory-feedback-based sensorimotor adaptation, supporting a role for such learning processes in speech motor development. Auditory feedback may play a more limited role, however, in the fine-tuning of children's perceptual representations of speech sound categories.
Affiliation(s)
- Douglas M Shiller
- École d'orthophonie et d'audiologie, Université de Montréal, Montreal, Quebec, Canada
39
Beal DS, Cheyne DO, Gracco VL, Quraan MA, Taylor MJ, De Nil LF. Auditory evoked fields to vocalization during passive listening and active generation in adults who stutter. Neuroimage 2010; 52:1645-53. [PMID: 20452437] [DOI: 10.1016/j.neuroimage.2010.04.277]
Abstract
We used magnetoencephalography to investigate auditory evoked responses to speech vocalizations and non-speech tones in adults who do and do not stutter. Neuromagnetic field patterns were recorded as participants listened to a 1 kHz tone, playback of their own productions of the vowel /i/ and vowel-initial words, and actively generated the vowel /i/ and vowel-initial words. Activation of the auditory cortex at approximately 50 and 100 ms was observed during all tasks. A reduction in the peak amplitudes of the M50 and M100 components was observed during the active generation versus passive listening tasks dependent on the stimuli. Adults who stutter did not differ in the amount of speech-induced auditory suppression relative to fluent speakers. Adults who stutter had shorter M100 latencies for the actively generated speaking tasks in the right hemisphere relative to the left hemisphere but the fluent speakers showed similar latencies across hemispheres. During passive listening tasks, adults who stutter had longer M50 and M100 latencies than fluent speakers. The results suggest that there are timing, rather than amplitude, differences in auditory processing during speech in adults who stutter and are discussed in relation to hypotheses of auditory-motor integration breakdown in stuttering.
Affiliation(s)
- Deryk S Beal
- Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada.
40
Tremblay P, Gracco VL. On the selection of words and oral motor responses: Evidence of a response-independent fronto-parietal network. Cortex 2010; 46:15-28. [DOI: 10.1016/j.cortex.2009.03.003]
41
Sato M, Tremblay P, Gracco VL. A mediating role of the premotor cortex in phoneme segmentation. Brain Lang 2009; 111:1-7. [PMID: 19362734] [DOI: 10.1016/j.bandl.2009.03.002]
Abstract
Consistent with a functional role of the motor system in speech perception, disturbing the activity of the left ventral premotor cortex by means of repetitive transcranial magnetic stimulation (rTMS) has been shown to impair auditory identification of syllables that were masked with white noise. However, whether this region is crucial for speech perception under normal listening conditions remains debated. To directly test this hypothesis, we applied rTMS to the left ventral premotor cortex while participants performed auditory speech tasks involving the same set of syllables but differing in the use of phonemic segmentation processes. Compared to sham stimulation, rTMS applied over the ventral premotor cortex slowed phoneme discrimination that required phonemic segmentation. No effect was observed in phoneme identification and syllable discrimination tasks that could be performed without the need for phonemic segmentation. The findings demonstrate a mediating role of the ventral premotor cortex in speech segmentation under normal listening conditions and are interpreted in relation to theories assuming a link between perception and action in the human speech processing system.
Affiliation(s)
- Marc Sato
- Gipsa-Lab, UMR CNRS 5216, Département Parole et Cognition, Grenoble Universités, 38040 Grenoble Cedex 9, France.
42
Tremblay P, Gracco VL. Contribution of the pre-SMA to the production of words and non-speech oral motor gestures, as revealed by repetitive transcranial magnetic stimulation (rTMS). Brain Res 2009; 1268:112-124. [PMID: 19285972] [DOI: 10.1016/j.brainres.2009.02.076]
Abstract
An emerging theoretical perspective, largely based on neuroimaging studies, suggests that the pre-SMA is involved in planning cognitive aspects of motor behavior and language, such as linguistic and non-linguistic response selection. Neuroimaging studies, however, cannot indicate whether a brain region is equally important to all tasks in which it is activated. In the present study, we tested the hypothesis that the pre-SMA is an important component of response selection, using an interference technique. High frequency repetitive TMS (10 Hz) was used to interfere with the functioning of the pre-SMA during tasks requiring selection of words and oral gestures under different selection modes (forced, volitional) and attention levels (high attention, low attention). Results show that TMS applied to the pre-SMA interferes selectively with the volitional selection condition, resulting in longer RTs. The low- and high-attention forced selection conditions were unaffected by TMS, demonstrating that the pre-SMA is sensitive to selection mode but not attentional demands. TMS similarly affected the volitional selection of words and oral gestures, reflecting the response-independent nature of the pre-SMA contribution to response selection. The implications of these results are discussed.
Affiliation(s)
- Pascale Tremblay
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, 1266 Avenue des Pins, Montreal, Canada; Centre for Research on Language, Mind and Brain, Canada.
- Vincent L Gracco
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, 1266 Avenue des Pins, Montreal, Canada; Centre for Research on Language, Mind and Brain, Canada; Haskins Laboratories, New Haven, Connecticut, USA.
43
Abstract
The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s/-/ʃ/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/-production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s/-/ʃ/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/-stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.
Affiliation(s)
- Douglas M Shiller
- School of Communication Sciences and Disorders, McGill University, Montreal, Quebec, Canada.
44
De Nil LF, Beal DS, Lafaille SJ, Kroll RM, Crawley AP, Gracco VL. The effects of simulated stuttering and prolonged speech on the neural activation patterns of stuttering and nonstuttering adults. Brain Lang 2008; 107:114-23. [PMID: 18822455] [DOI: 10.1016/j.bandl.2008.07.003]
Abstract
Functional magnetic resonance imaging was used to investigate the neural correlates of passive listening, habitual speech and two modified speech patterns (simulated stuttering and prolonged speech) in stuttering and nonstuttering adults. Within-group comparisons revealed increased right hemisphere biased activation of speech-related regions during the simulated stuttered and prolonged speech tasks, relative to the habitual speech task, in the stuttering group. No significant activation differences were observed within the nonstuttering participants during these speech conditions. Between-group comparisons revealed less left superior temporal gyrus activation in stutterers during habitual speech and increased right inferior frontal gyrus activation during simulated stuttering relative to nonstutterers. Stutterers were also found to have increased activation in the left middle and superior temporal gyri and right insula, primary motor cortex and supplementary motor cortex during the passive listening condition relative to nonstutterers. The results provide further evidence for the presence of functional deficiencies underlying auditory processing, motor planning and execution in people who stutter, with these differences being affected by speech manner.
Affiliation(s)
- Luc F De Nil
- Department of Speech-Language Pathology, University of Toronto, 500 University Avenue, Toronto, Ontario, Canada.
45
Tremblay P, Shiller DM, Gracco VL. On the time-course and frequency selectivity of the EEG for different modes of response selection: evidence from speech production and keyboard pressing. Clin Neurophysiol 2008; 119:88-99. [PMID: 18320603] [DOI: 10.1016/j.clinph.2007.09.063]
Abstract
OBJECTIVE To compare brain activity in the alpha and beta bands in relation to different modes of response selection, and to assess the domain generality of the response selection mechanism using verbal and non-verbal tasks. METHODS We examined alpha and beta event-related desynchronization (ERD) to analyze brain reactivity during the selection of verbal (word production) and non-verbal motor actions (keyboard pressing) under two different response modes: externally selected and self-selected. RESULTS An alpha and beta ERD was observed for both the verbal and non-verbal tasks in both the externally and the self-selected modes. For both tasks, the beta ERD started earlier and lasted longer in the self-selected mode than in the externally selected mode. The overall pattern of results was similar across the verbal and non-verbal motor behaviors. CONCLUSIONS The pattern of alpha and beta ERD is affected by the mode of response selection, suggesting that activity in both frequency bands contributes to the process of selecting actions. We suggest that activity in the alpha band may reflect attentional processes, while activity in the beta band may be more closely related to the execution and selection process. SIGNIFICANCE These results suggest that a domain-general process contributes to the planning of speech and other motor actions. This finding has potential clinical implications for the use of diverse motor tasks to treat disorders of motor planning.
Affiliation(s)
- Pascale Tremblay
- McGill University, Faculty of Medicine, School of Communication Sciences and Disorders, Montreal, Canada.
46
Abstract
Stutterers demonstrate unique functional neural activation patterns during speech production, including reduced auditory activation, relative to nonstutterers. The extent to which these functional differences are accompanied by abnormal morphology of the brain in stutterers is unclear. This study examined the neuroanatomical differences in speech-related cortex between stutterers and nonstutterers using voxel-based morphometry. Results revealed significant differences in localized grey matter and white matter densities of left and right hemisphere regions involved in auditory processing and speech production.
Affiliation(s)
- Deryk S Beal
- Department of Speech-Language Pathology, University of Toronto, Canada.
47
Tremblay P, Gracco VL. Contribution of the frontal lobe to externally and internally specified verbal responses: fMRI evidence. Neuroimage 2006; 33:947-57. [PMID: 16990015] [DOI: 10.1016/j.neuroimage.2006.07.041]
Abstract
It has been suggested that within the frontal cortex there is a lateral to medial shift in the control of action, with the lateral premotor area (PMA) involved in externally specified actions and the medial supplementary motor areas (SMA) involved in internally specified actions. Recent brain imaging studies demonstrate, however, that the control of externally and internally specified actions may involve more complex and overlapping networks involving not only the PMA and the SMA, but also the pre-SMA and the lateral prefrontal cortex (PFC). The aim of the present study was to determine whether these frontal regions are differentially involved in the production of verbal responses, when they are externally specified and when they are internally specified. Participants engaged in three overt speaking tasks in which the degree of response specification differed. The tasks involved reading aloud words (externally specified), or generating words aloud from narrow or broad semantic categories (internally specified). Using fMRI, the location and magnitude of the BOLD activity for these tasks were measured in a group of ten participants. Compared with rest, all tasks activated the primary motor area and the SMA-proper, reflecting their common role in speech production. The magnitude of the activity in the PFC (Brodmann area 45), the left PMAv and the pre-SMA increased for word generation, suggesting that each of these three regions plays a role in internally specified action selection. This confirms previous reports concerning the participation of the pre-SMA in verbal response selection. The pattern of activity in PMAv suggests participation in both externally and internally specified verbal actions.
Affiliation(s)
- Pascale Tremblay
- Center for Research on Language, Mind and Brain and School of Communication Sciences and Disorders, Faculty of Medicine, McGill University, Montreal, Quebec, Canada H3G 1A8.
48
Abstract
Human speech is a well-learned, sensorimotor, and ecological behavior ideal for the study of neural processes and brain-behavior relations. With the advent of modern neuroimaging techniques such as positron emission tomography (PET) and functional magnetic resonance imaging (fMRI), the potential for investigating neural mechanisms of speech motor control, speech motor disorders, and speech motor development has increased. However, a practical issue has limited the application of fMRI to issues in spoken language production and other related behaviors (singing, swallowing). Producing these behaviors during volume acquisition introduces motion-induced signal changes that confound the activation signals of interest. A number of approaches, ranging from signal processing to the use of silent or covert speech, have attempted to remove or prevent the effects of motion-induced artefact. However, these approaches are flawed for a variety of reasons. An alternative approach, which has only recently been applied to study single-word production, uses pauses in volume acquisition during the production of natural speech motion. Here we present representative data illustrating the problems associated with motion artefacts, along with qualitative results acquired from subjects producing short sentences and orofacial nonspeech movements in the scanner. Using pauses or silent intervals in volume acquisition and block designs, data from individual subjects show robust activation without motion-induced signal artefact. This approach is an efficient method for studying the neural basis of spoken language production and the effects of speech and language disorders using fMRI.
Affiliation(s)
- Vincent L Gracco
- School of Communication Sciences and Disorders, McGill University, Faculty of Medicine, Montreal, Quebec, Canada.
49
Max L, Gracco VL. Coordination of oral and laryngeal movements in the perceptually fluent speech of adults who stutter. J Speech Lang Hear Res 2005; 48:524-42. [PMID: 16197270] [DOI: 10.1044/1092-4388(2005/036)]
Abstract
This work investigated whether stuttering and nonstuttering adults differ in the coordination of oral and laryngeal movements during the production of perceptually fluent speech. This question was addressed by completing correlation analyses that extended previous acoustic studies by others as well as inferential analyses based on the within-subject central tendency and variability of acoustic and physiological indices of oral-laryngeal control and coordination. Stuttering and nonstuttering adults produced the target /p/ as the medial consonant in C(1)V(1)#C(2)V(2)C(3) sequences (C = consonant; V = vowel or diphthong; # = word boundary) embedded in utterances differing in length and location of the target movements. No between-groups differences were found for across- or within-subject correlations between acoustic measures of stop gap and voice onset time (VOT). However, the acoustic data did show longer durations for devoicing interval and VOT in the stuttering versus nonstuttering individuals, in the absence of a difference for a proportional measure specifically reflecting oral-laryngeal relative timing. Analyses of combined kinematic and electroglottographic data revealed that the stuttering individuals' speech was also characterized by (a) longer durations from bilabial closing movement onset and peak velocity to V(1) vocal fold vibration offset and (b) greater within-subject variability for dependent variables that were physiological indices of devoicing interval and VOT, but again no between-groups differences were found for specific indices of oral-laryngeal relative timing. Overall, findings suggest that, for the production of voiceless bilabial stops in perceptually fluent speech, stuttering and nonstuttering adults differ in the duration of intervals defined by events within as well as across the oral and laryngeal subsystems, but the groups show similar patterns of relative timing for the involved oral and laryngeal movements.
Affiliation(s)
- Ludo Max
- University of Connecticut, Department of Communication Sciences, Storrs, CT 06269, USA.
50
Max L, Guenther FH, Gracco VL, Ghosh SS, Wallace ME. Unstable or Insufficiently Activated Internal Models and Feedback-Biased Motor Control as Sources of Dysfluency: A Theoretical Model of Stuttering. Contemp Issues Commun Sci Disord 2004. [DOI: 10.1044/cicsd_31_s_105]