1
Perron M, Vuong V, Grassi MW, Imran A, Alain C. Engagement of the speech motor system in challenging speech perception: Activation likelihood estimation meta-analyses. Hum Brain Mapp 2024; 45:e70023. PMID: 39268584; PMCID: PMC11393483; DOI: 10.1002/hbm.70023.
Abstract
The relationship between speech production and perception is a topic of ongoing debate. Some argue that there is little interaction between the two, while others claim they share representations and processes. One perspective suggests increased recruitment of the speech motor system in demanding listening situations to facilitate perception. However, uncertainties persist regarding the specific regions involved and the listening conditions influencing its engagement. This study used activation likelihood estimation in coordinate-based meta-analyses to investigate the neural overlap between speech production and three speech perception conditions: speech-in-noise, spectrally degraded speech and linguistically complex speech. Neural overlap was observed in the left frontal, insular and temporal regions. Key nodes included the left frontal operculum (FOC), left posterior lateral part of the inferior frontal gyrus (IFG), left planum temporale (PT), and left pre-supplementary motor area (pre-SMA). Left IFG activation was consistently observed during linguistic processing, suggesting sensitivity to the linguistic content of speech. In comparison, left pre-SMA activation was observed when processing degraded and noisy signals, indicating sensitivity to signal quality. Activation of the left PT and FOC was observed in all conditions, with the posterior FOC area overlapping across conditions. Our meta-analysis reveals context-independent (FOC, PT) and context-dependent (pre-SMA, posterior lateral IFG) regions within the speech motor system during challenging speech perception. These regions could contribute to sensorimotor integration and executive cognitive control for perception and production.
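For readers unfamiliar with activation likelihood estimation, the core computation can be stated compactly: each experiment's reported foci are blurred with a Gaussian kernel into a modeled activation map, and the voxel-wise ALE score is the probabilistic union of those maps across experiments. The sketch below is a minimal illustration of that idea only; it is not the authors' pipeline, which used dedicated ALE software with sample-size-dependent kernels and permutation-based thresholding. The grid size, kernel width, and demo coordinates are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ale_map(experiments, grid_shape=(91, 109, 91), sigma_vox=4.0):
    """Toy ALE: probabilistic union of per-experiment modeled activation maps.

    experiments: list of arrays of voxel indices (n_foci x 3), one per study.
    Real ALE scales the kernel by sample size and thresholds the map with
    permutation testing; both steps are omitted here for brevity.
    """
    ale = np.zeros(grid_shape)
    for foci in experiments:
        ma = np.zeros(grid_shape)
        for x, y, z in foci:
            ma[x, y, z] = 1.0                      # place each reported focus
        ma = gaussian_filter(ma, sigma=sigma_vox)  # modeled activation map
        ma /= ma.max() if ma.max() > 0 else 1.0    # treat the peak as probability 1
        ale = 1.0 - (1.0 - ale) * (1.0 - ma)       # union across experiments
    return ale

# Two hypothetical studies reporting nearby left-frontal foci (voxel indices)
demo = [np.array([[30, 60, 40], [32, 58, 42]]), np.array([[31, 59, 41]])]
print(ale_map(demo).max())
```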
Affiliation(s)
- Maxime Perron
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Veronica Vuong
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
- Madison W Grassi
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Ashna Imran
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- Institute of Medical Sciences, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada
- Music and Health Science Research Collaboratory, Faculty of Music, University of Toronto, Toronto, Ontario, Canada
2
Brain activation during non-habitual speech production: Revisiting the effects of simulated disfluencies in fluent speakers. PLoS One 2020; 15:e0228452. PMID: 32004353; PMCID: PMC6993970; DOI: 10.1371/journal.pone.0228452.
Abstract
Over the past decades, brain imaging studies in fluently speaking participants have greatly advanced our knowledge of the brain areas involved in speech production. In addition, complementary information has been provided by investigations of brain activation patterns associated with disordered speech. In the present study, we aimed to revisit and expand an earlier study by De Nil and colleagues by investigating the effects of simulating disfluencies on the brain activation patterns of fluent speakers during overt and covert speech production. In contrast to the De Nil et al. study, the current findings show that the production of voluntary, self-generated disfluencies by fluent speakers resulted in increased recruitment and activation of brain areas involved in speech production. These areas overlap substantially with the neural networks involved in motor sequence learning in general and in the learning of speech production in particular. The implications of these findings for the interpretation of brain imaging studies on disordered and non-habitual speech production are discussed.
3
Corbo D, Orban GA. Observing Others Speak or Sing Activates Spt and Neighboring Parietal Cortex. J Cogn Neurosci 2017; 29:1002-1021. DOI: 10.1162/jocn_a_01103.
Abstract
To obtain further evidence that action observation can serve as a proxy for action execution and planning in posterior parietal cortex, we scanned participants while they were (1) observing two classes of action: vocal communication and oral manipulation, which share the same effector but differ in nature, and (2) rehearsing and listening to nonsense sentences to localize area Spt, thought to be involved in audio-motor transformation during speech. Using this localizer, we found that Spt is specifically activated by vocal communication, indicating that Spt is not only involved in planning speech but also in observing vocal communication actions. In addition, we observed that Spt is distinct from the parietal region most specialized for observing vocal communication, revealed by an interaction contrast and located in PFm. The latter region, unlike Spt, processes the visual and auditory signals related to others' vocal communication independently. Our findings are consistent with the view that several small regions in the temporoparietal cortex near the ventral part of the supramarginal/angular gyrus border are involved in the planning of vocal communication actions and are also concerned with observation of these actions, though their involvement in these two aspects is unequal.
4
Tremblay P, Sato M, Deschamps I. Age differences in the motor control of speech: An fMRI study of healthy aging. Hum Brain Mapp 2017; 38:2751-2771. PMID: 28263012; PMCID: PMC6866863; DOI: 10.1002/hbm.23558.
Abstract
Healthy aging is associated with a decline in cognitive, executive, and motor processes that are concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes, and is known to decline with age, the mechanisms that underlie these changes remain poorly understood, despite the importance of communication in everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting of the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, within and outside the sensorimotor system. Age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At the highest complexity level (high motor and high sequence complexity), age differences were found in both MT data and BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behavior such as speech to understand the mechanisms of human brain aging.
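One common way to probe a relationship between the BOLD signal and a trial-level behavioural measure such as movement time is to add a mean-centred parametric modulator to the first-level GLM. The numpy sketch below illustrates that construction under invented trial timings and movement times; it is a generic example of the approach, not the authors' exact single-subject model, and the HRF shape, TR, and onsets are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def hrf(t, peak=6.0, under=16.0):
    """Very rough canonical-like HRF (difference of two gamma-shaped bumps)."""
    return gamma.pdf(t, peak) - 0.35 * gamma.pdf(t, under)

tr, n_scans = 2.0, 200
frame_times = np.arange(n_scans) * tr
onsets = np.arange(10, 380, 20.0)                             # hypothetical trial onsets (s)
mt = np.random.default_rng(0).normal(0.6, 0.1, onsets.size)   # hypothetical movement times (s)

# Build stick functions on a fine grid, convolve with the HRF, downsample to TRs
dt = 0.1
fine = np.arange(0, n_scans * tr, dt)
main = np.zeros_like(fine)
mod = np.zeros_like(fine)
for o, m in zip(onsets, mt):
    i = int(o / dt)
    main[i] = 1.0                                 # main effect of the trial
    mod[i] = m - mt.mean()                        # mean-centred parametric modulator (MT)
kernel = hrf(np.arange(0, 32, dt))
X_fine = np.column_stack([np.convolve(r, kernel)[: fine.size] for r in (main, mod)])
X = X_fine[np.round(frame_times / dt).astype(int)]
X = np.column_stack([X, np.ones(n_scans)])        # add an intercept column

# Simulated voxel time series with a true MT effect of 0.5, then ordinary least squares
y = X @ np.array([1.0, 0.5, 0.0]) + np.random.default_rng(1).normal(0, 1, n_scans)
beta = np.linalg.lstsq(X, y, rcond=None)[0]       # beta[1] estimates the BOLD-MT relationship
print(beta)
```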
Affiliation(s)
- Pascale Tremblay
- Université Laval, Département de Réadaptation, Faculté de Médecine, Québec City, Québec, Canada
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec, Québec City, Québec, Canada
- Marc Sato
- Laboratoire Parole & Langage, Université Aix-Marseille, CNRS, Aix-en-Provence, France
- Isabelle Deschamps
- Université Laval, Département de Réadaptation, Faculté de Médecine, Québec City, Québec, Canada
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec, Québec City, Québec, Canada
5
Markiewicz CJ, Bohland JW. Mapping the cortical representation of speech sounds in a syllable repetition task. Neuroimage 2016; 141:174-190. DOI: 10.1016/j.neuroimage.2016.07.023.
6
Tremblay P, Deschamps I, Baroni M, Hasson U. Neural sensitivity to syllable frequency and mutual information in speech perception and production. Neuroimage 2016; 136:106-21. PMID: 27184201; DOI: 10.1016/j.neuroimage.2016.05.018.
Abstract
Many factors affect our ability to decode the speech signal, including its quality, the complexity of the elements that compose it, as well as their frequency of occurrence and co-occurrence in a language. Syllable frequency effects have been described in the behavioral literature, including facilitatory effects during speech production and inhibitory effects during word recognition, but the neural mechanisms underlying these effects remain largely unknown. The objective of this study was to examine, using functional neuroimaging, the neurobiological correlates of three different distributional statistics in simple 2-syllable nonwords: the frequency of the first and second syllables, and the mutual information between the syllables. We examined these statistics during nonword perception and production using a powerful single-trial analytical approach. We found that repetition accuracy was higher for nonwords in which the frequency of the first syllable was high. In addition, brain responses to distributional statistics were widespread and almost exclusively cortical. Importantly, brain activity was modulated in a distinct manner for each statistic, with the strongest facilitatory effects associated with the frequency of the first syllable and mutual information. These findings show that distributional statistics modulate nonword perception and production. We discuss the common and unique impact of each distributional statistic on brain activity, as well as task differences.
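The two distributional statistics at the centre of this design, syllable frequency and the mutual information between syllable positions, are easy to make concrete. The sketch below estimates position-wise syllable frequencies and pointwise mutual information from a toy corpus of two-syllable items; the corpus, the syllables, and the use of pointwise (rather than average) mutual information are illustrative assumptions, not the study's stimuli or exact computation.

```python
import math
from collections import Counter

# Toy corpus of two-syllable items (syllable1, syllable2); counts are invented
corpus = [("pa", "to")] * 50 + [("pa", "ki")] * 10 + [("lu", "to")] * 5 + [("lu", "ki")] * 35

n = len(corpus)
first = Counter(s1 for s1, _ in corpus)    # frequency of the first syllable
second = Counter(s2 for _, s2 in corpus)   # frequency of the second syllable
pairs = Counter(corpus)                    # joint counts of syllable pairs

def pmi(s1, s2):
    """Pointwise mutual information between the two syllable positions."""
    p_joint = pairs[(s1, s2)] / n
    return math.log2(p_joint / ((first[s1] / n) * (second[s2] / n)))

for s1, s2 in sorted(pairs):
    print(f"{s1}-{s2}: freq1={first[s1]/n:.2f} freq2={second[s2]/n:.2f} PMI={pmi(s1, s2):+.2f}")
```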
Affiliation(s)
- Pascale Tremblay
- Université Laval, Département de Réadaptation, Québec City, QC, Canada; Centre de Recherche de l'Institut Universitaire en santé mentale de Québec (CRIUSMQ), Québec City, QC, Canada.
- Isabelle Deschamps
- Université Laval, Département de Réadaptation, Québec City, QC, Canada; Centre de Recherche de l'Institut Universitaire en santé mentale de Québec (CRIUSMQ), Québec City, QC, Canada
- Marco Baroni
- Center for Mind and Brain Sciences (CIMeC), Università Degli Studi di Trento, Via delle Regole, 101, I-38060 Mattarello, TN, Italy
- Uri Hasson
- Center for Mind and Brain Sciences (CIMeC), Università Degli Studi di Trento, Via delle Regole, 101, I-38060 Mattarello, TN, Italy
7
Ito T, Gracco VL, Ostry DJ. Temporal factors affecting somatosensory-auditory interactions in speech processing. Front Psychol 2014; 5:1198. PMID: 25452733; PMCID: PMC4233986; DOI: 10.3389/fpsyg.2014.01198.
Abstract
Speech perception is known to rely on both auditory and visual information. However, sound-specific somatosensory input has also been shown to influence speech perceptual processing (Ito et al., 2009). In the present study, we further examined the relationship between somatosensory information and speech perceptual processing by testing the hypothesis that the temporal relationship between orofacial movement and sound processing contributes to somatosensory–auditory interaction in speech perception. We examined the changes in event-related potentials (ERPs) in response to multisensory synchronous (simultaneous) and asynchronous (90 ms lag and lead) somatosensory and auditory stimulation compared to unisensory auditory and somatosensory stimulation alone. We used a robotic device to apply facial skin somatosensory deformations that were similar in timing and duration to those experienced in speech production. Following synchronous multisensory stimulation, the amplitude of the ERP was reliably different from the two unisensory potentials. More importantly, the magnitude of the ERP difference varied as a function of the relative timing of the somatosensory–auditory stimulation. Event-related activity change due to stimulus timing was seen between 160 and 220 ms following somatosensory onset, mostly around the parietal area. The results demonstrate a dynamic modulation of somatosensory–auditory convergence and suggest that the contribution of somatosensory information to speech processing depends on the specific temporal order of sensory inputs in speech production.
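The comparison at the heart of this kind of design is between the ERP evoked by combined somatosensory-auditory stimulation and the sum of the two unisensory ERPs, evaluated at each stimulus lag. The sketch below simulates that additive-model test on synthetic single-channel epochs; the epoch length, latencies, and analysis window are assumptions chosen to mirror the description above, not the authors' analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_times = 100, 300            # hypothetical epochs: 300 samples per trial
t = np.arange(n_times)                  # sample index relative to somatosensory onset

def fake_erp(latency, amp):
    """Simulate single-trial epochs with a Gaussian-shaped evoked response plus noise."""
    signal = amp * np.exp(-0.5 * ((t - latency) / 15.0) ** 2)
    return signal + rng.normal(0, 1.0, (n_trials, n_times))

aud = fake_erp(100, 2.0)                # auditory-only epochs
som = fake_erp(120, 1.5)                # somatosensory-only epochs
multi = fake_erp(110, 4.0)              # synchronous multisensory epochs

# Additive-model test: multisensory ERP vs. sum of the two unisensory ERPs
erp_multi = multi.mean(axis=0)
erp_sum = aud.mean(axis=0) + som.mean(axis=0)
interaction = erp_multi - erp_sum       # nonzero values indicate a multisensory interaction

window = slice(160, 221)                # e.g., 160-220 samples (ms at 1 kHz) after onset
print("mean interaction in window:", interaction[window].mean())
```

Repeating the same computation for lagged and leading somatosensory onsets would show how the interaction term varies with stimulus timing, which is the manipulation the abstract describes.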
Affiliation(s)
- Vincent L Gracco
- Haskins Laboratories, New Haven, CT, USA; McGill University, Montréal, QC, Canada
- David J Ostry
- Haskins Laboratories, New Haven, CT, USA; McGill University, Montréal, QC, Canada
8
Chance SA. The cortical microstructural basis of lateralized cognition: a review. Front Psychol 2014; 5:820. PMID: 25126082; PMCID: PMC4115615; DOI: 10.3389/fpsyg.2014.00820.
Abstract
The presence of asymmetry in the human cerebral hemispheres is detectable at both the macroscopic and microscopic scales. The horizontal expansion of cortical surface during development (within individual brains), and across evolutionary time (between species), is largely due to the proliferation and spacing of the microscopic vertical columns of cells that form the cortex. In the asymmetric planum temporale (PT), minicolumn width asymmetry is associated with surface area asymmetry. Although the human minicolumn asymmetry is not large, it is estimated to account for a surface area asymmetry of approximately 9% of the region’s size. Critically, this asymmetry of minicolumns is absent in the equivalent areas of the brains of other apes. The left-hemisphere dominance for processing speech is thought to depend, partly, on a bias for higher resolution processing across widely spaced minicolumns with less overlapping dendritic fields, whereas dense minicolumn spacing in the right hemisphere is associated with more overlapping, lower resolution, holistic processing. This concept refines the simple notion that a larger brain area is associated with dominance for a function and offers an alternative explanation associated with “processing type.” This account is mechanistic in the sense that it offers a mechanism whereby asymmetrical components of structure are related to specific functional biases yielding testable predictions, rather than the generalization that “bigger is better” for any given function. Face processing provides a test case – it is the opposite of language, being dominant in the right hemisphere. Consistent with the bias for holistic, configural processing of faces, the minicolumns in the right-hemisphere fusiform gyrus are thinner than in the left hemisphere, which is associated with featural processing. Again, this asymmetry is not found in chimpanzees. The difference between hemispheres may also be seen in terms of processing speed, facilitated by asymmetric myelination of white matter tracts (Anderson et al., 1999, found that axons of the left posterior superior temporal lobe were more thickly myelinated). By cross-referencing the differences between the active fields of the two hemispheres, via tracts such as the corpus callosum, the relationship of local features to global features may be encoded. The emergent hierarchy of features within features is a recursive structure that may functionally contribute to generativity – the ability to perceive and express layers of structure and their relations to each other. The inference is that recursive generativity, an essential component of language, reflects an interaction between processing biases that may be traceable in the microstructure of the cerebral cortex. Minicolumn organization in the PT and the prefrontal cortex has been found to correlate with cognitive scores in humans. Altered minicolumn organization is also observed in neuropsychiatric disorders including autism and schizophrenia. Indeed, altered interhemispheric connections correlated with minicolumn asymmetry in schizophrenia may relate to language-processing anomalies that occur in the disorder. Schizophrenia is associated with over-interpretation of word meaning at the semantic level and over-interpretation of relevance at the level of pragmatic competence, whereas autism is associated with overly literal interpretation of word meaning and under-interpretation of social relevance at the pragmatic level. Both appear to emerge from a disruption of the ability to interpret layers of meaning and their relations to each other. This may be a consequence of disequilibrium in the processing of local and global features related to disorganization of minicolumnar units of processing.
Affiliation(s)
- Steven A Chance
- Neuropathology, Nuffield Department of Clinical Neurosciences, Neuroanatomy and Cognition Group, University of Oxford, Oxford, UK
9
Deschamps I, Tremblay P. Sequencing at the syllabic and supra-syllabic levels during speech perception: an fMRI study. Front Hum Neurosci 2014; 8:492. PMID: 25071521; PMCID: PMC4086203; DOI: 10.3389/fnhum.2014.00492.
Abstract
The processing of fluent speech involves complex computational steps that begin with the segmentation of the continuous flow of speech sounds into syllables and words. One question that naturally arises pertains to the type of syllabic information that speech processes act upon. Here, we used functional magnetic resonance imaging, combining whole-brain and exploratory anatomical region-of-interest (ROI) approaches, to identify regions sensitive to syllabic information during speech perception by parametrically manipulating syllabic complexity along two dimensions: (1) individual syllable complexity and (2) sequence (supra-syllabic) complexity. We manipulated syllable complexity by using the simplest syllable template, a consonant and vowel (CV), and inserting an additional consonant to create a complex onset (CCV). Supra-syllabic complexity was manipulated by creating sequences composed of the same syllable repeated six times (e.g., /pa-pa-pa-pa-pa-pa/) and sequences of three different syllables each repeated twice (e.g., /pa-ta-ka-pa-ta-ka/). This parametric design allowed us to identify brain regions sensitive to (1) syllabic complexity independent of supra-syllabic complexity, (2) supra-syllabic complexity independent of syllabic complexity, and (3) both syllabic and supra-syllabic complexity. High-resolution scans were acquired for 15 healthy adults. An exploratory anatomical ROI analysis of the supratemporal plane (STP) identified bilateral regions within the anterior two-thirds of the planum temporale, the primary auditory cortices, as well as the anterior two-thirds of the superior temporal gyrus, that showed different patterns of sensitivity to syllabic and supra-syllabic information. These findings demonstrate that during passive listening to syllable sequences, sublexical information is processed automatically, and sensitivity to syllabic and supra-syllabic information is localized almost exclusively within the STP.
Affiliation(s)
- Isabelle Deschamps
- Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre de recherche de l'Institut universitaire en santé mentale de Québec, Québec City, QC, Canada
- Pascale Tremblay
- Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre de recherche de l'Institut universitaire en santé mentale de Québec, Québec City, QC, Canada
10
Simmonds AJ, Wise RJS, Collins C, Redjep O, Sharp DJ, Iverson P, Leech R. Parallel systems in the control of speech. Hum Brain Mapp 2013; 35:1930-43. PMID: 23723184; DOI: 10.1002/hbm.22303.
Abstract
Modern neuroimaging techniques have advanced our understanding of the distributed anatomy of speech production, beyond that inferred from clinico-pathological correlations. However, much remains unknown about functional interactions between anatomically distinct components of this speech production network. One reason for this is the need to separate spatially overlapping neural signals supporting diverse cortical functions. We took three separate human functional magnetic resonance imaging (fMRI) datasets (two speech production, one "rest"). In each we decomposed the neural activity within the left posterior perisylvian speech region into discrete components. This decomposition robustly identified two overlapping spatio-temporal components, one centered on the left posterior superior temporal gyrus (pSTG), the other on the adjacent ventral anterior parietal lobe (vAPL). The pSTG was functionally connected with bilateral superior temporal and inferior frontal regions, whereas the vAPL was connected with other parietal regions, lateral and medial. Surprisingly, the components displayed spatial anti-correlation, in which the negative functional connectivity of each component overlapped with the other component's positive functional connectivity, suggesting that these two systems operate separately and possibly in competition. The speech tasks reliably modulated activity in both pSTG and vAPL suggesting they are involved in speech production, but their activity patterns dissociate in response to different speech demands. These components were also identified in subjects at "rest" and not engaged in overt speech production. These findings indicate that the neural architecture underlying speech production involves parallel distinct components that converge within posterior peri-sylvian cortex, explaining, in part, why this region is so important for speech production.
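The abstract describes decomposing activity within a posterior perisylvian region into overlapping spatio-temporal components and then examining each component's functional connectivity. The sketch below illustrates one common way to do this, spatial ICA on a time-by-voxel matrix followed by seed-style connectivity maps for each component time course; ICA is an assumed stand-in here, not necessarily the decomposition the authors used, and the data are simulated.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_timepoints, n_voxels = 240, 5000      # hypothetical ROI: time points x voxels in the region

# Simulate two overlapping spatio-temporal components plus noise
time_courses = np.column_stack([np.sin(np.linspace(0, 20, n_timepoints)),
                                np.sign(np.sin(np.linspace(0, 7, n_timepoints)))])
spatial_maps = rng.normal(0, 1, (2, n_voxels))
data = time_courses @ spatial_maps + rng.normal(0, 0.5, (n_timepoints, n_voxels))

# Decompose the ROI data into components: time courses plus associated spatial maps
ica = FastICA(n_components=2, random_state=0)
est_time_courses = ica.fit_transform(data)   # (time x components)
est_maps = ica.mixing_.T                     # (components x voxels) spatial maps

def connectivity(tc, data):
    """Correlation of one component time course with every voxel's time series."""
    tc_z = (tc - tc.mean()) / tc.std()
    d_z = (data - data.mean(0)) / data.std(0)
    return d_z.T @ tc_z / len(tc)

fc1 = connectivity(est_time_courses[:, 0], data)
fc2 = connectivity(est_time_courses[:, 1], data)
# Spatial anti-correlation of the two connectivity maps, as in the comparison described above
print("spatial correlation of connectivity maps:", np.corrcoef(fc1, fc2)[0, 1])
```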
Affiliation(s)
- Anna J Simmonds
- Computational, Cognitive and Clinical Neuroimaging Laboratory (C3NL), Division of Brain Sciences, Imperial College London, United Kingdom
11
Tremblay P, Dick AS, Small SL. Functional and structural aging of the speech sensorimotor neural system: functional magnetic resonance imaging evidence. Neurobiol Aging 2013; 34:1935-51. PMID: 23523270; DOI: 10.1016/j.neurobiolaging.2013.02.004.
Abstract
The ability to perceive and produce speech undergoes important changes in late adulthood. The goal of the present study was to characterize functional and structural age-related differences in the cortical network that supports speech perception and production, using magnetic resonance imaging, as well as the relationship between functional and structural age-related changes occurring in this network. We asked young and older adults to observe videos of a speaker producing single words (perception), and to observe and repeat the words produced (production). Results show a widespread bilateral network of brain activation for Perception and Production that was not correlated with age. In addition, several regions did show age-related change (auditory cortex, planum temporale, superior temporal sulcus, premotor cortices, SMA-proper). Examination of the relationship between brain signal and regional and global gray matter volume and cortical thickness revealed a complex set of structure-function relationships, with some regions showing such a relationship and others not. The present results provide novel findings about the neurobiology of aging and verbal communication.
Affiliation(s)
- Pascale Tremblay
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec, Department of Rehabilitation, Université Laval, Québec City, Québec, Canada.
12
Fridriksson J, Hubbard HI, Hudspeth SG, Holland AL, Bonilha L, Fromm D, Rorden C. Speech entrainment enables patients with Broca's aphasia to produce fluent speech. Brain 2012; 135:3815-29. PMID: 23250889; PMCID: PMC3525061; DOI: 10.1093/brain/aws301.
Abstract
A distinguishing feature of Broca's aphasia is non-fluent halting speech typically involving one to three words per utterance. Yet, despite such profound impairments, some patients can mimic audio-visual speech stimuli enabling them to produce fluent speech in real time. We call this effect 'speech entrainment' and reveal its neural mechanism as well as explore its usefulness as a treatment for speech production in Broca's aphasia. In Experiment 1, 13 patients with Broca's aphasia were tested in three conditions: (i) speech entrainment with audio-visual feedback where they attempted to mimic a speaker whose mouth was seen on an iPod screen; (ii) speech entrainment with audio-only feedback where patients mimicked heard speech; and (iii) spontaneous speech where patients spoke freely about assigned topics. The patients produced a greater variety of words using audio-visual feedback compared with audio-only feedback and spontaneous speech. No difference was found between audio-only feedback and spontaneous speech. In Experiment 2, 10 of the 13 patients included in Experiment 1 and 20 control subjects underwent functional magnetic resonance imaging to determine the neural mechanism that supports speech entrainment. Group results with patients and controls revealed greater bilateral cortical activation for speech produced during speech entrainment compared with spontaneous speech at the junction of the anterior insula and Brodmann area 47, in Brodmann area 37, and unilaterally in the left middle temporal gyrus and the dorsal portion of Broca's area. Probabilistic white matter tracts constructed for these regions in the normal subjects revealed a structural network connected via the corpus callosum and ventral fibres through the extreme capsule. Unilateral areas were connected via the arcuate fasciculus. In Experiment 3, all patients included in Experiment 1 participated in a 6-week treatment phase using speech entrainment to improve speech production. Behavioural and functional magnetic resonance imaging data were collected before and after the treatment phase. Patients were able to produce a greater variety of words with and without speech entrainment at 1 and 6 weeks after training. Treatment-related decrease in cortical activation associated with speech entrainment was found in areas of the left posterior-inferior parietal lobe. We conclude that speech entrainment allows patients with Broca's aphasia to double their speech output compared with spontaneous speech. Neuroimaging results suggest that speech entrainment allows patients to produce fluent speech by providing an external gating mechanism that yokes a ventral language network that encodes conceptual aspects of speech. Preliminary results suggest that training with speech entrainment improves speech production in Broca's aphasia providing a potential therapeutic method for a disorder that has been shown to be particularly resistant to treatment.
Affiliation(s)
- Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, USA.