1. Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024; 36:2184-2207. PMID: 39023366; DOI: 10.1162/jocn_a_02224.
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine if sublexical SPiN performance can be enhanced by applying TMS to three regions involved in processing speech: the left posterior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas this association was null in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliations:
- Valérie Brisson, Pascale Tremblay: Université Laval, School of Rehabilitation Sciences, Québec, Canada; Centre de recherche CERVO, Québec, Canada
2. Dole M, Vilain C, Haldin C, Baciu M, Cousin E, Lamalle L, Lœvenbruck H, Vilain A, Schwartz JL. Comparing the selectivity of vowel representations in cortical auditory vs. motor areas: A repetition-suppression study. Neuropsychologia 2022; 176:108392. DOI: 10.1016/j.neuropsychologia.2022.108392.
3. Amateur singing benefits speech perception in aging under certain conditions of practice: behavioural and neurobiological mechanisms. Brain Struct Funct 2022; 227:943-962. PMID: 35013775; DOI: 10.1007/s00429-021-02433-2.
Abstract
Limited evidence has shown that practising musical activities in aging, such as choral singing, could lessen age-related speech perception in noise (SPiN) difficulties. However, the robustness and underlying mechanism of action of this phenomenon remain unclear. In this study, we used surface-based morphometry combined with a moderated mediation analytic approach to examine whether singing-related plasticity in auditory and dorsal speech stream regions is associated with better SPiN capabilities. Thirty-six choral singers and 36 non-singers aged 20-87 years underwent cognitive, auditory, and SPiN assessments. Our results provide important new insights into experience-dependent plasticity by revealing that, under certain conditions of practice, amateur choral singing is associated with age-dependent structural plasticity within auditory and dorsal speech regions, which in turn is associated with better SPiN performance in aging. Specifically, the conditions of practice that were associated with benefits on SPiN included frequent weekly practice at home, several hours of weekly group singing practice, singing in multiple languages, and having received formal singing training. These results suggest that amateur choral singing is associated with improved SPiN through a dual mechanism involving auditory processing and auditory-motor integration and may be dose dependent, with more intense singing associated with greater benefit. Our results thus reveal that the relationship between singing practice and SPiN is complex, and underscore the importance of considering singing practice behaviours in understanding the effects of musical activities on the brain-behaviour relationship.
4. Brisson V, Tremblay P. Improving speech perception in noise in young and older adults using transcranial magnetic stimulation. Brain Lang 2021; 222:105009. PMID: 34425411; DOI: 10.1016/j.bandl.2021.105009.
Abstract
Normal aging is associated with speech perception in noise (SPiN) difficulties. The objective of this study was to determine if SPiN performance can be enhanced by intermittent theta-burst stimulation (iTBS) in young and older adults.
Method: We developed a sub-lexical SPiN test to evaluate the contribution of age, hearing, and cognition to SPiN performance in young and older adults. iTBS was applied to the left posterior superior temporal sulcus (pSTS) and the left ventral premotor cortex (PMv) to examine its impact on SPiN performance.
Results: Aging was associated with reduced SPiN accuracy. TMS-induced performance gain was greater after stimulation of the PMv than of the pSTS. Participants with lower scores in the baseline condition improved the most.
Discussion: SPiN difficulties can be reduced by enhancing activity within the left speech-processing network. This study paves the way for the development of TMS-based interventions to reduce SPiN difficulties in adults.
Affiliations:
- Valérie Brisson, Pascale Tremblay: Département de réadaptation, Université Laval, Québec, Canada; Centre de recherche CERVO, Québec, Canada
5. Tremblay P, Brisson V, Deschamps I. Brain aging and speech perception: Effects of background noise and talker variability. Neuroimage 2020; 227:117675. PMID: 33359849; DOI: 10.1016/j.neuroimage.2020.117675.
Abstract
Speech perception can be challenging, especially for older adults. Despite the importance of speech perception in social interactions, the mechanisms underlying these difficulties remain unclear and treatment options are scarce. Several studies have suggested that decline within cortical auditory regions may be a hallmark of these difficulties; however, a growing number of studies have reported decline in regions beyond the auditory processing network, including regions involved in speech processing and executive control, suggesting a potentially diffuse underlying neural disruption, with no consensus regarding the underlying dysfunctions. To address this issue, we conducted two experiments investigating age differences in speech perception under manipulations of background noise and talker variability, two factors known to be detrimental to speech perception. In Experiment 1, we examined the relationship between speech perception, hearing, and auditory attention in 88 healthy participants aged 19 to 87 years. In Experiment 2, we examined cortical thickness and BOLD signal using magnetic resonance imaging (MRI) and related these measures to speech perception performance using a simple mediation approach in 32 participants from Experiment 1. Our results show that, even after accounting for hearing thresholds and two measures of auditory attention, speech perception declined significantly with age. Age-related decline in speech perception in noise was associated with thinner cortex in auditory and speech processing regions (including the superior temporal cortex, ventral premotor cortex, and inferior frontal gyrus) as well as in regions involved in executive control (including the dorsal anterior insula, the anterior cingulate cortex, and the medial frontal cortex).
Furthermore, speech perception performance was associated with a reduced response in the right superior temporal cortex in older compared with younger adults, and with an increased response to noise in older adults in the left anterior temporal cortex. Talker variability was not associated with different activation patterns in older compared with younger adults. Together, these results support the notion of a diffuse rather than a focal dysfunction underlying speech perception in noise difficulties in older adults.
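The "simple mediation approach" used in this abstract decomposes a total effect into a direct and an indirect (mediated) path. A minimal sketch with hypothetical simulated data (the variable names and generative model are illustrative, not the study's data or pipeline):

```python
import random
from statistics import mean

def slope(x, y):
    """OLS slope of y regressed on x (with intercept)."""
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

def residuals(x, y):
    """Residuals of y after regressing out x (with intercept)."""
    b = slope(x, y)
    a = mean(y) - b * mean(x)
    return [yi - (a + b * xi) for xi, yi in zip(x, y)]

random.seed(0)
n = 200
age = [random.uniform(19, 87) for _ in range(n)]
# Hypothetical generative story: cortical thickness declines with age,
# and SPiN performance tracks thickness.
thickness = [3.0 - 0.01 * a + random.gauss(0, 0.1) for a in age]
spin = [50.0 + 20.0 * t + random.gauss(0, 2.0) for t in thickness]

a_path = slope(age, thickness)             # path a: age -> mediator
b_path = slope(residuals(age, thickness),  # path b: mediator -> outcome,
               residuals(age, spin))       # controlling for age (Frisch-Waugh)
total = slope(age, spin)                   # total effect c of age on SPiN
indirect = a_path * b_path                 # mediated effect a*b
direct = total - indirect                  # direct effect c' = c - a*b
```

A negative indirect effect here would mirror the abstract's finding: part of the age-related decline in performance runs through cortical structure rather than age per se.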
Affiliations:
- Pascale Tremblay, Valérie Brisson: CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada
6. Grabski K, Sato M. Adaptive phonemic coding in the listening and speaking brain. Neuropsychologia 2020; 136:107267. DOI: 10.1016/j.neuropsychologia.2019.107267.
7. Yen M, DeMarco AT, Wilson SM. Adaptive paradigms for mapping phonological regions in individual participants. Neuroimage 2019; 189:368-379. PMID: 30665008; DOI: 10.1016/j.neuroimage.2019.01.040.
Abstract
Phonological encoding depends on left-lateralized regions in the supramarginal gyrus and the ventral precentral gyrus. Localization of these phonological regions in individual participants (including individuals with language impairments) is important in several research and clinical contexts. To localize these regions, we developed two paradigms that load on phonological encoding: a rhyme judgment task and a syllable counting task. Both paradigms relied on an adaptive staircase design to ensure that each individual performed each task at a similarly challenging level. The goal of this study was to assess the validity and reliability of the two paradigms, in terms of their ability to consistently produce left-lateralized activations of the supramarginal gyrus and ventral precentral gyrus in neurologically normal individuals with presumptively normal language localization. Sixteen participants were scanned with fMRI as they performed the rhyme judgment paradigm, the syllable counting paradigm, and an adaptive semantic paradigm that we have described previously. We found that the rhyme and syllable paradigms both yielded left-lateralized supramarginal and ventral precentral activations in the majority of participants. The rhyme paradigm produced more lateralized and more reliable activations, and so should be favored in future applications. In contrast, the semantic paradigm did not reveal supramarginal or precentral activations in most participants, suggesting that the recruitment of these regions is indeed driven by phonological encoding, not language processing in general. In sum, the adaptive rhyme judgment paradigm was effective in localizing left-lateralized phonological encoding regions in individual participants, and, in conjunction with the adaptive semantic paradigm, can be used to map individual language networks.
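The adaptive staircase idea above (keeping every participant near the same accuracy by making trials harder after correct answers and easier after errors) can be sketched with a generic 2-down/1-up rule. This is an illustration of the technique, not the authors' exact procedure or parameters:

```python
class Staircase:
    """Generic 2-down/1-up adaptive staircase. Difficulty increases only
    after two consecutive correct responses and decreases after every
    error, so accuracy converges near 70.7% correct."""

    def __init__(self, level=10, step=1):
        self.level = level   # higher level = harder trial (e.g., more noise)
        self.step = step
        self.streak = 0      # consecutive correct responses so far

    def update(self, correct):
        if correct:
            self.streak += 1
            if self.streak == 2:          # two in a row: make it harder
                self.level += self.step
                self.streak = 0
        else:                             # any error: make it easier
            self.level = max(0, self.level - self.step)
            self.streak = 0
        return self.level
```

In practice the experiment feeds each trial's correctness into `update` and estimates the threshold from the levels at the last several reversals.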
Affiliations:
- Melodie Yen, Stephen M Wilson: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Andrew T DeMarco: Department of Neurology, Georgetown University Medical Center, Washington, DC, USA
8
|
Tremblay P, Perron M, Deschamps I, Kennedy‐Higgins D, Houde J, Dick AS, Descoteaux M. The role of the arcuate and middle longitudinal fasciculi in speech perception in noise in adulthood. Hum Brain Mapp 2019; 40:226-241. [PMID: 30277622 PMCID: PMC6865648 DOI: 10.1002/hbm.24367] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/31/2018] [Revised: 08/07/2018] [Accepted: 08/08/2018] [Indexed: 12/13/2022] Open
Abstract
In this article, we used High Angular Resolution Diffusion Imaging (HARDI) with advanced anatomically constrained particle filtering tractography to investigate the role of the arcuate fasciculus (AF) and the middle longitudinal fasciculus (MdLF) in speech perception in noise in younger and older adults. Fourteen younger and 15 older adults completed a syllable discrimination task in the presence of broadband masking noise. Mediation analyses revealed few effects of age on white matter (WM) in these fascicles but broad effects of WM on speech perception, independently of age, especially in terms of sensitivity and criterion (response bias), after controlling for individual differences in hearing sensitivity and head size. Indirect (mediated) effects of age on speech perception through WM microstructure were also found, again controlling for hearing sensitivity and head size, with AF microstructure related to sensitivity, response bias, and phonological priming, and MdLF microstructure more strongly related to response bias. These findings suggest that pathways of the perisylvian region contribute to speech processing abilities, with relatively distinct contributions for the AF (sensitivity) and MdLF (response bias), indicative of a complex contribution of both phonological and cognitive processes to age-related speech perception decline. These results provide new and important insights into the roles of these pathways as well as the factors that may contribute to speech perception deficits in older adults. They also highlight the need for a greater focus on the role of WM microstructure in understanding cognitive aging.
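"Sensitivity" and "criterion (response bias)" in this abstract are standard signal detection theory measures. A minimal sketch of how they are computed from a response table, using a common log-linear correction for extreme rates (this is generic SDT, not the authors' exact analysis pipeline):

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Return (d', c): sensitivity and criterion from response counts.
    The log-linear correction (+0.5 to each cell) keeps rates away from
    0 and 1, which would otherwise give infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion
```

For symmetric performance the criterion comes out near zero; positive values indicate a conservative bias (fewer "different" responses), which is the kind of response-bias measure the MdLF findings relate to.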
Affiliations:
- Pascale Tremblay, Isabelle Deschamps: CERVO Brain Research Center, Quebec City, Canada; Département de Réadaptation, Faculté de Médecine, Université Laval, Quebec City, Canada
- Dan Kennedy-Higgins: CERVO Brain Research Center, Quebec City, Canada; Department of Speech, Hearing and Phonetic Sciences, University College London, United Kingdom
- Jean-Christophe Houde, Maxime Descoteaux: Sherbrooke Connectivity Imaging Lab, Département d'informatique, Faculté des Sciences, Université de Sherbrooke, Sherbrooke, Canada
9. Thornton D, Harkrider AW, Jenson D, Saltuklaroglu T. Sensorimotor activity measured via oscillations of EEG mu rhythms in speech and non-speech discrimination tasks with and without segmentation demands. Brain Lang 2018; 187:62-73. PMID: 28431691; DOI: 10.1016/j.bandl.2017.03.011.
Abstract
Better understanding of the role of sensorimotor processing in speech and non-speech segmentation can be achieved with more temporally precise measures. Twenty adults made same/different discriminations of speech and non-speech stimulus pairs, with and without segmentation demands. Independent component analysis of 64-channel EEG data revealed clear sensorimotor mu components, with characteristic alpha and beta peaks, localized to premotor regions in 70% of participants. Time-frequency analyses of mu components from accurate trials showed that (1) segmentation tasks elicited greater event-related synchronization immediately following offset of the first stimulus, suggestive of inhibitory activity; (2) all conditions showed strong late event-related desynchronization, suggesting that working memory/covert replay contributed substantially to sensorimotor activity; and (3) speech stimuli elicited stronger beta desynchronization than non-speech stimuli during stimulus presentation, suggesting stronger auditory-motor transforms for speech. Findings support the continued use of oscillatory approaches to help understand segmentation and other cognitive tasks.
Affiliations:
- David Thornton, David Jenson: University of Tennessee Health Science Center, United States
10. Saltuklaroglu T, Harkrider AW, Thornton D, Jenson D, Kittilstved T. EEG Mu (µ) rhythm spectra and oscillatory activity differentiate stuttering from non-stuttering adults. Neuroimage 2017; 153:232-245. PMID: 28400266; PMCID: PMC5569894; DOI: 10.1016/j.neuroimage.2017.04.022.
Abstract
Stuttering is linked to sensorimotor deficits related to internal modeling mechanisms. This study compared spectral power and oscillatory activity of EEG mu (μ) rhythms between persons who stutter (PWS) and controls in listening and auditory discrimination tasks. EEG data were analyzed from passive listening in noise and accurate (same/different) discrimination of tones or syllables in quiet and noisy backgrounds. Independent component analysis identified left and/or right μ rhythms with characteristic alpha (α) and beta (β) peaks localized to premotor/motor regions in 23 of 27 PWS and 24 of 27 controls. PWS produced μ spectra with reduced β amplitudes across conditions, suggesting reduced forward modeling capacity. Group time-frequency differences were associated with noisy conditions only. PWS showed increased μ-β desynchronization when listening to noise and early in discrimination events, suggesting heightened motor activity that might be related to forward modeling deficits. PWS also showed reduced μ-α synchronization in discrimination conditions, indicating reduced sensory gating. Together these findings indicate that spectral and oscillatory analyses of μ rhythms are sensitive to stuttering. More specifically, they can reveal stuttering-related sensorimotor processing differences in listening and auditory discrimination that may also be influenced by basal ganglia deficits.
Affiliations:
- Tim Saltuklaroglu, Ashley W Harkrider, David Thornton, David Jenson, Tiffani Kittilstved: University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
11. Treille A, Vilain C, Hueber T, Lamalle L, Sato M. Inside Speech: Multisensory and Modality-specific Processing of Tongue and Lip Speech Actions. J Cogn Neurosci 2017; 29:448-466. DOI: 10.1162/jocn_a_01057.
Abstract
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse sampling fMRI study, we determined to which extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because an interlocutor's tongue movements are accessible through their impact on speech acoustics but not visible, since the tongue sits inside the vocal tract, whereas lip movements are both "audible" and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with reaction times for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.
Affiliations:
- Laurent Lamalle: Université Grenoble-Alpes & CHU de Grenoble; CNRS UMS 3552, Grenoble, France
- Marc Sato: CNRS UMR 7309 & Aix-Marseille Université
12. Rosenblum LD, Dorsi J, Dias JW. The Impact and Status of Carol Fowler's Supramodal Theory of Multisensory Speech Perception. Ecol Psychol 2016. DOI: 10.1080/10407413.2016.1230373.
13. Kleinschmidt DF, Jaeger TF. Re-examining selective adaptation: Fatiguing feature detectors, or distributional learning? Psychon Bull Rev 2016; 23:678-691. PMID: 26438255; PMCID: PMC4821823; DOI: 10.3758/s13423-015-0943-z.
Abstract
When a listener hears many good examples of a /b/ in a row, they are less likely to classify other sounds on, e.g., a /b/-to-/d/ continuum as /b/. This phenomenon is known as selective adaptation and is a well-studied property of speech perception. Traditionally, selective adaptation is seen as a mechanistic property of the speech perception system, and attributed to fatigue in acoustic-phonetic feature detectors. However, recent developments in our understanding of non-linguistic sensory adaptation and higher-level adaptive plasticity in speech perception and language comprehension suggest that it is time to revisit the phenomenon of selective adaptation. We argue that selective adaptation is better thought of as a computational property of the speech perception system. Drawing on a common thread in recent work on both non-linguistic sensory adaptation and plasticity in language comprehension, we furthermore propose that selective adaptation can be seen as a consequence of distributional learning across multiple levels of representation. This proposal opens up new questions for research on selective adaptation itself, and also suggests that selective adaptation can be an important bridge between work on adaptation in low-level sensory systems and the complicated plasticity of the adult language comprehension system.
Affiliations:
- Dave F Kleinschmidt: Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
- T Florian Jaeger: Departments of Brain and Cognitive Sciences, Computer Science, and Linguistics, University of Rochester, Rochester, NY, USA
14. Alho J, Green BM, May PJC, Sams M, Tiitinen H, Rauschecker JP, Jääskeläinen IP. Early-latency categorical speech sound representations in the left inferior frontal gyrus. Neuroimage 2016; 129:214-223. PMID: 26774614; DOI: 10.1016/j.neuroimage.2016.01.016.
Abstract
Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.
Affiliations:
- Jussi Alho, Mikko Sams, Hannu Tiitinen: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 Aalto, Espoo, Finland
- Brannon M Green: Laboratory of Integrated Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA
- Patrick J C May: Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, D-39118 Magdeburg, Germany
- Josef P Rauschecker: Brain and Mind Laboratory, NBE, Aalto University, Espoo, Finland; Laboratory of Integrated Neuroscience and Cognition, Georgetown University Medical Center, Washington, DC 20057, USA; Institute for Advanced Study, TUM, Munich-Garching, 80333 Munich, Germany
- Iiro P Jääskeläinen: Brain and Mind Laboratory, NBE, Aalto University, Espoo, Finland; MEG Core and AMI Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
15. Hartwigsen G, Weigel A, Schuschan P, Siebner HR, Weise D, Classen J, Saur D. Dissociating Parieto-Frontal Networks for Phonological and Semantic Word Decisions: A Condition-and-Perturb TMS Study. Cereb Cortex 2015; 26:2590-2601. PMID: 25953770; DOI: 10.1093/cercor/bhv092.
Affiliations:
- Gesa Hartwigsen: Language and Aphasia Laboratory, Department of Neurology, University of Leipzig, D-04103 Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, D-04103 Leipzig, Germany; Department of Psychology, Christian-Albrechts-University, D-24118 Kiel, Germany
- Anni Weigel, Paul Schuschan: Language and Aphasia Laboratory, Department of Neurology, University of Leipzig, D-04103 Leipzig, Germany
- Hartwig R Siebner: Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark
- David Weise, Joseph Classen: Human Cortical Physiology and Motor Control Laboratory, Department of Neurology, University of Leipzig, D-04103 Leipzig, Germany
- Dorothee Saur: Language and Aphasia Laboratory, Department of Neurology, University of Leipzig, D-04103 Leipzig, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, D-04103 Leipzig, Germany
16. Jenson D, Bowers AL, Harkrider AW, Thornton D, Cuellar M, Saltuklaroglu T. Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data. Front Psychol 2014; 5:656. PMID: 25071633; PMCID: PMC4091311; DOI: 10.3389/fpsyg.2014.00656.
Abstract
Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen and 15 of 20 participants produced left and right μ-components, respectively, localized to precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production.
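The event-related spectral perturbations analyzed above are, at their core, changes in band power relative to a baseline, conventionally expressed in dB. A naive single-channel sketch (an illustrative DFT-based estimate, not the study's EEGLAB/ICA pipeline):

```python
import math

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi] Hz band via a naive DFT (O(n^2); real
    pipelines use FFTs, wavelets, or multitaper estimates)."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        if lo <= k * fs / n <= hi:
            re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(signal))
            power += (re * re + im * im) / n
    return power

def ersp_db(task_power, baseline_power):
    """ERSP in dB: negative values are event-related desynchronization
    (ERD), positive values are event-related synchronization (ERS)."""
    return 10.0 * math.log10(task_power / baseline_power)

# A pure 10 Hz "alpha-like" oscillation sampled at 100 Hz for one second:
fs = 100
alpha_wave = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
```

A task-window power of half the baseline value, for instance, comes out near -3 dB, i.e., desynchronization of the kind reported before and during production.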
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Andrew L. Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Megan Cuellar
- Speech-Language Pathology Program, College of Health Sciences, Midwestern University, Chicago, IL, USA
- Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
17
Deschamps I, Tremblay P. Sequencing at the syllabic and supra-syllabic levels during speech perception: an fMRI study. Front Hum Neurosci 2014; 8:492. [PMID: 25071521] [PMCID: PMC4086203] [DOI: 10.3389/fnhum.2014.00492]
Abstract
The processing of fluent speech involves complex computational steps that begin with the segmentation of the continuous flow of speech sounds into syllables and words. One question that naturally arises pertains to the type of syllabic information that speech processes act upon. Here, we used functional magnetic resonance imaging to profile regions, using a combination of whole-brain and exploratory anatomical region-of-interest (ROI) approaches, that were sensitive to syllabic information during speech perception by parametrically manipulating syllabic complexity along two dimensions: (1) individual syllable complexity, and (2) sequence complexity (supra-syllabic). We manipulated the complexity of the syllable by using the simplest syllable template, a consonant-vowel (CV) syllable, and inserting an additional consonant to create a complex onset (CCV). The supra-syllabic complexity was manipulated by creating sequences composed of the same syllable repeated six times (e.g., /pa-pa-pa-pa-pa-pa/) and sequences of three different syllables each repeated twice (e.g., /pa-ta-ka-pa-ta-ka/). This parametric design allowed us to identify brain regions sensitive to (1) syllabic complexity independent of supra-syllabic complexity, (2) supra-syllabic complexity independent of syllabic complexity and, (3) both syllabic and supra-syllabic complexity. High-resolution scans were acquired for 15 healthy adults. An exploratory anatomical ROI analysis of the supratemporal plane (STP) identified bilateral regions within the anterior two-thirds of the planum temporale, the primary auditory cortices, as well as the anterior two-thirds of the superior temporal gyrus that showed different patterns of sensitivity to syllabic and supra-syllabic information. These findings demonstrate that during passive listening of syllable sequences, sublexical information is processed automatically, and sensitivity to syllabic and supra-syllabic information is localized almost exclusively within the STP.
Affiliation(s)
- Isabelle Deschamps
- Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre de recherche de l'Institut universitaire en santé mentale de Québec, Québec City, QC, Canada
- Pascale Tremblay
- Département de Réadaptation, Université Laval, Québec City, QC, Canada; Centre de recherche de l'Institut universitaire en santé mentale de Québec, Québec City, QC, Canada
18
Scarbel L, Beautemps D, Schwartz JL, Sato M. The shadow of a doubt? Evidence for perceptuo-motor linkage during auditory and audiovisual close-shadowing. Front Psychol 2014; 5:568. [PMID: 25009512] [PMCID: PMC4068292] [DOI: 10.3389/fpsyg.2014.00568]
Abstract
One classical argument in favor of a functional role of the motor system in speech perception comes from the close-shadowing task, in which a subject has to identify and repeat an auditory speech stimulus as quickly as possible. The fact that close-shadowing can occur very rapidly, and much faster than manual identification of the speech target, is taken to suggest that perceptually induced speech representations are already shaped in a motor-compatible format. Another argument is provided by audiovisual interactions, often interpreted as referring to a multisensory-motor framework. In this study, we attempted to combine these two paradigms by testing whether the visual modality could speed motor responses in a close-shadowing task. To this end, both oral and manual responses were evaluated during the perception of auditory and audiovisual speech stimuli, clear or embedded in white noise. Overall, oral responses were faster than manual ones, but they were also less accurate in noise, which suggests that motor representations evoked by the speech input could be rough at a first processing stage. In the presence of acoustic noise, the audiovisual modality led to both faster and more accurate responses than the auditory modality. No interaction was, however, observed between modality and response type. Altogether, these results are interpreted within a two-stage sensory-motor framework, in which the auditory and visual streams are integrated together and with internally generated motor representations before a final decision becomes available.
Affiliation(s)
- Lucie Scarbel
- CNRS, Grenoble Images Parole Signal Automatique-Lab, Speech and Cognition Department, UMR 5216, Grenoble University, Grenoble, France
- Denis Beautemps
- CNRS, Grenoble Images Parole Signal Automatique-Lab, Speech and Cognition Department, UMR 5216, Grenoble University, Grenoble, France
- Jean-Luc Schwartz
- CNRS, Grenoble Images Parole Signal Automatique-Lab, Speech and Cognition Department, UMR 5216, Grenoble University, Grenoble, France
- Marc Sato
- CNRS, Grenoble Images Parole Signal Automatique-Lab, Speech and Cognition Department, UMR 5216, Grenoble University, Grenoble, France
19
Bowers AL, Saltuklaroglu T, Harkrider A, Wilson M, Toner MA. Dynamic modulation of shared sensory and motor cortical rhythms mediates speech and non-speech discrimination performance. Front Psychol 2014; 5:366. [PMID: 24847290] [PMCID: PMC4019855] [DOI: 10.3389/fpsyg.2014.00366]
Abstract
Oscillatory models of speech processing have proposed that rhythmic cortical oscillations in sensory and motor regions modulate speech sound processing from the bottom-up via phase reset at low frequencies (3-10 Hz) and from the top-down via the disinhibition of alpha/beta rhythms (8-30 Hz). To investigate how the proposed rhythms mediate perceptual performance, electroencephalography (EEG) was recorded while participants passively listened to or actively identified speech and tone-sweeps in a forced-choice, in-noise discrimination task presented at high and low signal-to-noise ratios. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB. Left and right hemisphere sensorimotor and posterior temporal lobe clusters were identified. Alpha and beta suppression was associated with active tasks only in sensorimotor and temporal clusters. In posterior temporal clusters, increases in phase reset at low frequencies were driven by the quality of bottom-up acoustic information for speech and non-speech stimuli, whereas phase reset in sensorimotor clusters was associated with top-down active task demands. A comparison of correct discrimination trials to those identified at chance showed an earlier performance-related effect for the left sensorimotor cluster relative to the left temporal lobe cluster during the syllable discrimination task only. The right sensorimotor cluster was associated with performance-related differences for tone-sweep stimuli only. Findings are consistent with internal model accounts suggesting that early efferent sensorimotor models transmitted along alpha and beta channels reflect a release from inhibition related to active attention to auditory discrimination. Results are discussed in the broader context of dynamic, oscillatory models of cognition proposing that top-down internally generated states interact with bottom-up sensory processing to enhance task performance.
Affiliation(s)
- Andrew L Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
- Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Ashley Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Matt Wilson
- School of Allied Health, Northern Illinois University, DeKalb, IL, USA
- Mary A Toner
- Department of Communication Disorders, University of Arkansas, Fayetteville, AR, USA
20
Alho J, Lin FH, Sato M, Tiitinen H, Sams M, Jääskeläinen IP. Enhanced neural synchrony between left auditory and premotor cortex is associated with successful phonetic categorization. Front Psychol 2014; 5:394. [PMID: 24834062] [PMCID: PMC4018533] [DOI: 10.3389/fpsyg.2014.00394]
Abstract
The cortical dorsal auditory stream has been proposed to mediate mapping between auditory and articulatory-motor representations in speech processing. Whether this sensorimotor integration contributes to speech perception remains an open question. Here, magnetoencephalography was used to examine connectivity between auditory and motor areas while subjects were performing a sensorimotor task involving speech sound identification and overt repetition. Functional connectivity was estimated with inter-areal phase synchrony of electromagnetic oscillations. Structural equation modeling was applied to determine the direction of information flow. Compared to passive listening, engagement in the sensorimotor task enhanced connectivity within 200 ms after sound onset bilaterally between the temporoparietal junction (TPJ) and ventral premotor cortex (vPMC), with the left-hemisphere connection showing directionality from vPMC to TPJ. Passive listening to noisy speech elicited stronger connectivity than clear speech between left auditory cortex (AC) and vPMC at ~100 ms, and between left TPJ and dorsal premotor cortex (dPMC) at ~200 ms. Information flow was estimated from AC to vPMC and from dPMC to TPJ. Connectivity strength among the left AC, vPMC, and TPJ correlated positively with the identification of speech sounds within 150 ms after sound onset, with information flowing from AC to TPJ, from AC to vPMC, and from vPMC to TPJ. Taken together, these findings suggest that sensorimotor integration mediates the categorization of incoming speech sounds through reciprocal auditory-to-motor and motor-to-auditory projections.
Affiliation(s)
- Jussi Alho
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland
- Fa-Hsuan Lin
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland; Institute of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Marc Sato
- Gipsa-Lab, Department of Speech and Cognition, French National Center for Scientific Research and Grenoble University, Grenoble, France
- Hannu Tiitinen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland
- Mikko Sams
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland; MEG Core, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland; AMI Centre, Aalto NeuroImaging, School of Science, Aalto University, Espoo, Finland
21
Bilodeau-Mercure M, Lortie CL, Sato M, Guitton MJ, Tremblay P. The neurobiology of speech perception decline in aging. Brain Struct Funct 2014; 220:979-97. [PMID: 24402675] [DOI: 10.1007/s00429-013-0695-3]
Abstract
Speech perception difficulties are common among older adults; yet the underlying neural mechanisms are still poorly understood. New empirical evidence suggesting that brain senescence may be an important contributor to these difficulties has challenged the traditional view that peripheral hearing loss is the main factor in their etiology. Here, we investigated the relationship between structural and functional brain senescence and speech perception skills in aging. Following audiometric evaluations, participants underwent MRI while performing a speech perception task at different intelligibility levels. As expected, speech perception declined with age, even after controlling for hearing sensitivity using an audiological measure (pure-tone averages) and a bioacoustical measure (DPOAE recordings). Our results reveal that the core speech network, centered on the supratemporal cortex and ventral motor areas bilaterally, decreased in spatial extent in older adults. Importantly, our results also show that speech skills in aging are affected by changes in cortical thickness and in brain functioning. Age-independent intelligibility effects were found in several motor and premotor areas, including the left ventral premotor cortex and the right supplementary motor area (SMA). Age-dependent intelligibility effects were also found, mainly in sensorimotor cortical areas and in the left dorsal anterior insula. In this region, changes in BOLD signal modulated the relationship between age and speech perception skills, suggesting a role for this region in maintaining speech perception at older ages. These results provide important new insights into the neurobiology of speech perception in aging.
Affiliation(s)
- Mylène Bilodeau-Mercure
- Centre de Recherche de l'Institut Universitaire en Santé Mentale de Québec, Quebec City, QC, G1J 2G3, Canada
22
Reiterer SM, Hu X, Sumathi TA, Singh NC. Are you a good mimic? Neuro-acoustic signatures for speech imitation ability. Front Psychol 2013; 4:782. [PMID: 24155739] [PMCID: PMC3804907] [DOI: 10.3389/fpsyg.2013.00782]
Abstract
We investigated individual differences in speech imitation ability in late bilinguals using a neuro-acoustic approach. One hundred and thirty-eight German-English bilinguals matched on various behavioral measures were tested for "speech imitation ability" in a foreign language, Hindi, and categorized into "high" and "low ability" groups. Brain activations and speech recordings were obtained from 26 participants from the two extreme groups as they performed a functional neuroimaging experiment that required them to "imitate" sentences in three conditions: (A) German, (B) English, and (C) German with a fake English accent. We used a recently developed acoustic analysis, the "articulation space," as a metric to compare the speech imitation abilities of the two groups. Across all three conditions, direct comparisons between the two groups revealed brain activations (FWE corrected, p < 0.05) that were more widespread, with significantly higher peak activity in the left supramarginal gyrus and postcentral areas, for the low ability group. The high ability group, on the other hand, showed significantly larger articulation space in all three conditions. In addition, articulation space correlated positively with imitation ability (Pearson's r = 0.7, p < 0.01). Our results suggest that an expanded articulation space for high ability individuals allows access to a larger repertoire of sounds, thereby providing skilled imitators greater flexibility in pronunciation and language learning.
Affiliation(s)
- Susanne M Reiterer
- Unit for Language Learning and Teaching Research, Faculty of Philological and Cultural Studies, University of Vienna, Vienna, Austria; Centre for Integrative Neuroscience and Hertie Institute for Clinical Brain Research, University Clinic Tübingen, Tübingen, Germany