1
Arya R, Ervin B, Greiner HM, Buroker J, Byars AW, Tenney JR, Arthur TM, Fong SL, Lin N, Frink C, Rozhkov L, Scholle C, Skoch J, Leach JL, Mangano FT, Glauser TA, Hickok G, Holland KD. Emotional facial expression and perioral motor functions of the human auditory cortex. Clin Neurophysiol 2024;163:102-111. PMID: 38729074; PMCID: PMC11176009; DOI: 10.1016/j.clinph.2024.04.017
Abstract
OBJECTIVE: We investigated the role of the transverse temporal gyrus and adjacent cortex (TTG+) in facial expressions and perioral movements.
METHODS: In 31 patients undergoing stereo-electroencephalography monitoring, we describe behavioral responses elicited by electrical stimulation within the TTG+. Task-induced high-gamma modulation (HGM), auditory evoked responses, and resting-state connectivity were used to investigate the cortical sites showing different types of responses on electrical stimulation.
RESULTS: Changes in facial expressions and perioral movements were elicited on electrical stimulation within the TTG+ in 9 (29%) and 10 (32%) patients, respectively, in addition to the more common language responses (naming interruptions, auditory hallucinations, paraphasic errors). All functional sites showed auditory task-induced HGM and evoked responses, validating their location within the auditory cortex; however, motor sites showed lower peak amplitudes and longer peak latencies than language sites. Significant first-degree connections for motor sites included the precentral, anterior cingulate, parahippocampal, and anterior insular gyri, whereas those for language sites included the posterior superior temporal, posterior middle temporal, inferior frontal, supramarginal, and angular gyri.
CONCLUSIONS: Multimodal data suggest that the TTG+ may participate in auditory-motor integration.
SIGNIFICANCE: The TTG+ likely participates in facial expressions in response to emotional cues during auditory discourse.
Affiliation(s)
- Ravindra Arya
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, USA.
- Brian Ervin
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Electrical Engineering and Computer Science, University of Cincinnati, Cincinnati, OH, USA
- Hansel M Greiner
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jason Buroker
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Anna W Byars
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Jeffrey R Tenney
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Todd M Arthur
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Susan L Fong
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Nan Lin
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Clayton Frink
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Leonid Rozhkov
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Craig Scholle
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Jesse Skoch
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neurosurgery, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- James L Leach
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neuroradiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Francesco T Mangano
- Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA; Division of Pediatric Neurosurgery, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Tracy A Glauser
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- Gregory Hickok
- Department of Cognitive Sciences, Department of Language Science, University of California, Irvine, CA, USA
- Katherine D Holland
- Comprehensive Epilepsy Center, Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
2
Lankinen K, Ahveninen J, Uluç I, Daneshzand M, Mareyam A, Kirsch JE, Polimeni JR, Healy BC, Tian Q, Khan S, Nummenmaa A, Wang QM, Green JR, Kimberley TJ, Li S. Role of articulatory motor networks in perceptual categorization of speech signals: a 7T fMRI study. Cereb Cortex 2023;33:11517-11525. PMID: 37851854; PMCID: PMC10724868; DOI: 10.1093/cercor/bhad384
Abstract
Speech and language processing involve complex interactions between cortical areas necessary for articulatory movements and auditory perception, and a range of areas through which these are connected and interact. Despite their fundamental importance, the precise mechanisms underlying these processes are not fully elucidated. We measured BOLD signals from normal-hearing participants using high-field 7 Tesla fMRI with 1-mm isotropic voxel resolution. The subjects performed two speech perception tasks (discrimination and classification) and a speech production task during the scan. By employing univariate and multivariate pattern analyses, we identified the neural signatures associated with speech production and perception. The left precentral, premotor, and inferior frontal cortex regions showed significant activations that correlated with phoneme category variability during the perceptual discrimination tasks. In addition, the perceived sound categories could be decoded from signals in a region of interest defined on the basis of activation in the production task. The results support the hypothesis that articulatory motor networks in the left hemisphere, typically associated with speech production, may also play a critical role in the perceptual categorization of syllables. The study provides valuable insights into the intricate neural mechanisms that underlie speech processing.
Affiliation(s)
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Işıl Uluç
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Mohammad Daneshzand
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Azma Mareyam
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- John E Kirsch
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Jonathan R Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Brian C Healy
- Partners Multiple Sclerosis Center, Brigham and Women's Hospital, Boston, MA 02115, United States
- Department of Neurology, Harvard Medical School, Boston, MA 02115, United States
- Biostatistics Center, Massachusetts General Hospital, Boston, MA 02114, United States
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Aapo Nummenmaa
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
- Qing Mei Wang
- Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, The Teaching Affiliate of Harvard Medical School, Charlestown, MA 02129, United States
- Jordan R Green
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA 02129, United States
- Teresa J Kimberley
- Department of Physical Therapy, School of Health and Rehabilitation Sciences, MGH Institute of Health Professions, Boston, MA 02129, United States
- Shasha Li
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Harvard Medical School, Boston, MA 02115, United States
3
Lankinen K, Ahveninen J, Uluç I, Daneshzand M, Mareyam A, Kirsch JE, Polimeni JR, Healy BC, Tian Q, Khan S, Nummenmaa A, Wang QM, Green JR, Kimberley TJ, Li S. Role of Articulatory Motor Networks in Perceptual Categorization of Speech Signals: A 7 T fMRI Study. bioRxiv [Preprint] 2023:2023.07.02.547409. PMID: 37461673; PMCID: PMC10349975; DOI: 10.1101/2023.07.02.547409
Abstract
BACKGROUND: The association between brain regions involved in speech production and those that play a role in speech perception is not yet fully understood. We compared speech production-related brain activity with activations resulting from perceptual categorization of syllables using high-field 7 Tesla functional magnetic resonance imaging (fMRI) at 1-mm isotropic voxel resolution, enabling high localization accuracy compared with previous studies.
METHODS: Blood oxygenation level dependent (BOLD) signals were obtained in 20 normal-hearing subjects using a simultaneous multi-slice (SMS) 7T echo-planar imaging (EPI) acquisition with whole-head coverage and 1-mm isotropic resolution. In a speech production localizer task, subjects were asked to produce a silent lip-rounded vowel /u/ in response to the visual cue "U" or to purse their lips when they saw the cue "P". In a phoneme discrimination task, subjects were presented with pairs of syllables, which were equiprobably identical or different, drawn from an 8-step continuum between the prototypic /ba/ and /da/ sounds. After the presentation of each stimulus pair, the subjects indicated whether the two syllables they heard were identical or different by pressing one of two buttons. In a phoneme classification task, the subjects heard a single syllable and were asked to indicate whether it was /ba/ or /da/.
RESULTS: Univariate fMRI analyses using a parametric modulation approach suggested that left motor, premotor, and frontal cortex BOLD activations correlate with phoneme category variability in the /ba/-/da/ discrimination task. In contrast, the variability related to acoustic features of the phonemes was highest in the right primary auditory cortex. Our multivariate pattern analysis (MVPA) suggested that left precentral/inferior frontal cortex areas, which were associated with speech production according to the localizer task, also play a role in perceptual categorization of the syllables.
CONCLUSIONS: The results support the hypothesis that articulatory motor networks in the left hemisphere that are activated during speech production could also have a role in perceptual categorization of syllables. Importantly, high voxel resolution combined with advanced coil technology allowed us to pinpoint the exact brain regions involved in both perception and production tasks.
Affiliation(s)
- Kaisu Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Işıl Uluç
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Mohammad Daneshzand
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Azma Mareyam
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- John E. Kirsch
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Jonathan R. Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Brian C. Healy
- Harvard Medical School, Boston, MA, US
- Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, the teaching affiliate of Harvard Medical School, Charlestown, MA, US
- Qiyuan Tian
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Aapo Nummenmaa
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
- Qing-mei Wang
- Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, the teaching affiliate of Harvard Medical School, Charlestown, MA, US
- Jordan R. Green
- Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, US
- Teresa J. Kimberley
- Department of Physical Therapy, School of Health and Rehabilitation Sciences, MGH Institute of Health Professions, Boston, MA, US
- Shasha Li
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Harvard Medical School, Boston, MA, US
4
Qu X, Wang Z, Cheng Y, Xue Q, Li Z, Li L, Feng L, Hartwigsen G, Chen L. Neuromodulatory effects of transcranial magnetic stimulation on language performance in healthy participants: Systematic review and meta-analysis. Front Hum Neurosci 2022;16:1027446. PMID: 36545349; PMCID: PMC9760723; DOI: 10.3389/fnhum.2022.1027446
Abstract
Background: The causal relationships between neural substrates and human language have been investigated with transcranial magnetic stimulation (TMS). However, the robustness of TMS neuromodulatory effects is still largely unspecified. This study aims to systematically examine the efficacy of TMS on healthy participants' language performance.
Methods: For this meta-analysis, we searched PubMed, Web of Science, PsycINFO, Scopus, and Google Scholar from database inception until October 15, 2022, for eligible TMS studies on language comprehension and production in healthy adults published in English. The quality of the included studies was assessed with the Cochrane risk-of-bias tool. Potential publication bias was assessed with funnel plots and Egger's test. We conducted overall as well as moderator meta-analyses. Effect sizes were estimated using Hedges' g and entered into a three-level random-effects model.
Results: Thirty-seven studies (797 participants) with 77 effect sizes were included. The three-level random-effects model revealed significant overall TMS effects on language performance in healthy participants (RT: g = 0.16, 95% CI: 0.04-0.29; ACC: g = 0.14, 95% CI: 0.04-0.24). Further moderator analyses indicated that (a) for language tasks, TMS induced significant neuromodulatory effects on semantic and phonological tasks, but not on syntactic tasks; (b) for cortical targets, TMS effects were not significant in left frontal, temporal, or parietal regions, but were marginally significant in the inferior frontal gyrus in a finer-scale analysis; (c) for stimulation parameters, stimulation sites extracted from previous studies, rTMS, and intensities calibrated to the individual resting motor threshold were more likely to induce robust TMS effects. As for stimulation frequency and timing, both high and low frequencies, and both online and offline stimulation, elicited significant effects; (d) for experimental designs, studies adopting sham TMS or no TMS as the control condition and a within-subject design obtained more significant effects.
Discussion: Overall, the results show that TMS may robustly modulate healthy adults' language performance and offers a way to scrutinize the brain-language relationship. However, due to the limited sample size and constraints of the current meta-analysis approach, analyses at a more comprehensive level were not conducted, and the results need to be confirmed by future studies.
Systematic review registration: https://www.crd.york.ac.uk/PROSPERO/display_record.php?RecordID=366481, identifier CRD42022366481.
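The meta-analysis above pools standardized mean differences as Hedges' g, i.e., Cohen's d scaled by a small-sample bias correction. A minimal sketch of that computation (the TMS-vs-sham reaction-time summary statistics below are hypothetical, not values from the review):

```python
import numpy as np

def hedges_g(mean_exp, mean_ctrl, sd_exp, sd_ctrl, n_exp, n_ctrl):
    """Hedges' g: Cohen's d scaled by the small-sample correction factor J."""
    df = n_exp + n_ctrl - 2
    # Pooled standard deviation across the two groups
    sd_pooled = np.sqrt(((n_exp - 1) * sd_exp**2 + (n_ctrl - 1) * sd_ctrl**2) / df)
    d = (mean_exp - mean_ctrl) / sd_pooled
    # Common approximation of the exact gamma-function correction
    j = 1.0 - 3.0 / (4.0 * df - 1.0)
    return j * d

# Hypothetical reaction-time summaries (ms): TMS vs. sham, n = 20 per group
g = hedges_g(620.0, 600.0, 80.0, 75.0, 20, 20)
print(round(g, 3))  # -> 0.253, a small effect comparable in size to the pooled RT effect reported above
```

In a full meta-analysis each study's g would then be weighted by its variance and combined in a random-effects model; the sketch covers only the per-study effect size.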
Affiliation(s)
- Xingfang Qu
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Zichao Wang
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Yao Cheng
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Qingwei Xue
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Zimu Li
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Lu Li
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Liping Feng
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
- Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Luyao Chen
- Max Planck Partner Group, School of International Chinese Language Education, Beijing Normal University, Beijing, China
5
Dole M, Vilain C, Haldin C, Baciu M, Cousin E, Lamalle L, Lœvenbruck H, Vilain A, Schwartz JL. Comparing the selectivity of vowel representations in cortical auditory vs. motor areas: A repetition-suppression study. Neuropsychologia 2022;176:108392. DOI: 10.1016/j.neuropsychologia.2022.108392
6
Tang DL, McDaniel A, Watkins KE. Disruption of speech motor adaptation with repetitive transcranial magnetic stimulation of the articulatory representation in primary motor cortex. Cortex 2021;145:115-130. PMID: 34717269; PMCID: PMC8650828; DOI: 10.1016/j.cortex.2021.09.008
Abstract
When auditory feedback perturbation is introduced in a predictable way over a number of utterances, speakers learn to compensate by adjusting their own productions, a process known as sensorimotor adaptation. Despite multiple lines of evidence indicating the role of primary motor cortex (M1) in motor learning and memory, whether M1 causally contributes to sensorimotor adaptation in the speech domain remains unclear. Here, we aimed to test whether temporary disruption of the articulatory representation in left M1 by repetitive transcranial magnetic stimulation (rTMS) impairs speech adaptation. To induce sensorimotor adaptation, the frequencies of first formants (F1) were shifted up and played back to participants when they produced “head”, “bed”, and “dead” repeatedly (the learning phase). A low-frequency rTMS train (0.6 Hz, subthreshold, 12 min) over either the tongue or the hand representation of M1 (between-subjects design) was applied before participants experienced altered auditory feedback in the learning phase. We found that the group who received rTMS over the hand representation showed the expected compensatory response for the upwards shift in F1 by significantly reducing F1 and increasing the second formant (F2) frequencies in their productions. In contrast, these expected compensatory changes in both F1 and F2 did not occur in the group that received rTMS over the tongue representation. Critically, rTMS (subthreshold) over the tongue representation did not affect vowel production, which was unchanged from baseline. These results provide direct evidence that the articulatory representation in left M1 causally contributes to sensorimotor learning in speech. Furthermore, these results also suggest that M1 is critical to the network supporting a more global adaptation that aims to move the altered speech production closer to a learnt pattern of speech production used to produce another vowel.
Affiliation(s)
- Ding-Lan Tang
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK.
- Alexander McDaniel
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
- Kate E Watkins
- Wellcome Centre for Integrative Neuroimaging, Department of Experimental Psychology, University of Oxford, UK
7
The effects of dual-task interference in predicting turn-ends in speech and music. Brain Res 2021;1768:147571. PMID: 34216579; DOI: 10.1016/j.brainres.2021.147571
Abstract
Determining when a partner's spoken or musical turn will end requires well-honed predictive abilities. Evidence suggests that our motor systems are activated during perception of both speech and music, and it has been argued that motor simulation is used to predict turn-ends across domains. Here we used a dual-task interference paradigm to investigate whether motor simulation of our partner's action underlies our ability to make accurate turn-end predictions in speech and in music. Furthermore, we explored how specific this simulation is to the action being predicted. We conducted two experiments, one investigating speech turn-ends, and one investigating music turn-ends. In each, 34 proficient pianists predicted turn-endings while (1) passively listening, (2) producing an effector-specific motor activity (mouth/hand movement), or (3) producing a task- and effector-specific motor activity (mouthing words/fingering a piano melody). In the speech experiment, any movement during speech perception disrupted predictions of spoken turn-ends, whether the movement was task-specific or not. In the music experiment, only task-specific movement (i.e., fingering a piano melody) disrupted predictions of musical turn-ends. These findings support the use of motor simulation to make turn-end predictions in both speech and music but suggest that the specificity of this simulation may differ between domains.
8
Asymmetry of Auditory-Motor Speech Processing is Determined by Language Experience. J Neurosci 2021;41:1059-1067. PMID: 33298537; PMCID: PMC7880293; DOI: 10.1523/jneurosci.1977-20.2020
Abstract
Speech processing relies on interactions between auditory and motor systems and is asymmetrically organized in the human brain. The left auditory system is specialized for processing of phonemes, whereas the right is specialized for processing of pitch changes in speech that affect prosody. In speakers of tonal languages, however, processing of pitch (i.e., tone) changes that alter word meaning is left-lateralized, indicating that linguistic function and language experience shape speech processing asymmetries. Here, we investigated the asymmetry of motor contributions to auditory speech processing in male and female speakers of tonal and non-tonal languages. We temporarily disrupted the right or left speech motor cortex using transcranial magnetic stimulation (TMS) and measured the impact of these disruptions on auditory discrimination (mismatch negativity; MMN) responses to phoneme and tone changes in sequences of syllables using electroencephalography (EEG). We found that the effect of motor disruptions on processing of tone changes differed between language groups: disruption of the right speech motor cortex suppressed responses to tone changes in non-tonal language speakers, whereas disruption of the left speech motor cortex suppressed responses to tone changes in tonal language speakers. In non-tonal language speakers, the effects of disruption of the left speech motor cortex on responses to tone changes were inconclusive. For phoneme changes, disruption of the left but not the right speech motor cortex suppressed responses in both language groups. We conclude that the contributions of the right and left speech motor cortex to auditory speech processing are determined by the functional roles of acoustic cues in the listener's native language.
SIGNIFICANCE STATEMENT: The principles underlying hemispheric asymmetries of auditory speech processing remain debated. The asymmetry of processing of speech sounds is affected by low-level acoustic cues, but also by their linguistic function. By combining transcranial magnetic stimulation (TMS) and electroencephalography (EEG), we investigated the asymmetry of motor contributions to auditory speech processing in tonal and non-tonal language speakers. We provide causal evidence that the functional role of the acoustic cues in the listener's native language affects the asymmetry of motor influences on auditory speech discrimination ability (indexed by mismatch negativity (MMN) responses). Lateralized top-down motor influences can affect the asymmetry of speech processing in the auditory system.
9
Michaelis K, Miyakoshi M, Norato G, Medvedev AV, Turkeltaub PE. Motor engagement relates to accurate perception of phonemes and audiovisual words, but not auditory words. Commun Biol 2021;4:108. PMID: 33495548; PMCID: PMC7835217; DOI: 10.1038/s42003-020-01634-5
Abstract
A longstanding debate has surrounded the role of the motor system in speech perception, but progress in this area has been limited by tasks that only examine isolated syllables and conflate decision-making with perception. Using an adaptive task that temporally isolates perception from decision-making, we examined an EEG signature of motor activity (sensorimotor μ/beta suppression) during the perception of auditory phonemes, auditory words, audiovisual words, and environmental sounds while holding difficulty constant at two levels (Easy/Hard). Results revealed left-lateralized sensorimotor μ/beta suppression that was related to perception of speech but not environmental sounds. Audiovisual word and phoneme stimuli showed enhanced left sensorimotor μ/beta suppression for correct relative to incorrect trials, while auditory word stimuli showed enhanced suppression for incorrect trials. Our results demonstrate that motor involvement in perception is left-lateralized, is specific to speech stimuli, and is not simply the result of domain-general processes. These results provide evidence for an interactive network for speech perception in which dorsal stream motor areas are dynamically engaged during the perception of speech depending on the characteristics of the speech signal. Crucially, this motor engagement has different effects on the perceptual outcome depending on the lexicality and modality of the speech stimulus.
Michaelis et al. used extra-cranial EEG during a forced-choice identification task to investigate the role of the motor system in speech perception. Their findings suggest that left-hemisphere dorsal stream motor areas are dynamically engaged during speech perception based on the properties of the stimulus.
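Sensorimotor μ/beta suppression of the kind indexed in this study is conventionally quantified as a decrease in band power relative to a baseline period. A sketch of that computation on synthetic data using Welch spectral estimates (an illustration of the measure, not the authors' actual pipeline):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin, fmax):
    """Mean Welch PSD within [fmin, fmax] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)  # 1-s windows -> 1 Hz resolution
    sel = (freqs >= fmin) & (freqs <= fmax)
    return psd[sel].mean()

def suppression_db(task, baseline, fs, fmin=8.0, fmax=13.0):
    """Band-power change in dB; negative values indicate mu suppression."""
    return 10.0 * np.log10(band_power(task, fs, fmin, fmax) /
                           band_power(baseline, fs, fmin, fmax))

# Synthetic demo: a 10 Hz sensorimotor rhythm whose amplitude halves during the "task"
rng = np.random.default_rng(0)
fs = 250
t = np.arange(5 * fs) / fs
baseline = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
task = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
print(suppression_db(task, baseline, fs))  # close to -6 dB: halved amplitude = quartered power
```

In practice the suppression index would be computed per trial and electrode, then compared across conditions (e.g., correct vs. incorrect trials), as in the study.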
Affiliation(s)
- Kelly Michaelis: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Human Cortical Physiology and Stroke Neurorehabilitation Section, National Institute of Neurological Disorders and Stroke (NINDS), National Institutes of Health, Bethesda, MD, USA
- Makoto Miyakoshi: Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, San Diego, CA, USA
- Gina Norato: Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
- Andrei V Medvedev: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
- Peter E Turkeltaub: Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Research Division, Medstar National Rehabilitation Hospital, Washington, DC, USA
|
10
|
Speech-Brain Frequency Entrainment of Dyslexia with and without Phonological Deficits. Brain Sci 2020; 10:920. [PMID: 33260681 PMCID: PMC7760068 DOI: 10.3390/brainsci10120920]
Abstract
Developmental dyslexia is a cognitive disorder characterized by difficulties in linguistic processing. Our purpose is to distinguish subtypes of developmental dyslexia by the level of speech–EEG frequency entrainment (δ: 1–4 Hz; β: 12.5–22.5 Hz; γ1: 25–35 Hz; γ2: 35–80 Hz) in word/pseudoword auditory discrimination. Depending on the type of disabilities, dyslexics can be divided into two subtypes—with less pronounced phonological deficits (NoPhoDys—visual dyslexia) and with more pronounced ones (PhoDys—phonological dyslexia). For correctly recognized stimuli, δ-entrainment is significantly worse in dyslexic children than in controls at the level of speech prosody and syllabic analysis. Controls and NoPhoDys show stronger δ-entrainment in the left-hemispheric auditory cortex (AC), anterior temporal lobe (ATL), frontal, and motor cortices than PhoDys. Compared to normolexics, dyslexic subgroups show a deficit of δ-entrainment in the left ATL, inferior frontal gyrus (IFG), and the right AC. PhoDys show higher δ-entrainment in the posterior part of regions adjacent to the superior temporal sulcus (STS) than NoPhoDys. Insufficient low-frequency β changes over the IFG and the inferior parietal lobe in PhoDys compared to NoPhoDys correspond to their worse phonological short-term memory. Entrainment at 30 Hz (phonemic frequencies) is left-dominant in normolexics but characterizes the right AC and regions adjacent to the STS in dyslexics. The more pronounced 40 Hz-entrainment in PhoDys than in the other groups suggests a hearing "reassembly" and a poor phonological working memory. A shift toward higher-frequency γ-entrainment in the AC of NoPhoDys can lead to verbal memory deficits. Different patterns of cortical reorganization based on the left or right hemisphere lead to differential dyslexic profiles.
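Speech-brain entrainment analyses of this kind quantify how well low-frequency EEG tracks the speech signal. As a hedged, dependency-free illustration (synthetic data, and plain correlation standing in for the coherence/phase-locking measures such studies actually use):

```python
import math

def pearson(x, y):
    """Pearson correlation, implemented directly to avoid dependencies."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

fs = 100.0
t = [i / fs for i in range(400)]  # 4 s at 100 Hz

# A 2 Hz rhythm standing in for the delta-band speech envelope (syllable rate)
envelope = [1.0 + math.sin(2 * math.pi * 2.0 * ti) for ti in t]

# Simulated EEG that tracks the envelope, vs. activity at an unrelated rate
eeg_entrained = [0.7 * e + 0.1 * math.sin(2 * math.pi * 7.3 * ti)
                 for e, ti in zip(envelope, t)]
eeg_unrelated = [math.sin(2 * math.pi * 3.1 * ti) for ti in t]

r_entrained = pearson(envelope, eeg_entrained)
r_unrelated = pearson(envelope, eeg_unrelated)
print(r_entrained > 0.9, abs(r_unrelated) < 0.2)  # True True
```

A signal that follows the delta-rate envelope yields a high tracking index; one oscillating at an unrelated frequency does not. Group comparisons (e.g., PhoDys vs. NoPhoDys) are then made on such per-band indices.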
|
11
|
Bergmann TO, Hartwigsen G. Inferring Causality from Noninvasive Brain Stimulation in Cognitive Neuroscience. J Cogn Neurosci 2020; 33:195-225. [PMID: 32530381 DOI: 10.1162/jocn_a_01591]
Abstract
Noninvasive brain stimulation (NIBS) techniques, such as transcranial magnetic stimulation or transcranial direct and alternating current stimulation, are advocated as measures to enable causal inference in cognitive neuroscience experiments. Transcending the limitations of purely correlative neuroimaging measures and experimental sensory stimulation, they make it possible to manipulate brain activity experimentally and study its consequences for perception, cognition, and eventually, behavior. Although this is true in principle, particular caution is advised when interpreting brain stimulation experiments in a causal manner. Research hypotheses are often oversimplified, disregarding the underlying (implicitly assumed) complex chain of causation, namely, that the stimulation technique has to generate an electric field in the brain tissue, which then evokes or modulates neuronal activity both locally in the target region and in connected remote sites of the network, which in consequence affects the cognitive function of interest and eventually results in a change of the behavioral measure. Importantly, every link in this causal chain of effects can be confounded by several factors that have to be experimentally eliminated or controlled to attribute the observed results to their assumed cause. This is complicated by the fact that many of the mediating and confounding variables are not directly observable and dose-response relationships are often nonlinear. We walk the reader through the chain of causation for a generic cognitive neuroscience NIBS study, discuss possible confounds, and advise appropriate control conditions. If crucial assumptions are explicitly tested (where possible) and confounds are experimentally well controlled, NIBS can indeed reveal cause-effect relationships in cognitive neuroscience studies.
Affiliation(s)
- Gesa Hartwigsen: Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
12
|
Krishnan S, Lima CF, Evans S, Chen S, Guldner S, Yeff H, Manly T, Scott SK. Beatboxers and Guitarists Engage Sensorimotor Regions Selectively When Listening to the Instruments They can Play. Cereb Cortex 2019; 28:4063-4079. [PMID: 30169831 PMCID: PMC6188551 DOI: 10.1093/cercor/bhy208]
Abstract
Studies of classical musicians have demonstrated that expertise modulates neural responses during auditory perception. However, it remains unclear whether such expertise-dependent plasticity is modulated by the instrument that a musician plays. To examine whether the recruitment of sensorimotor regions during music perception is modulated by instrument-specific experience, we studied nonclassical musicians: beatboxers, who predominantly use their vocal apparatus to produce sound, and guitarists, who use their hands. We contrast fMRI activity in 20 beatboxers, 20 guitarists, and 20 nonmusicians as they listen to novel beatboxing and guitar pieces. All musicians show enhanced activity in sensorimotor regions (IFG, IPC, and SMA), but only when listening to the musical instrument they can play. Using independent component analysis, we find expertise-selective enhancement in sensorimotor networks, which are distinct from changes in attentional networks. These findings suggest that long-term sensorimotor experience facilitates access to the posterodorsal "how" pathway during auditory processing.
Affiliation(s)
- Saloni Krishnan: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Department of Experimental Psychology, University of Oxford, Anna Watts Building, Radcliffe Observatory Quarter, Oxford, UK
- César F Lima: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Instituto Universitário de Lisboa (ISCTE-IUL), Avenida das Forças Armadas, Lisboa, Portugal
- Samuel Evans: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Department of Psychology, University of Westminster, 115 New Cavendish Street, London, UK
- Sinead Chen: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK
- Stella Guldner: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK; Graduate School of Economic and Social Sciences (GESS), University of Mannheim, Mannheim, Germany
- Harry Yeff: Get Involved Ltd, 3 Loughborough Street, London, UK
- Tom Manly: MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge, UK
- Sophie K Scott: Institute of Cognitive Neuroscience, University College London, 17 Queen Square, London, UK
|
13
|
Pflug A, Gompf F, Muthuraman M, Groppa S, Kell CA. Differential contributions of the two human cerebral hemispheres to action timing. eLife 2019; 8:e48404. [PMID: 31697640 PMCID: PMC6837842 DOI: 10.7554/eLife.48404]
Abstract
Rhythmic actions benefit from synchronization with external events. Auditory-paced finger tapping studies indicate that the two cerebral hemispheres preferentially control different rhythms. It is unclear whether left-lateralized processing of faster rhythms and right-lateralized processing of slower rhythms is based on hemispheric timing differences that arise in the motor or sensory system, or whether asymmetry results from lateralized sensorimotor interactions. We measured fMRI and MEG during symmetric finger tapping, in which fast tapping was defined as auditory-motor synchronization at 2.5 Hz. Slow tapping corresponded to tapping to every fourth auditory beat (0.625 Hz). We demonstrate that the left auditory cortex preferentially represents the relatively fast rhythm in an amplitude modulation of low beta oscillations, while the right auditory cortex additionally represents the internally generated slower rhythm. We show that coupling of auditory and motor beta oscillations supports building a metric structure. Our findings reveal a strong contribution of sensory cortices to hemispheric specialization in action control.
Affiliation(s)
- Anja Pflug: Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
- Florian Gompf: Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
- Muthuraman Muthuraman: Movement Disorders and Neurostimulation, Biomedical Statistics and Multimodal Signal Processing Unit, Department of Neurology, Johannes Gutenberg University, Mainz, Germany
- Sergiu Groppa: Movement Disorders and Neurostimulation, Biomedical Statistics and Multimodal Signal Processing Unit, Department of Neurology, Johannes Gutenberg University, Mainz, Germany
- Christian Alexander Kell: Cognitive Neuroscience Group, Brain Imaging Center and Department of Neurology, Goethe University, Frankfurt, Germany
|
14
|
Merten N, Kramme J, Breteler MMB, Herholz SC. Previous Musical Experience and Cortical Thickness Relate to the Beneficial Effect of Motor Synchronization on Auditory Function. Front Neurosci 2019; 13:1042. [PMID: 31611771 PMCID: PMC6777375 DOI: 10.3389/fnins.2019.01042]
Abstract
Auditory processing can be enhanced by motor system activity. During auditory-motor synchronization, motor activity guides auditory attention and thus facilitates auditory processing through active sensing. Previous research on enhanced auditory processing through motor synchronization has been limited to easy tasks with simple stimulus material. Further, the mechanisms and brain regions underlying this synchronization are unclear. We investigated the effect of motor synchronization on auditory processing with naturalistic, musical auditory material in a discrimination task. We further assessed how previous musical training and cortical thickness of specific brain regions relate to different aspects of auditory-motor synchronization. We conducted an auditory-motor experiment in 139 adults. The task involved melody discrimination and beat tapping synchronization. Additionally, 68 participants underwent structural MRI. We found that individuals with better auditory-motor synchronization accuracy showed improved melody discrimination, and that melody discrimination was better in trials with higher tapping accuracy. However, melody discrimination was worse in the tapping than in the listening only condition. Longer previous musical training and thicker Heschl's gyri were associated with better melody discrimination and better tapping synchrony. Post hoc analyses furthermore pointed to a possible moderating role of frontal regions. Our results suggest that motor synchronization can enhance auditory discrimination abilities through active sensing, but that this beneficial effect can be counteracted by dual-task interference when the two tasks are too challenging. Moreover, prior experience and structural brain differences influence the extent to which an individual can benefit from motor synchronization in complex listening. This could inform future research directed at development of personalized training programs for hearing ability.
Affiliation(s)
- Natascha Merten: Population Health Sciences, German Center for Neurodegenerative Diseases, Bonn, Germany
- Johanna Kramme: Population Health Sciences, German Center for Neurodegenerative Diseases, Bonn, Germany
- Monique M B Breteler: Population Health Sciences, German Center for Neurodegenerative Diseases, Bonn, Germany; Institute for Medical Biometry, Informatics and Epidemiology (IMBIE), Faculty of Medicine, University of Bonn, Bonn, Germany
- Sibylle C Herholz: Population Health Sciences, German Center for Neurodegenerative Diseases, Bonn, Germany
|
15
|
Schmitz J, Bartoli E, Maffongelli L, Fadiga L, Sebastian-Galles N, D’Ausilio A. Motor cortex compensates for lack of sensory and motor experience during auditory speech perception. Neuropsychologia 2019; 128:290-296. [DOI: 10.1016/j.neuropsychologia.2018.01.006]
|
16
|
Liebenthal E, Möttönen R. An interactive model of auditory-motor speech perception. Brain Lang 2018; 187:33-40. [PMID: 29268943 PMCID: PMC6005717 DOI: 10.1016/j.bandl.2017.12.004]
Abstract
Mounting evidence indicates a role in perceptual decoding of speech for the dorsal auditory stream, which connects temporal auditory and frontoparietal articulatory areas. The activation time course in auditory, somatosensory and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.
Affiliation(s)
- Einat Liebenthal: Department of Psychiatry, Brigham & Women's Hospital, Harvard Medical School, Boston, USA
- Riikka Möttönen: Department of Experimental Psychology, University of Oxford, Oxford, UK; School of Psychology, University of Nottingham, Nottingham, UK
|
17
|
Saltuklaroglu T, Bowers A, Harkrider AW, Casenhiser D, Reilly KJ, Jenson DE, Thornton D. EEG mu rhythms: Rich sources of sensorimotor information in speech processing. Brain Lang 2018; 187:41-61. [PMID: 30509381 DOI: 10.1016/j.bandl.2018.09.005]
Affiliation(s)
- Tim Saltuklaroglu: Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Andrew Bowers: University of Arkansas, Epley Center for Health Professions, 606 N. Razorback Road, Fayetteville, AR 72701, USA
- Ashley W Harkrider: Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Devin Casenhiser: Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Kevin J Reilly: Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- David E Jenson: Department of Speech and Hearing Sciences, Elson S. Floyd College of Medicine, Spokane, WA 99210-1495, USA
- David Thornton: Department of Hearing, Speech, and Language Sciences, Gallaudet University, 800 Florida Avenue NE, Washington, DC 20002, USA
|
18
|
Daikoku T, Takahashi Y, Tarumoto N, Yasuda H. Motor Reproduction of Time Interval Depends on Internal Temporal Cues in the Brain: Sensorimotor Imagery in Rhythm. Front Psychol 2018; 9:1873. [PMID: 30333779 PMCID: PMC6176082 DOI: 10.3389/fpsyg.2018.01873]
Abstract
How the human brain perceives time intervals is a fascinating topic that has been explored in many fields of study. This study examined how time intervals are replicated in three conditions: with no internalized cue (PT), with an internalized cue without a beat (AS), and with an internalized cue with a beat (RS). In PT, participants accurately reproduced time intervals up to approximately 3 s. Over 3 s, however, the reproduction errors became increasingly negative. In RS, longer presentations of over 5.6 s and 13 beats induced accurate reproduction of time intervals. This suggests that longer exposure to beat presentation leads to stable internalization and efficiency in the sensorimotor processing of perception and reproduction. In AS, up to approximately 3 s, the results were similar to those of RS, whereas over 3 s, the results shifted and became similar to those of PT. The time intervals between the first two stimuli indicate that the strategies of time-interval reproduction in AS may shift from those of RS to those of PT. The neural basis underlying the reproduction of time intervals without a beat may thus depend on the length of the time interval between adjacent stimuli in sequences.
Affiliation(s)
- Tatsuya Daikoku: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Yuji Takahashi: Faculty of Health Care and Medical Sports, Teikyo Heisei University, Chiba, Japan
- Hideki Yasuda: Faculty of Health Care and Medical Sports, Teikyo Heisei University, Chiba, Japan
|
19
|
Panouillères MTN, Boyles R, Chesters J, Watkins KE, Möttönen R. Facilitation of motor excitability during listening to spoken sentences is not modulated by noise or semantic coherence. Cortex 2018; 103:44-54. [PMID: 29554541 PMCID: PMC6002609 DOI: 10.1016/j.cortex.2018.02.007]
Abstract
Comprehending speech can be particularly challenging in a noisy environment and in the absence of semantic context. It has been proposed that the articulatory motor system would be recruited especially in difficult listening conditions. However, it remains unknown how signal-to-noise ratio (SNR) and semantic context affect the recruitment of the articulatory motor system when listening to continuous speech. The aim of the present study was to address the hypothesis that involvement of the articulatory motor cortex increases when the intelligibility and clarity of the spoken sentences decreases, because of noise and the lack of semantic context. We applied Transcranial Magnetic Stimulation (TMS) to the lip and hand representations in the primary motor cortex and measured motor evoked potentials from the lip and hand muscles, respectively, to evaluate motor excitability when young adults listened to sentences. In Experiment 1, we found that the excitability of the lip motor cortex was facilitated during listening to both semantically anomalous and coherent sentences in noise relative to non-speech baselines, but neither SNR nor semantic context modulated the facilitation. In Experiment 2, we replicated these findings and found no difference in the excitability of the lip motor cortex between sentences in noise and clear sentences without noise. Thus, our results show that the articulatory motor cortex is involved in speech processing even in optimal and ecologically valid listening conditions and that its involvement is not modulated by the intelligibility and clarity of speech.
Affiliation(s)
- Rowan Boyles: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Jennifer Chesters: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Kate E Watkins: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom
- Riikka Möttönen: Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom; School of Psychology, University of Nottingham, Nottingham, United Kingdom
|
20
|
Liu Y, Fan H, Li J, Jones JA, Liu P, Zhang B, Liu H. Auditory-Motor Control of Vocal Production during Divided Attention: Behavioral and ERP Correlates. Front Neurosci 2018. [PMID: 29535605 PMCID: PMC5835062 DOI: 10.3389/fnins.2018.00113]
Abstract
When people hear unexpected perturbations in auditory feedback, they produce rapid compensatory adjustments of their vocal behavior. Recent evidence has shown enhanced vocal compensations and cortical event-related potentials (ERPs) in response to attended pitch feedback perturbations, suggesting that this reflex-like behavior is influenced by selective attention. Less is known, however, about auditory-motor integration for voice control during divided attention. The present cross-modal study investigated the behavioral and ERP correlates of auditory feedback control of vocal pitch production during divided attention. During the production of sustained vowels, 32 young adults were instructed to simultaneously attend to both pitch feedback perturbations they heard and flashing red lights they saw. The presentation rate of the visual stimuli was varied to produce a low, intermediate, and high attentional load. The behavioral results showed that the low-load condition elicited significantly smaller vocal compensations for pitch perturbations than the intermediate-load and high-load conditions. As well, the cortical processing of vocal pitch feedback was also modulated as a function of divided attention. When compared to the low-load and intermediate-load conditions, the high-load condition elicited significantly larger N1 responses and smaller P2 responses to pitch perturbations. These findings provide the first neurobehavioral evidence that divided attention can modulate auditory feedback control of vocal pitch production.
Affiliation(s)
- Ying Liu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Hao Fan: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jingting Li: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jeffery A Jones: Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, ON, Canada
- Peng Liu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Baofeng Zhang: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Hanjun Liu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China
|
21
|
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. Brain Lang 2017; 164:77-105. [PMID: 27821280 DOI: 10.1016/j.bandl.2016.10.004]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.
Affiliation(s)
- Jeremy I Skipper: Experimental Psychology, University College London, United Kingdom
- Joseph T Devlin: Experimental Psychology, University College London, United Kingdom
- Daniel R Lametti: Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
|
22
|
Rosenblum LD, Dorsi J, Dias JW. The Impact and Status of Carol Fowler's Supramodal Theory of Multisensory Speech Perception. Ecol Psychol 2016. [DOI: 10.1080/10407413.2016.1230373]
|
23
|
Adaptive Plasticity in the Healthy Language Network: Implications for Language Recovery after Stroke. Neural Plast 2016; 2016:9674790. [PMID: 27830094 PMCID: PMC5088318 DOI: 10.1155/2016/9674790]
Abstract
Across the last three decades, the application of noninvasive brain stimulation (NIBS) has substantially increased the current knowledge of the brain's potential to undergo rapid short-term reorganization on the systems level. A large number of studies applied transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS) in the healthy brain to probe the functional relevance and interaction of specific areas for different cognitive processes. NIBS is also increasingly being used to induce adaptive plasticity in motor and cognitive networks and shape cognitive functions. Recently, NIBS has been combined with electrophysiological techniques to modulate neural oscillations of specific cortical networks. In this review, we will discuss recent advances in the use of NIBS to modulate neural activity and effective connectivity in the healthy language network, with a special focus on the combination of NIBS and neuroimaging or electrophysiological approaches. Moreover, we outline how these results can be transferred to the lesioned brain to unravel the dynamics of reorganization processes in poststroke aphasia. We conclude with a critical discussion on the potential of NIBS to facilitate language recovery after stroke and propose a phase-specific model for the application of NIBS in language rehabilitation.
|
24
|
Schmitz J, Díaz B, Sebastian-Galles N. Attention modulates somatosensory influences in passive speech listening. J Cogn Psychol 2016. [DOI: 10.1080/20445911.2016.1206107]
|
25
|
Schomers MR, Pulvermüller F. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review. Front Hum Neurosci 2016; 10:435. [PMID: 27708566 PMCID: PMC5030253 DOI: 10.3389/fnhum.2016.00435]
Abstract
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex in speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
Affiliation(s)
- Malte R Schomers, Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
|
26
|
Power AJ, Colling LJ, Mead N, Barnes L, Goswami U. Neural encoding of the speech envelope by children with developmental dyslexia. Brain Lang 2016; 160:1-10. [PMID: 27433986] [PMCID: PMC5108463] [DOI: 10.1016/j.bandl.2016.06.006]
Abstract
Developmental dyslexia is consistently associated with difficulties in processing phonology (linguistic sound structure) across languages. One view is that dyslexia is characterised by a cognitive impairment in the "phonological representation" of word forms, which arises long before the child presents with a reading problem. Here we investigate a possible neural basis for developmental phonological impairments. We assess the neural quality of speech encoding in children with dyslexia by measuring the accuracy of low-frequency speech envelope encoding using EEG. We tested children with dyslexia and chronological age-matched (CA) and reading-level matched (RL) younger children. Participants listened to semantically-unpredictable sentences in a word report task. The sentences were noise-vocoded to increase reliance on envelope cues. Envelope reconstruction for envelopes between 0 and 10 Hz showed that the children with dyslexia had significantly poorer speech encoding in the 0-2 Hz band compared to both CA and RL controls. These data suggest that impaired neural encoding of low frequency speech envelopes, related to speech prosody, may underpin the phonological deficit that causes dyslexia across languages.
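The low-frequency envelope measure this study builds on can be illustrated with a minimal NumPy sketch. This is not the authors' pipeline: the rectify-and-smooth filter, function name, and 4 Hz test signal below are illustrative stand-ins for their Hilbert/band-pass envelope extraction and reconstruction analysis.

```python
import numpy as np

def lowfreq_envelope(signal, fs, cutoff_hz):
    """Approximate the slow amplitude envelope of a waveform:
    rectify, then smooth with a moving-average low-pass filter
    whose window roughly matches the cutoff period."""
    rectified = np.abs(signal)
    win = int(fs / cutoff_hz)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

# Toy stimulus: a 200 Hz carrier amplitude-modulated at 4 Hz,
# roughly the syllable rate emphasized in speech-envelope work.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
am = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))   # slow modulator (the "envelope")
carrier = np.sin(2 * np.pi * 200 * t)
env = lowfreq_envelope(am * carrier, fs, cutoff_hz=8.0)

# Envelope-encoding accuracy is then scored as a correlation between
# the reconstructed and the true envelope.
r = np.corrcoef(env, am)[0, 1]
```

In the EEG setting the reconstruction runs in the other direction (envelope estimated from neural responses, typically via regularized regression), but the accuracy metric is the same correlation idea.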
Affiliation(s)
- Alan J Power, Lincoln J Colling, Natasha Mead, Lisa Barnes, Usha Goswami: Centre for Neuroscience in Education, University of Cambridge, Downing St, Cambridge CB2 3EB, UK
|
27
|
Hall AJ, Butler BE, Lomber SG. The cat's meow: A high-field fMRI assessment of cortical activity in response to vocalizations and complex auditory stimuli. Neuroimage 2016; 127:44-57. [DOI: 10.1016/j.neuroimage.2015.11.056]
|
28
|
Alho J, Green BM, May PJC, Sams M, Tiitinen H, Rauschecker JP, Jääskeläinen IP. Early-latency categorical speech sound representations in the left inferior frontal gyrus. Neuroimage 2016; 129:214-223. [PMID: 26774614] [DOI: 10.1016/j.neuroimage.2016.01.016]
Abstract
Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.
Affiliation(s)
- Jussi Alho, Mikko Sams, Hannu Tiitinen: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland
- Brannon M Green: Laboratory of Integrated Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA
- Patrick J C May: Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, D-39118 Magdeburg, Germany
- Josef P Rauschecker: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland; Laboratory of Integrated Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA; Institute for Advanced Study, TUM, Munich-Garching, 80333 Munich, Germany
- Iiro P Jääskeläinen: Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland; MEG Core, Aalto NeuroImaging, Aalto University, Espoo, Finland; AMI Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
|
29
|
No evidence of somatotopic place of articulation feature mapping in motor cortex during passive speech perception. Psychon Bull Rev 2015; 23:1231-40. [DOI: 10.3758/s13423-015-0988-z]
|
30
|
Smalle EHM, Rogers J, Möttönen R. Dissociating Contributions of the Motor Cortex to Speech Perception and Response Bias by Using Transcranial Magnetic Stimulation. Cereb Cortex 2015; 25:3690-8. [PMID: 25274987] [PMCID: PMC4585509] [DOI: 10.1093/cercor/bhu218]
Abstract
Recent studies using repetitive transcranial magnetic stimulation (TMS) have demonstrated that disruptions of the articulatory motor cortex impair performance in demanding speech perception tasks. These findings have been interpreted as support for the idea that the motor cortex is critically involved in speech perception. However, the validity of this interpretation has been called into question, because it is unknown whether the TMS-induced disruptions in the motor cortex affect speech perception or rather response bias. In the present TMS study, we addressed this question by using signal detection theory to calculate sensitivity (i.e., d') and response bias (i.e., criterion c). We used repetitive TMS to temporarily disrupt the lip or hand representation in the left motor cortex. Participants discriminated pairs of sounds from a "ba"-"da" continuum before TMS, immediately after TMS (i.e., during the period of motor disruption), and after a 30-min break. We found that the sensitivity for between-category pairs was reduced during the disruption of the lip representation. In contrast, disruption of the hand representation temporarily reduced response bias. This double dissociation indicates that the hand motor cortex contributes to response bias during demanding discrimination tasks, whereas the articulatory motor cortex contributes to perception of speech sounds.
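The signal-detection quantities reported here, sensitivity d' and criterion c, follow the standard formulas d' = z(H) - z(F) and c = -(z(H) + z(F))/2. The sketch below is a generic illustration of those formulas, not the authors' analysis code; the log-linear correction and the trial counts are illustrative assumptions.

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal-detection indices from raw counts.
    Sensitivity d' = z(H) - z(F); response bias c = -(z(H) + z(F)) / 2.
    A log-linear correction keeps the rates away from 0 and 1,
    where the z-transform would be undefined."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    return z(h) - z(f), -(z(h) + z(f)) / 2

# Symmetric performance: moderately sensitive, unbiased (c = 0)
d, c = dprime_criterion(hits=40, misses=10,
                        false_alarms=10, correct_rejections=40)
```

Separating d' from c is what lets the study attribute the lip-area TMS effect to perception and the hand-area effect to response bias.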
Affiliation(s)
- Eleonore H. M. Smalle: Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK; Psychological Sciences Research Institute, Institute of Neuroscience, Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium
- Jack Rogers, Riikka Möttönen: Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
|
31
|
Liu Y, Hu H, Jones JA, Guo Z, Li W, Chen X, Liu P, Liu H. Selective and divided attention modulates auditory-vocal integration in the processing of pitch feedback errors. Eur J Neurosci 2015; 42:1895-904. [PMID: 25969928] [DOI: 10.1111/ejn.12949]
Affiliation(s)
- Ying Liu, Weifeng Li, Xi Chen, Peng Liu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China
- Huijing Hu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China; Guangdong Provincial Work Injury Rehabilitation Center, Guangzhou, China
- Jeffery A. Jones: Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, ON, Canada
- Zhiqiang Guo: Department of Biomedical Engineering, School of Engineering, Sun Yat-sen University, Guangzhou, China
- Hanjun Liu: Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou 510080, China; Department of Biomedical Engineering, School of Engineering, Sun Yat-sen University, Guangzhou, China
|
32
|
Lévêque Y, Schön D. Modulation of the motor cortex during singing-voice perception. Neuropsychologia 2015; 70:58-63. [DOI: 10.1016/j.neuropsychologia.2015.02.012]
|
33
|
Abstract
All spoken languages express words by sound patterns, and certain patterns (e.g., blog) are systematically preferred to others (e.g., lbog). What principles account for such preferences: does the language system encode abstract rules banning syllables like lbog, or does their dislike reflect the increased motor demands associated with speech production? More generally, we ask whether linguistic knowledge is fully embodied or whether some linguistic principles could potentially be abstract. To address this question, here we gauge the sensitivity of English speakers to the putative universal syllable hierarchy (e.g., blif ≻ bnif ≻ bdif ≻ lbif) while undergoing transcranial magnetic stimulation (TMS) over the cortical motor representation of the left orbicularis oris muscle. If syllable preferences reflect motor simulation, then worse-formed syllables (e.g., lbif) should (i) elicit more errors; (ii) engage more strongly motor brain areas; and (iii) elicit stronger effects of TMS on these motor regions. In line with the motor account, we found that repetitive TMS pulses impaired participants' global sensitivity to the number of syllables, and functional MRI confirmed that the cortical stimulation site was sensitive to the syllable hierarchy. Contrary to the motor account, however, ill-formed syllables were least likely to engage the lip sensorimotor area and they were least impaired by TMS. Results suggest that speech perception automatically triggers motor action, but this effect is not causally linked to the computation of linguistic structure. We conclude that the language and motor systems are intimately linked, yet distinct. Language is designed to optimize motor action, but its knowledge includes principles that are disembodied and potentially abstract.
|
34
|
Attention modulates cortical processing of pitch feedback errors in voice control. Sci Rep 2015; 5:7812. [PMID: 25589447] [PMCID: PMC4295089] [DOI: 10.1038/srep07812]
Abstract
Considerable evidence has shown that unexpected alterations in auditory feedback elicit fast compensatory adjustments in vocal production. Although generally thought to be involuntary in nature, whether these adjustments can be influenced by attention remains unknown. The present event-related potential (ERP) study aimed to examine whether neurobehavioral processing of auditory-vocal integration can be affected by attention. While sustaining a vowel phonation and hearing pitch-shifted feedback, participants were required to either ignore the pitch perturbations, or attend to them with low (counting the number of perturbations) or high attentional load (counting the type of perturbations). Behavioral results revealed no systematic change of vocal response to pitch perturbations irrespective of whether they were attended or not. At the level of cortex, there was an enhancement of P2 response to attended pitch perturbations in the low-load condition as compared to when they were ignored. In the high-load condition, however, P2 response did not differ from that in the ignored condition. These findings provide the first neurophysiological evidence that auditory-motor integration in voice control can be modulated as a function of attention at the level of cortex. Furthermore, this modulatory effect does not lead to a general enhancement but is subject to attentional load.
|
35
|
Schomers MR, Kirilina E, Weigand A, Bajbouj M, Pulvermüller F. Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence. Cereb Cortex 2014; 25:3894-902. [PMID: 25452575] [PMCID: PMC4585521] [DOI: 10.1093/cercor/bhu274]
Abstract
Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggest sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., “pool” or “tool”). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruous ones (i.e., lip area TMS delayed “tool” relative to “pool” responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruous spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition.
Affiliation(s)
- Malte R Schomers, Friedemann Pulvermüller: Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
- Evgeniya Kirilina: Dahlem Institute for Neuroimaging of Emotion, Freie Universität Berlin, 14195 Berlin, Germany
- Anne Weigand: Dahlem Institute for Neuroimaging of Emotion, Freie Universität Berlin, 14195 Berlin, Germany; Department of Psychiatry, Charité Universitätsmedizin Berlin, 14050 Berlin, Germany; Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA 02215, USA
- Malek Bajbouj: Dahlem Institute for Neuroimaging of Emotion, Freie Universität Berlin, 14195 Berlin, Germany; Department of Psychiatry, Charité Universitätsmedizin Berlin, 14050 Berlin, Germany
|
36
|
Morillon B, Schroeder CE, Wyart V. Motor contributions to the temporal precision of auditory attention. Nat Commun 2014; 5:5255. [PMID: 25314898] [PMCID: PMC4199392] [DOI: 10.1038/ncomms6255]
Abstract
In temporal—or dynamic—attending theory, it is proposed that motor activity helps to synchronize temporal fluctuations of attention with the timing of events in a task-relevant stream, thus facilitating sensory selection. Here we develop a mechanistic behavioural account for this theory by asking human participants to track a slow reference beat, by noiseless finger pressing, while extracting auditory target tones delivered on-beat and interleaved with distractors. We find that overt rhythmic motor activity improves the segmentation of auditory information by enhancing sensitivity to target tones while actively suppressing distractor tones. This effect is triggered by cyclic fluctuations in sensory gain locked to individual motor acts, scales parametrically with the temporal predictability of sensory events and depends on the temporal alignment between motor and attention fluctuations. Together, these findings reveal how top-down influences associated with a rhythmic motor routine sharpen sensory representations, enacting auditory ‘active sensing’. Motor activities, such as rhythmic movements, are implicated in regulating attention. Here, the authors find that rhythmic movements sharpen the temporal selection of auditory stimuli by facilitating the perception of relevant stimuli, while actively suppressing the interference from irrelevant stimuli.
Affiliation(s)
- Benjamin Morillon: Department of Psychiatry, Columbia University Medical Center, New York, New York 10032, USA
- Charles E Schroeder: Department of Psychiatry, Columbia University Medical Center, New York, New York 10032, USA; Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute, Orangeburg, New York 10962, USA
- Valentin Wyart: Département d'Etudes Cognitives, Laboratoire de Neurosciences Cognitives, Inserm unit 960, Ecole Normale Supérieure, Paris 75005, France
|
37
|
Guellaï B, Streri A, Yeung HH. The development of sensorimotor influences in the audiovisual speech domain: some critical questions. Front Psychol 2014; 5:812. [PMID: 25147528] [PMCID: PMC4123602] [DOI: 10.3389/fpsyg.2014.00812]
Abstract
Speech researchers have long been interested in how auditory and visual speech signals are integrated, and the recent work has revived interest in the role of speech production with respect to this process. Here, we discuss these issues from a developmental perspective. Because speech perception abilities typically outstrip speech production abilities in infancy and childhood, it is unclear how speech-like movements could influence audiovisual speech perception in development. While work on this question is still in its preliminary stages, there is nevertheless increasing evidence that sensorimotor processes (defined here as any motor or proprioceptive process related to orofacial movements) affect developmental audiovisual speech processing. We suggest three areas on which to focus in future research: (i) the relation between audiovisual speech perception and sensorimotor processes at birth, (ii) the pathways through which sensorimotor processes interact with audiovisual speech processing in infancy, and (iii) developmental change in sensorimotor pathways as speech production emerges in childhood.
Affiliation(s)
- Bahia Guellaï: Laboratoire Ethologie, Cognition, Développement, Université Paris Ouest Nanterre La Défense, Nanterre, France
- Arlette Streri: CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France
- H. Henny Yeung: CNRS, Laboratoire Psychologie de la Perception, UMR 8242, Paris, France; Université Paris Descartes, Paris Sorbonne Cité, Paris, France
|
38
|
Möttönen R, Rogers J, Watkins KE. Stimulating the lip motor cortex with transcranial magnetic stimulation. J Vis Exp 2014. [PMID: 24962266] [PMCID: PMC4189624] [DOI: 10.3791/51665]
Abstract
Transcranial magnetic stimulation (TMS) has proven to be a useful tool in investigating the role of the articulatory motor cortex in speech perception. Researchers have used single-pulse and repetitive TMS to stimulate the lip representation in the motor cortex. The excitability of the lip motor representation can be investigated by applying single TMS pulses over this cortical area and recording TMS-induced motor evoked potentials (MEPs) via electrodes attached to the lip muscles (electromyography; EMG). Larger MEPs reflect increased cortical excitability. Studies have shown that excitability increases during listening to speech as well as during viewing speech-related movements. TMS can be used also to disrupt the lip motor representation. A 15-min train of low-frequency sub-threshold repetitive stimulation has been shown to suppress motor excitability for a further 15-20 min. This TMS-induced disruption of the motor lip representation impairs subsequent performance in demanding speech perception tasks and modulates auditory-cortex responses to speech sounds. These findings are consistent with the suggestion that the motor cortex contributes to speech perception. This article describes how to localize the lip representation in the motor cortex and how to define the appropriate stimulation intensity for carrying out both single-pulse and repetitive TMS experiments.
Affiliation(s)
- Riikka Möttönen, Jack Rogers, Kate E Watkins: Department of Experimental Psychology, University of Oxford
|
39
|
Alho J, Lin FH, Sato M, Tiitinen H, Sams M, Jääskeläinen IP. Enhanced neural synchrony between left auditory and premotor cortex is associated with successful phonetic categorization. Front Psychol 2014; 5:394. [PMID: 24834062] [PMCID: PMC4018533] [DOI: 10.3389/fpsyg.2014.00394]
Abstract
The cortical dorsal auditory stream has been proposed to mediate mapping between auditory and articulatory-motor representations in speech processing. Whether this sensorimotor integration contributes to speech perception remains an open question. Here, magnetoencephalography was used to examine connectivity between auditory and motor areas while subjects were performing a sensorimotor task involving speech sound identification and overt repetition. Functional connectivity was estimated with inter-areal phase synchrony of electromagnetic oscillations. Structural equation modeling was applied to determine the direction of information flow. Compared to passive listening, engagement in the sensorimotor task enhanced connectivity within 200 ms after sound onset bilaterally between the temporoparietal junction (TPJ) and ventral premotor cortex (vPMC), with the left-hemisphere connection showing directionality from vPMC to TPJ. Passive listening to noisy speech elicited stronger connectivity than clear speech between left auditory cortex (AC) and vPMC at ~100 ms, and between left TPJ and dorsal premotor cortex (dPMC) at ~200 ms. Information flow was estimated from AC to vPMC and from dPMC to TPJ. Connectivity strength among the left AC, vPMC, and TPJ correlated positively with the identification of speech sounds within 150 ms after sound onset, with information flowing from AC to TPJ, from AC to vPMC, and from vPMC to TPJ. Taken together, these findings suggest that sensorimotor integration mediates the categorization of incoming speech sounds through reciprocal auditory-to-motor and motor-to-auditory projections.
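Inter-areal phase synchrony of the kind estimated here is commonly quantified as a phase-locking value (PLV): the magnitude of the mean phase-difference vector across trials or time points. The sketch below is a generic illustration under that assumption, not the authors' MEG pipeline; the simulated phase series are hypothetical.

```python
import numpy as np

def phase_locking_value(phase_a, phase_b):
    """|mean resultant vector| of the phase differences between two
    signals: 1 = perfectly locked phases, values near 0 = no
    consistent phase relation."""
    return np.abs(np.mean(np.exp(1j * (phase_a - phase_b))))

rng = np.random.default_rng(0)
pa = rng.uniform(0, 2 * np.pi, 1000)

# A constant lag still counts as perfect locking...
plv_locked = phase_locking_value(pa, pa - 0.3)
# ...whereas unrelated phases average out to a small PLV.
plv_random = phase_locking_value(pa, rng.uniform(0, 2 * np.pi, 1000))
```

In practice the instantaneous phases would come from band-pass-filtered source estimates (e.g., via the Hilbert transform), and the PLV would be computed per frequency band and time window between region pairs.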
Affiliation(s)
- Jussi Alho, Hannu Tiitinen, Mikko Sams: Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland
- Fa-Hsuan Lin: Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland; Institute of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Marc Sato: Gipsa-Lab, Department of Speech and Cognition, French National Center for Scientific Research and Grenoble University, Grenoble, France
- Iiro P Jääskeläinen: Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science (BECS), School of Science, Aalto University, Espoo, Finland; MEG Core, Aalto NeuroImaging, Aalto University, Espoo, Finland; AMI Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
|