1. Perron M, Ross B, Alain C. Left motor cortex contributes to auditory phonological discrimination. Cereb Cortex 2024; 34:bhae369. PMID: 39329356; PMCID: PMC11427950; DOI: 10.1093/cercor/bhae369.
Abstract
Evidence suggests that the articulatory motor system contributes to speech perception in a context-dependent manner. This study tested 2 hypotheses using magnetoencephalography: (i) the motor cortex is involved in phonological processing, and (ii) it aids in compensating for speech-in-noise challenges. A total of 32 young adults performed a phonological discrimination task under 3 noise conditions while their brain activity was recorded using magnetoencephalography. We observed simultaneous activation in the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus when participants correctly identified pairs of syllables. This activation was significantly more pronounced for phonologically different than for identical syllable pairs. Notably, phonological differences were resolved more quickly in the left ventral primary motor cortex than in the left posterior-superior temporal gyrus. Conversely, the noise level did not modulate the activity in frontal motor regions, and the involvement of the left ventral primary motor cortex in phonological discrimination was comparable across all noise conditions. Our results show that the ventral primary motor cortex is crucial for phonological processing but not for compensation in challenging listening conditions. Simultaneous activation of the left ventral primary motor cortex and bilateral posterior-superior temporal gyrus supports an interactive model of speech perception, in which auditory and motor regions shape perception. The ventral primary motor cortex may be involved in a predictive coding mechanism that influences auditory-phonetic processing.
Affiliations
- Maxime Perron
  - Rotman Research Institute, Baycrest Academy for Research and Education, 3560 Bathurst St, North York, ON M6A 2E1, Canada
  - Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
- Bernhard Ross
  - Rotman Research Institute, Baycrest Academy for Research and Education, 3560 Bathurst St, North York, ON M6A 2E1, Canada
  - Department of Medical Biophysics, University of Toronto, 101 College Street, Toronto, ON M5G 1L7, Canada
- Claude Alain
  - Rotman Research Institute, Baycrest Academy for Research and Education, 3560 Bathurst St, North York, ON M6A 2E1, Canada
  - Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
  - Institute of Medical Science, University of Toronto, 6 Queen's Park Crescent, Toronto, ON M5S 3H2, Canada
  - Music and Health Science Research Collaboratory, University of Toronto, 90 Wellesley Street West, Toronto, ON M5S 1C5, Canada
2. Lankinen K, Ahveninen J, Uluç I, Daneshzand M, Mareyam A, Kirsch JE, Polimeni JR, Healy BC, Tian Q, Khan S, Nummenmaa A, Wang QM, Green JR, Kimberley TJ, Li S. Role of articulatory motor networks in perceptual categorization of speech signals: a 7T fMRI study. Cereb Cortex 2023; 33:11517-11525. PMID: 37851854; PMCID: PMC10724868; DOI: 10.1093/cercor/bhad384.
Abstract
Speech and language processing involve complex interactions between cortical areas necessary for articulatory movements, areas supporting auditory perception, and the regions through which these are connected and interact. Despite their fundamental importance, the precise mechanisms underlying these processes are not fully elucidated. We measured BOLD signals from normal-hearing participants using high-field 7 Tesla fMRI with 1-mm isotropic voxel resolution. The subjects performed 2 speech perception tasks (discrimination and classification) and a speech production task during the scan. By employing univariate and multivariate pattern analyses, we identified the neural signatures associated with speech production and perception. The left precentral, premotor, and inferior frontal cortex regions showed significant activations that correlated with phoneme category variability during perceptual discrimination tasks. In addition, the perceived sound categories could be decoded from signals in a region of interest defined by activation in the production task. The results support the hypothesis that articulatory motor networks in the left hemisphere, typically associated with speech production, may also play a critical role in the perceptual categorization of syllables. The study provides valuable insights into the intricate neural mechanisms that underlie speech processing.
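For illustration, the ROI-based decoding described above follows a standard MVPA recipe: train a linear classifier on trial-wise voxel patterns from a production-defined region and test, with cross-validation, whether it predicts the perceived category. The sketch below uses scikit-learn on synthetic data; the arrays, labels, and effect size are invented and are not the authors' pipeline.

```python
# Illustrative sketch only: decoding perceived syllable category (/ba/ vs /da/)
# from voxel patterns in a production-defined ROI. Data and labels are synthetic.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500
labels = rng.integers(0, 2, n_trials)          # 0 = /ba/, 1 = /da/ (hypothetical)
patterns = rng.standard_normal((n_trials, n_voxels))
patterns[labels == 1, :50] += 0.4              # weak category signal in a few voxels

clf = LinearSVC(C=1.0)                         # linear classifier on voxel patterns
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, patterns, labels, cv=cv, scoring="accuracy")
print(f"cross-validated decoding accuracy: {scores.mean():.2f}")
```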
Affiliations
- Kaisu Lankinen, Jyrki Ahveninen, Işıl Uluç, Mohammad Daneshzand, John E Kirsch, Jonathan R Polimeni, Qiyuan Tian, Sheraz Khan, Aapo Nummenmaa, Shasha Li
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
  - Harvard Medical School, Boston, MA 02115, United States
- Azma Mareyam
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- Brian C Healy
  - Partners Multiple Sclerosis Center, Brigham and Women's Hospital, Boston, MA 02115, United States
  - Department of Neurology, Harvard Medical School, Boston, MA 02115, United States
  - Biostatistics Center, Massachusetts General Hospital, Boston, MA 02114, United States
- Qing Mei Wang
  - Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, The Teaching Affiliate of Harvard Medical School, Charlestown, MA 02129, United States
- Jordan R Green
  - Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA 02129, United States
- Teresa J Kimberley
  - Department of Physical Therapy, School of Health and Rehabilitation Sciences, MGH Institute of Health Professions, Boston, MA 02129, United States
3. Xie X, Jaeger TF, Kurumada C. What we do (not) know about the mechanisms underlying adaptive speech perception: A computational framework and review. Cortex 2023; 166:377-424. PMID: 37506665; DOI: 10.1016/j.cortex.2023.05.003.
Abstract
Speech from unfamiliar talkers can be difficult to comprehend initially. These difficulties tend to dissipate with exposure, sometimes within minutes or less. Adaptivity in response to unfamiliar input is now considered a fundamental property of speech perception, and research over the past two decades has made substantial progress in identifying its characteristics. The mechanisms underlying adaptive speech perception, however, remain unknown. Past work has attributed facilitatory effects of exposure to any one of three qualitatively different hypothesized mechanisms: (1) low-level, pre-linguistic signal normalization, (2) changes in/selection of linguistic representations, or (3) changes in post-perceptual decision-making. Direct comparisons of these hypotheses, or combinations thereof, have been lacking. We describe a general computational framework for adaptive speech perception (ASP) that, for the first time, implements all three mechanisms. We demonstrate how the framework can be used to derive predictions for experiments on perception from the acoustic properties of the stimuli. Using this approach, we find that, at the level of data analysis presently employed by most studies in the field, the signature results of influential experimental paradigms do not distinguish between the three mechanisms. This highlights the need for a change in research practices, so that future experiments provide more informative results. We recommend specific changes to experimental paradigms and data analysis. All data and code for this study are shared via OSF, including the R markdown document that this article is generated from, and an R library that implements the models we present.
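For orientation, the three hypothesized mechanisms can be expressed as distinct knobs in a single toy ideal-observer categorizer: shifting the incoming cue (normalization), moving a category's likelihood (representational change), or biasing the response rule (decision change). The Python sketch below illustrates that logic only; it is not the authors' R implementation, and all parameter values are invented.

```python
# Toy illustration of three loci of adaptation in /b/-/p/ categorization from
# voice-onset time (VOT). Parameter values are arbitrary.
import numpy as np
from scipy.stats import norm

def p_respond_p(vot, mu_b=0.0, mu_p=50.0, sd=12.0, shift=0.0, bias=0.0):
    """Probability of a /p/ response to a VOT cue (ms)."""
    x = vot - shift                            # (1) pre-linguistic normalization
    like_b = norm.pdf(x, mu_b, sd)             # (2) category representations:
    like_p = norm.pdf(x, mu_p, sd)             #     exposure could move mu_p or sd
    log_odds = np.log(like_p) - np.log(like_b)
    return 1.0 / (1.0 + np.exp(-(log_odds + bias)))  # (3) post-perceptual bias

vot = 25.0                                     # ambiguous token
print(p_respond_p(vot))                        # baseline: 0.5
print(p_respond_p(vot, shift=10.0))            # normalization account
print(p_respond_p(vot, mu_p=40.0))             # representation account
print(p_respond_p(vot, bias=1.0))              # decision account
```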
Affiliations
- Xin Xie
  - Language Science, University of California, Irvine, USA
- T Florian Jaeger
  - Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
  - Computer Science, University of Rochester, Rochester, NY, USA
- Chigusa Kurumada
  - Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA
4. Liang B, Li Y, Zhao W, Du Y. Bilateral human laryngeal motor cortex in perceptual decision of lexical tone and voicing of consonant. Nat Commun 2023; 14:4710. PMID: 37543659; PMCID: PMC10404239; DOI: 10.1038/s41467-023-40445-0.
Abstract
Speech perception is believed to recruit the left motor cortex. However, the exact role of the laryngeal subregion and its right counterpart in speech perception, as well as their temporal patterns of involvement, remains unclear. To address these questions, we conducted a hypothesis-driven study, applying transcranial magnetic stimulation to the left or right dorsal laryngeal motor cortex (dLMC) while participants made perceptual decisions about Mandarin lexical tones or consonants (voicing contrast) presented with or without noise. We used psychometric functions and a hierarchical drift-diffusion model to disentangle perceptual sensitivity and dynamic decision-making parameters. Results showed that bilateral dLMCs were engaged with effector specificity, and this engagement was left-lateralized with right upregulation in noise. Furthermore, the dLMC contributed to various decision stages depending on the hemisphere and task difficulty. These findings substantially advance our understanding of the hemispheric lateralization and temporal dynamics of bilateral dLMC in sensorimotor integration during speech perceptual decision-making.
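For readers unfamiliar with the drift-diffusion framework invoked above: a decision is modeled as noisy evidence accumulation toward one of two boundaries, with drift rate, boundary separation, and non-decision time indexing distinct decision stages. A minimal forward simulation follows; the study itself fitted a hierarchical Bayesian version of this model, and all parameter values here are arbitrary.

```python
# Minimal forward simulation of a symmetric two-boundary drift-diffusion model.
# Illustrative only; TMS effects would be modeled as changes in drift,
# boundary, or non-decision time in a hierarchical fit.
import numpy as np

def simulate_ddm(drift, boundary=1.0, ndt=0.3, dt=1e-3, noise=1.0, rng=None):
    """Return (choice, RT in s) for one trial."""
    rng = rng or np.random.default_rng()
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else 0), t + ndt        # ndt = non-decision time

rng = np.random.default_rng(1)
trials = [simulate_ddm(drift=1.5, rng=rng) for _ in range(500)]
accuracy = np.mean([choice for choice, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
print(f"accuracy {accuracy:.2f}, mean RT {mean_rt:.3f} s")
```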
Affiliations
- Baishen Liang
  - Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Yanchang Li
  - Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Wanying Zhao
  - Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
- Yi Du
  - Institute of Psychology, CAS Key Laboratory of Behavioral Science, Chinese Academy of Sciences, Beijing, 100101, China
  - Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
  - CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, 200031, China
  - Chinese Institute for Brain Research, Beijing, 102206, China
5. Lankinen K, Ahveninen J, Uluç I, Daneshzand M, Mareyam A, Kirsch JE, Polimeni JR, Healy BC, Tian Q, Khan S, Nummenmaa A, Wang QM, Green JR, Kimberley TJ, Li S. Role of Articulatory Motor Networks in Perceptual Categorization of Speech Signals: A 7 T fMRI Study. bioRxiv [preprint] 2023:2023.07.02.547409. PMID: 37461673; PMCID: PMC10349975; DOI: 10.1101/2023.07.02.547409.
Abstract
BACKGROUND: The association between brain regions involved in speech production and those that play a role in speech perception is not yet fully understood. We compared speech-production-related brain activity with activations resulting from perceptual categorization of syllables using high-field 7 Tesla functional magnetic resonance imaging (fMRI) at 1-mm isotropic voxel resolution, enabling high localization accuracy compared with previous studies.
METHODS: Blood oxygenation level dependent (BOLD) signals were obtained in 20 normal-hearing subjects using a simultaneous multi-slice (SMS) 7T echo-planar imaging (EPI) acquisition with whole-head coverage and 1 mm isotropic resolution. In a speech production localizer task, subjects were asked to produce a silent lip-rounded vowel /u/ in response to the visual cue "U" or to purse their lips when they saw the cue "P". In a phoneme discrimination task, subjects were presented with pairs of syllables, which were equiprobably identical or different, along an 8-step continuum between the prototypic /ba/ and /da/ sounds. After the presentation of each stimulus pair, the subjects were asked to indicate whether the two syllables they heard were identical or different by pressing one of two buttons. In a phoneme classification task, the subjects heard only one syllable and were asked to indicate whether it was /ba/ or /da/.
RESULTS: Univariate fMRI analyses using a parametric modulation approach suggested that left motor, premotor, and frontal cortex BOLD activations correlate with phoneme category variability in the /ba/-/da/ discrimination task. In contrast, the variability related to acoustic features of the phonemes was highest in the right primary auditory cortex. Our multivariate pattern analysis (MVPA) suggested that left precentral/inferior frontal cortex areas, which were associated with speech production according to the localizer task, also play a role in perceptual categorization of the syllables.
CONCLUSIONS: The results support the hypothesis that articulatory motor networks in the left hemisphere that are activated during speech production could also have a role in perceptual categorization of syllables. Importantly, high voxel resolution combined with advanced coil technology allowed us to pinpoint the exact brain regions involved in both perception and production tasks.
Affiliations
- Kaisu Lankinen, Jyrki Ahveninen, Işıl Uluç, Mohammad Daneshzand, John E. Kirsch, Jonathan R. Polimeni, Qiyuan Tian, Sheraz Khan, Aapo Nummenmaa, Shasha Li
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
  - Harvard Medical School, Boston, MA, US
- Azma Mareyam
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Boston, MA, US
- Brian C. Healy
  - Harvard Medical School, Boston, MA, US
  - Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, the teaching affiliate of Harvard Medical School, Charlestown, MA, US
- Qing-mei Wang
  - Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, the teaching affiliate of Harvard Medical School, Charlestown, MA, US
- Jordan R. Green
  - Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA, US
- Teresa J. Kimberley
  - Department of Physical Therapy, School of Health and Rehabilitation Sciences, MGH Institute of Health Professions, Boston, MA, US
6. Berent I, Fried PJ, Theodore RM, Manning D, Pascual-Leone A. Phonetic categorization relies on motor simulation, but combinatorial phonological computations are abstract. Sci Rep 2023; 13:874. PMID: 36650234; PMCID: PMC9845317; DOI: 10.1038/s41598-023-28099-w.
Abstract
To identify a spoken word (e.g., dog), people must categorize the speech stream into distinct units (e.g., contrast dog/fog) and extract their combinatorial structure (e.g., distinguish dog/god). However, the mechanisms that support these two core functions are not fully understood. Here, we explore this question using transcranial magnetic stimulation (TMS). We show that speech categorization engages the motor system, as stimulating the lip motor area has opposite effects on labial (ba/pa) and coronal (da/ta) sounds. In contrast, the combinatorial computation of syllable structure engages Broca's area, as its stimulation disrupts sensitivity to syllable structure (compared to motor stimulation). We conclude that the two ingredients of language, categorization and combination, are distinct functions in the human brain.
Affiliations
- Iris Berent
  - Department of Psychology, Northeastern University, Boston, MA, USA
- Peter J Fried
  - Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, Boston, MA, USA
  - Department of Neurology, Harvard Medical School, Boston, MA, USA
- Rachel M Theodore
  - Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, USA
- Daniel Manning
  - Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center, Boston, MA, USA
- Alvaro Pascual-Leone
  - Department of Neurology, Harvard Medical School, Boston, MA, USA
  - Hinda and Arthur Marcus Institute for Aging Research, Deanna and Sidney Center for Memory Health, Hebrew SeniorLife, Boston, MA, USA
  - Guttmann Brain Health Institute, Barcelona, Spain
7. Dynamic auditory contributions to error detection revealed in the discrimination of Same and Different syllable pairs. Neuropsychologia 2022; 176:108388. DOI: 10.1016/j.neuropsychologia.2022.108388.
Abstract
During speech production, auditory regions operate in concert with the anterior dorsal stream to facilitate online error detection. As the dorsal stream is also known to activate during speech perception, the purpose of the current study was to probe the role of auditory regions in error detection during auditory discrimination tasks as stimuli are encoded and maintained in working memory. A priori assumptions are that sensory mismatch (i.e., error) occurs during the discrimination of Different (mismatched) but not Same (matched) syllable pairs. Independent component analysis was applied to raw EEG data recorded from 42 participants to identify bilateral auditory alpha rhythms, which were decomposed across time and frequency to reveal robust patterns of event-related synchronization (ERS; inhibition) and desynchronization (ERD; processing) over the time course of discrimination events. Results were characterized by bilateral peri-stimulus alpha ERD transitioning to alpha ERS in the late trial epoch, with ERD interpreted as evidence of working memory encoding via Analysis by Synthesis and ERS considered evidence of speech-induced suppression arising during covert articulatory rehearsal to facilitate working memory maintenance. The transition from ERD to ERS occurred later in the left hemisphere in Different trials than in Same trials, with ERD and ERS temporally overlapping during the early post-stimulus window. Results were interpreted to suggest that the sensory mismatch (i.e., error) arising from the comparison of the first and second syllable elicits further processing in the left hemisphere to support working memory encoding and maintenance. Results are consistent with auditory contributions to error detection during both encoding and maintenance stages of working memory, with encoding-stage error detection associated with stimulus concordance and maintenance-stage error detection associated with task-specific retention demands.
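ERD and ERS in this literature are conventionally quantified as the percent change in narrow-band power relative to a pre-stimulus baseline: negative values indicate desynchronization (processing), positive values synchronization (inhibition). Below is a minimal sketch of that computation on a synthetic alpha-band signal using SciPy; the signal, sampling rate, and window choices are illustrative assumptions, not the study's parameters.

```python
# Minimal ERD/ERS computation: band power from the Hilbert envelope of a
# band-passed signal, expressed as percent change from a pre-stimulus baseline.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500                                        # Hz; stimulus onset at t = 0
t = np.arange(-1.0, 2.0, 1 / fs)
amp = np.where((t > 0) & (t < 1), 0.5, 1.0)     # simulated post-stimulus power drop
eeg = amp * np.sin(2 * np.pi * 10 * t)          # 10 Hz "alpha" carrier
eeg += 0.1 * np.random.default_rng(0).standard_normal(t.size)

b, a = butter(4, [8, 13], btype="bandpass", fs=fs)
power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2
baseline = power[t < -0.2].mean()
erd = 100 * (power - baseline) / baseline       # negative = ERD, positive = ERS
print(f"mean change 0-1 s: {erd[(t > 0) & (t < 1)].mean():.0f}%")
```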
8. Rogalsky C, Basilakos A, Rorden C, Pillay S, LaCroix AN, Keator L, Mickelsen S, Anderson SW, Love T, Fridriksson J, Binder J, Hickok G. The Neuroanatomy of Speech Processing: A Large-scale Lesion Study. J Cogn Neurosci 2022; 34:1355-1375. PMID: 35640102; PMCID: PMC9274306; DOI: 10.1162/jocn_a_01876.
Abstract
The neural basis of language has been studied for centuries, yet the networks critically involved in simply identifying or understanding a spoken word remain elusive. Several functional-anatomical models of critical neural substrates of receptive speech have been proposed, including (1) auditory-related regions in the left mid-posterior superior temporal lobe, (2) motor-related regions in the left frontal lobe (in normal and/or noisy conditions), (3) the left anterior superior temporal lobe, or (4) bilateral mid-posterior superior temporal areas. One difficulty in comparing these models is that they often focus on different aspects of the sound-to-meaning pathway and are supported by different types of stimuli and tasks. Two auditory tasks that are typically used in separate studies, syllable discrimination and word comprehension, often yield different conclusions. We assessed syllable discrimination (words and nonwords) and word comprehension (clear speech and with a noise masker) in 158 individuals with focal brain damage: left (n = 113) or right (n = 19) hemisphere stroke, left (n = 18) or right (n = 8) anterior temporal lobectomy, and 26 neurologically intact controls. Discrimination and comprehension tasks are doubly dissociable both behaviorally and neurologically. In support of a bilateral model, clear speech comprehension was near ceiling in 95% of left stroke cases, and right temporal damage impaired syllable discrimination. Lesion-symptom mapping analyses for the syllable discrimination and noisy word comprehension tasks each implicated most of the left superior temporal gyrus. Comprehension but not discrimination tasks also implicated the left posterior middle temporal gyrus, whereas discrimination but not comprehension tasks also implicated more dorsal sensorimotor regions in posterior perisylvian cortex.
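Lesion-symptom mapping of the kind used here asks, voxel by voxel, whether patients whose lesions include that voxel score worse on a task than patients whose lesions spare it. Below is a toy version on synthetic data; real analyses add lesion-volume covariates, permutation-based thresholds, and multiple-comparison correction.

```python
# Toy voxel-based lesion-symptom mapping (VLSM) on synthetic data.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_patients, n_voxels = 120, 1000
lesions = rng.random((n_patients, n_voxels)) < 0.2   # binary lesion maps
scores = rng.normal(0.9, 0.05, n_patients)           # task accuracy
scores -= 0.1 * lesions[:, 10]                       # voxel 10 carries the deficit

tvals = np.full(n_voxels, np.nan)
for v in range(n_voxels):
    lesioned, spared = scores[lesions[:, v]], scores[~lesions[:, v]]
    if lesioned.size >= 5 and spared.size >= 5:      # minimum-lesion cutoff
        tvals[v] = ttest_ind(spared, lesioned).statistic
print("peak voxel:", int(np.nanargmax(tvals)))       # should recover voxel 10
```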
9. Jenson D, Saltuklaroglu T. Sensorimotor contributions to working memory differ between the discrimination of Same and Different syllable pairs. Neuropsychologia 2021; 159:107947. PMID: 34216594; DOI: 10.1016/j.neuropsychologia.2021.107947.
Abstract
Sensorimotor activity during speech perception is both pervasive and highly variable, changing as a function of the cognitive demands imposed by the task. The purpose of the current study was to evaluate whether the discrimination of Same (matched) and Different (unmatched) syllable pairs elicits different patterns of sensorimotor activity as stimuli are processed in working memory. Raw EEG data recorded from 42 participants were decomposed with independent component analysis to identify bilateral sensorimotor mu rhythms, which were recovered in 36 subjects. Time-frequency decomposition of mu rhythms revealed concurrent event-related desynchronization (ERD) in alpha and beta frequency bands across the peri- and post-stimulus time periods, which was interpreted as evidence of sensorimotor contributions to working memory encoding and maintenance. Left-hemisphere alpha/beta ERD was stronger in Different trials than Same trials during the post-stimulus period, while right-hemisphere alpha/beta ERD was stronger in Same trials than Different trials. A between-hemispheres contrast revealed no differences during Same trials, while post-stimulus alpha/beta ERD was stronger in the left hemisphere than the right during Different trials. Results were interpreted to suggest that predictive coding mechanisms lead to repetition suppression effects in Same trials. Mismatches arising from predictive coding mechanisms in Different trials shift subsequent working memory processing to the speech-dominant left hemisphere. Findings clarify how sensorimotor activity differentially supports working memory encoding and maintenance stages during speech discrimination tasks and have the potential to inform sensorimotor models of speech perception and working memory.
Affiliations
- David Jenson
  - Washington State University, Elson S. Floyd College of Medicine, Department of Speech and Hearing Sciences, Spokane, WA, USA
- Tim Saltuklaroglu
  - University of Tennessee Health Science Center, College of Health Professions, Department of Audiology and Speech Pathology, Knoxville, TN, USA
10. Maffei V, Indovina I, Mazzarella E, Giusti MA, Macaluso E, Lacquaniti F, Viviani P. Sensitivity of occipito-temporal cortex, premotor and Broca's areas to visible speech gestures in a familiar language. PLoS One 2020; 15:e0234695. PMID: 32559213; PMCID: PMC7304574; DOI: 10.1371/journal.pone.0234695.
Abstract
When looking at a speaking person, the analysis of facial kinematics contributes to language discrimination and to the decoding of the time flow of visual speech. To disentangle these two factors, we investigated behavioural and fMRI responses to familiar and unfamiliar languages when observing speech gestures with natural or reversed kinematics. Twenty Italian volunteers viewed silent video clips of speech shown as recorded (Forward, biological motion) or reversed in time (Backward, non-biological motion), in Italian (familiar language) or Arabic (unfamiliar language). fMRI revealed that language (Italian/Arabic) and time-rendering (Forward/Backward) modulated distinct areas in the ventral occipito-temporal cortex, suggesting that visual speech analysis begins in this region, earlier than previously thought. Left premotor ventral (superior subdivision) and dorsal areas were preferentially activated by the familiar language independently of time-rendering, challenging the view that the role of these regions in speech processing is purely articulatory. The left premotor ventral region in the frontal operculum, thought to include part of Broca's area, responded to the natural familiar language, consistent with the hypothesis of motor simulation of speech gestures.
Affiliations
- Vincenzo Maffei
  - Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
  - Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
  - Data Lake & BI, DOT - Technology, Poste Italiane, Rome, Italy
- Iole Indovina
  - Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
  - Departmental Faculty of Medicine and Surgery, Saint Camillus International University of Health and Medical Sciences, Rome, Italy
- Maria Assunta Giusti
  - Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Emiliano Macaluso
  - ImpAct Team, Lyon Neuroscience Research Center, Lyon, France
  - Laboratory of Neuroimaging, IRCCS Santa Lucia Foundation, Rome, Italy
- Francesco Lacquaniti
  - Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
  - Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
- Paolo Viviani
  - Laboratory of Neuromotor Physiology, IRCCS Santa Lucia Foundation, Rome, Italy
  - Centre of Space BioMedicine and Department of Systems Medicine, University of Rome Tor Vergata, Rome, Italy
11. Different neural representations for detection of symmetry in dot-patterns and in faces: A state-dependent TMS study. Neuropsychologia 2020; 138:107333. DOI: 10.1016/j.neuropsychologia.2020.107333.
12. Jenson D, Thornton D, Harkrider AW, Saltuklaroglu T. Influences of cognitive load on sensorimotor contributions to working memory: An EEG investigation of mu rhythm activity during speech discrimination. Neurobiol Learn Mem 2019; 166:107098. DOI: 10.1016/j.nlm.2019.107098.
13.
Abstract
Recent evidence suggests that the motor system may have a facilitatory role in speech perception during noisy listening conditions. Studies clearly show an association between activity in auditory and motor speech systems, but also hint at a causal role for the motor system in noisy speech perception. However, in the most compelling "causal" studies, performance was only measured at a single signal-to-noise ratio (SNR). If listening conditions must be noisy to invoke causal motor involvement, then effects will be contingent on the SNR at which they are tested. We used articulatory suppression to disrupt motor-speech areas while measuring phonemic identification across a range of SNRs. As controls, we also measured phoneme identification during passive listening, mandible gesturing, and foot-tapping conditions. Two-parameter (threshold, slope) psychometric functions were fit to the data in each condition. Our findings indicate: (1) no effect of experimental task on psychometric function slopes; and (2) a small effect of articulatory suppression, in particular, on psychometric function thresholds. The size of the latter effect was 1 dB (~5% correct) on average, suggesting, at best, a minor modulatory role of the speech motor system in perception.
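The two-parameter psychometric fit referenced above models proportion correct as a sigmoid of SNR, with the threshold giving the SNR at the curve's midpoint and the slope its steepness; a suppression effect then appears as a threshold shift at constant slope. A minimal SciPy sketch follows; the logistic form with a 0.5 chance floor and all data points are assumptions chosen for illustration, not values from the study.

```python
# Sketch of a two-parameter (threshold, slope) psychometric fit.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, threshold, slope):
    """Proportion correct as a logistic function of SNR in dB."""
    return 0.5 + 0.5 / (1.0 + np.exp(-slope * (snr - threshold)))

snr = np.array([-12.0, -9.0, -6.0, -3.0, 0.0, 3.0])            # dB, hypothetical
p_correct = np.array([0.52, 0.58, 0.71, 0.86, 0.94, 0.97])

(threshold, slope), _ = curve_fit(logistic, snr, p_correct, p0=[-6.0, 0.5])
print(f"threshold {threshold:.1f} dB, slope {slope:.2f}")
# A ~1 dB threshold shift under articulatory suppression would surface as a
# change in `threshold`, with `slope` comparable across conditions.
```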
Affiliations
- Ryan C Stokes, Jonathan H Venezia, Gregory Hickok
  - Department of Cognitive Sciences, Social and Behavioral Sciences Gateway, University of California - Irvine, Irvine, CA 92697-5100, USA
14. Thornton D, Harkrider AW, Jenson DE, Saltuklaroglu T. Sex differences in early sensorimotor processing for speech discrimination. Sci Rep 2019; 9:392. PMID: 30674942; PMCID: PMC6344575; DOI: 10.1038/s41598-018-36775-5.
Abstract
Sensorimotor activity in speech perception tasks varies as a function of context, cognitive load, and cognitive ability. This study investigated listener sex as an additional variable. Raw EEG data were collected as 21 males and 21 females discriminated /ba/ and /da/ in quiet and noisy backgrounds. Independent component analyses of data from accurately discriminated trials identified sensorimotor mu components with characteristic alpha and beta peaks in 16 members of each sex. Time-frequency decompositions showed that in quiet discrimination, females displayed stronger early mu-alpha synchronization, whereas males showed stronger mu-beta desynchronization. Findings indicate that early attentional mechanisms for speech discrimination were characterized by sensorimotor inhibition in females and predictive sensorimotor activation in males. Both sexes showed stronger early sensorimotor inhibition in noisy than in quiet discrimination conditions, suggesting sensory gating of the noise. However, the difference in neural activation between quiet and noisy conditions was greater in males than in females. Though sex differences appear unrelated to behavioral accuracy, they suggest that males and females exhibit early sensorimotor processing for speech discrimination that is fundamentally different, yet similarly adaptable to adverse conditions. Findings have implications for understanding variability in neuroimaging data and the male prevalence in various neurodevelopmental disorders with inhibitory dysfunction.
Affiliations
- Ashley W Harkrider
  - University of Tennessee Health Science Center, Knoxville, TN, 37996, USA
- David E Jenson
  - Elson S. Floyd College of Medicine, Washington State University, Spokane, WA, 99202, USA
- Tim Saltuklaroglu
  - University of Tennessee Health Science Center, Knoxville, TN, 37996, USA
15. Liebenthal E, Möttönen R. An interactive model of auditory-motor speech perception. Brain Lang 2018; 187:33-40. PMID: 29268943; PMCID: PMC6005717; DOI: 10.1016/j.bandl.2017.12.004.
Abstract
Mounting evidence indicates a role for the dorsal auditory stream, which connects temporal auditory and frontal-parietal articulatory areas, in the perceptual decoding of speech. The activation time course in auditory, somatosensory, and motor regions during speech processing is seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information, and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.
Affiliations
- Einat Liebenthal
  - Department of Psychiatry, Brigham & Women's Hospital, Harvard Medical School, Boston, USA
- Riikka Möttönen
  - Department of Experimental Psychology, University of Oxford, Oxford, UK
  - School of Psychology, University of Nottingham, Nottingham, UK
16. Glanz Iljina O, Derix J, Kaur R, Schulze-Bonhage A, Auer P, Aertsen A, Ball T. Real-life speech production and perception have a shared premotor-cortical substrate. Sci Rep 2018; 8:8898. PMID: 29891885; PMCID: PMC5995900; DOI: 10.1038/s41598-018-26801-x.
Abstract
Motor-cognitive accounts assume that the articulatory cortex is involved in language comprehension, but previous studies may have observed such an involvement as an artefact of experimental procedures. Here, we employed electrocorticography (ECoG) during natural, non-experimental behavior, combined with electrocortical stimulation mapping, to study the neural basis of real-life human verbal communication. We took advantage of ECoG's ability to capture high-gamma activity (70–350 Hz) as a spatially and temporally precise index of cortical activation during unconstrained, naturalistic speech production and perception conditions. Our findings show that an electrostimulation-defined mouth motor region located in the superior ventral premotor cortex is consistently activated during both conditions. This region became active early relative to the onset of speech production and was recruited during speech perception regardless of acoustic background noise. Our study thus pinpoints a shared ventral premotor substrate for real-life speech production and perception and characterizes its basic properties.
Affiliations
- Olga Glanz Iljina
  - GRK 1624 'Frequency Effects in Language', University of Freiburg, Freiburg, Germany
  - Department of German Linguistics, University of Freiburg, Freiburg, Germany
  - Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany
  - Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
  - BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany
  - Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany
- Johanna Derix
  - Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
  - BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany
  - Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany
- Rajbir Kaur
  - Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
  - Faculty of Medicine, University of Cologne, Cologne, Germany
- Andreas Schulze-Bonhage
  - BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany
  - Epilepsy Center, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
  - Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Peter Auer
  - GRK 1624 'Frequency Effects in Language', University of Freiburg, Freiburg, Germany
  - Department of German Linguistics, University of Freiburg, Freiburg, Germany
  - Hermann Paul School of Linguistics, University of Freiburg, Freiburg, Germany
- Ad Aertsen
  - Neurobiology and Biophysics, Faculty of Biology, University of Freiburg, Freiburg, Germany
  - Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
- Tonio Ball
  - Translational Neurotechnology Lab, Department of Neurosurgery, Medical Center - University of Freiburg, Faculty of Medicine, University of Freiburg, Freiburg, Germany
  - BrainLinks-BrainTools, University of Freiburg, Freiburg, Germany
  - Bernstein Center Freiburg, University of Freiburg, Freiburg, Germany
17. Neural networks supporting audiovisual integration for speech: A large-scale lesion study. Cortex 2018; 103:360-371. PMID: 29705718; DOI: 10.1016/j.cortex.2018.03.030.
Abstract
Auditory and visual speech information are often strongly integrated, resulting in perceptual enhancements for audiovisual (AV) speech over audio alone and sometimes yielding compelling illusory fusion percepts when AV cues are mismatched (the McGurk-MacDonald effect). Previous research has identified three candidate regions thought to be critical for AV speech integration: the posterior superior temporal sulcus (STS), early auditory cortex, and the posterior inferior frontal gyrus. We assess the causal involvement of these regions (and others) in the first large-scale (N = 100) lesion-based study of AV speech integration. Two primary findings emerged. First, behavioral performance and lesion maps for AV enhancement and illusory fusion measures indicate that classic metrics of AV speech integration are not necessarily measuring the same process. Second, lesions involving superior temporal auditory, lateral occipital visual, and multisensory zones in the STS are the most disruptive to AV speech integration. Further, when AV speech integration fails, the nature of the failure (auditory vs. visual capture) can be predicted from the location of the lesions. These findings show that AV speech processing is supported by unimodal auditory and visual cortices as well as multimodal regions such as the STS at their boundary. Motor-related frontal regions do not appear to play a role in AV speech integration.
18. Xie X, Myers E. Left Inferior Frontal Gyrus Sensitivity to Phonetic Competition in Receptive Language Processing: A Comparison of Clear and Conversational Speech. J Cogn Neurosci 2017; 30:267-280. PMID: 29160743; DOI: 10.1162/jocn_a_01208.
Abstract
The speech signal is rife with variations in phonetic ambiguity. For instance, when talkers speak in a conversational register, they demonstrate less articulatory precision, leading to greater potential for confusability at the phonetic level compared with a clear speech register. Current psycholinguistic models assume that ambiguous speech sounds activate more than one phonological category and that competition at prelexical levels cascades to lexical levels of processing. Imaging studies have shown that the left inferior frontal gyrus (LIFG) is modulated by phonetic competition between simultaneously activated categories, with increases in activation for more ambiguous tokens. Yet, these studies have often used artificially manipulated speech and/or metalinguistic tasks, which arguably may recruit neural regions that are not critical for natural speech recognition. Indeed, a prominent model of speech processing, the dual-stream model, posits that the LIFG is not involved in prelexical processing in receptive language processing. In the current study, we exploited natural variation in phonetic competition in the speech signal to investigate the neural systems sensitive to phonetic competition as listeners engage in a receptive language task. Participants heard nonsense sentences spoken in either a clear or conversational register as neural activity was monitored using fMRI. Conversational sentences contained greater phonetic competition, as estimated by measures of vowel confusability, and these sentences also elicited greater activation in a region in the LIFG. Sentence-level phonetic competition metrics uniquely correlated with LIFG activity as well. This finding is consistent with the hypothesis that the LIFG responds to competition at multiple levels of language processing and that recruitment of this region does not require an explicit phonological judgment.
19. Rogalsky C, LaCroix AN, Chen KH, Anderson SW, Damasio H, Love T, Hickok G. The Neurobiology of Agrammatic Sentence Comprehension: A Lesion Study. J Cogn Neurosci 2017; 30:234-255. PMID: 29064339; DOI: 10.1162/jocn_a_01200.
Abstract
Broca's area has long been implicated in sentence comprehension. Damage to this region is thought to be the central source of "agrammatic comprehension" in which performance is substantially worse (and near chance) on sentences with noncanonical word orders compared with canonical word order sentences (in English). This claim is supported by functional neuroimaging studies demonstrating greater activation in Broca's area for noncanonical versus canonical sentences. However, functional neuroimaging studies also have frequently implicated the anterior temporal lobe (ATL) in sentence processing more broadly, and recent lesion-symptom mapping studies have implicated the ATL and mid temporal regions in agrammatic comprehension. This study investigates these seemingly conflicting findings in 66 left-hemisphere patients with chronic focal cerebral damage. Patients completed two sentence comprehension measures, sentence-picture matching and plausibility judgments. Patients with damage including Broca's area (but excluding the temporal lobe; n = 11) on average did not exhibit the expected agrammatic comprehension pattern; for example, their performance was >80% on noncanonical sentences in the sentence-picture matching task. Patients with ATL damage (n = 18) also did not exhibit an agrammatic comprehension pattern. Across our entire patient sample, the lesions of patients with agrammatic comprehension patterns in either task had maximal overlap in posterior superior temporal and inferior parietal regions. Using voxel-based lesion-symptom mapping, we find that lower performances on canonical and noncanonical sentences in each task are both associated with damage to a large left superior temporal-inferior parietal network including portions of the ATL, but not Broca's area. Notably, however, response bias in plausibility judgments was significantly associated with damage to inferior frontal cortex, including gray and white matter in Broca's area, suggesting that the contribution of Broca's area to sentence comprehension may be related to task-related cognitive demands.
Affiliations
- Kuan-Hua Chen
  - University of Iowa
  - University of California, Berkeley
20. Pulvermüller F. Neural reuse of action perception circuits for language, concepts and communication. Prog Neurobiol 2017; 160:1-44. PMID: 28734837; DOI: 10.1016/j.pneurobio.2017.07.001.
Abstract
Neurocognitive and neurolinguistic theories make explicit statements relating specialized cognitive and linguistic processes to specific brain loci. These linking hypotheses are in need of neurobiological justification and explanation. Recent mathematical models of human language mechanisms, constrained by fundamental neuroscience principles and established knowledge about comparative neuroanatomy, offer explanations for where, when and how language is processed in the human brain. In these models, network structure and connectivity, along with action- and perception-induced correlation of neuronal activity, co-determine neurocognitive mechanisms. Language learning leads to the formation of action perception circuits (APCs) with specific distributions across cortical areas. Cognitive and linguistic processes such as speech production, comprehension, verbal working memory and prediction are modelled by activity dynamics in these APCs, and combinatorial and communicative-interactive knowledge is organized in the dynamics within, and the connections between, APCs. The network models and, in particular, the concept of distributionally specific circuits can account for some previously not well understood facts about the cortical 'hubs' for semantic processing and the motor system's role in language understanding and speech sound recognition. A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings. Throughout, recent disputes about the role of mirror neurons and grounded cognition in language and communication are assessed critically.
Affiliations
- Friedemann Pulvermüller
  - Brain Language Laboratory, Department of Philosophy & Humanities, WE4, Freie Universität Berlin, 14195 Berlin, Germany
  - Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10099 Berlin, Germany
  - Einstein Center for Neurosciences Berlin, 10117 Berlin, Germany
21. Saltuklaroglu T, Harkrider AW, Thornton D, Jenson D, Kittilstved T. EEG mu (µ) rhythm spectra and oscillatory activity differentiate stuttering from non-stuttering adults. Neuroimage 2017; 153:232-245. PMID: 28400266; PMCID: PMC5569894; DOI: 10.1016/j.neuroimage.2017.04.022.
Abstract
Stuttering is linked to sensorimotor deficits related to internal modeling mechanisms. This study compared spectral power and oscillatory activity of EEG mu (μ) rhythms between persons who stutter (PWS) and controls in listening and auditory discrimination tasks. EEG data were analyzed from passive listening in noise and accurate (same/different) discrimination of tones or syllables in quiet and noisy backgrounds. Independent component analysis identified left and/or right μ rhythms with characteristic alpha (α) and beta (β) peaks localized to premotor/motor regions in 23 of 27 PWS and 24 of 27 controls. PWS produced μ spectra with reduced β amplitudes across conditions, suggesting reduced forward modeling capacity. Group time-frequency differences were associated with noisy conditions only. PWS showed increased μ-β desynchronization when listening to noise and early in discrimination events, suggesting heightened motor activity that might be related to forward modeling deficits. PWS also showed reduced μ-α synchronization in discrimination conditions, indicating reduced sensory gating. Together, these findings indicate that spectral and oscillatory analyses of μ rhythms are sensitive to stuttering. More specifically, they can reveal stuttering-related sensorimotor processing differences in listening and auditory discrimination that may also be influenced by basal ganglia deficits.
Affiliations
- Tim Saltuklaroglu, Ashley W Harkrider, David Thornton, David Jenson, Tiffani Kittilstved
  - University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, USA
22. Treille A, Vilain C, Hueber T, Lamalle L, Sato M. Inside Speech: Multisensory and Modality-specific Processing of Tongue and Lip Speech Actions. J Cogn Neurosci 2017; 29:448-466. DOI: 10.1162/jocn_a_01057.
Abstract
Action recognition has been found to rely not only on sensory brain areas but also partly on the observer's motor system. However, whether distinct auditory and visual experiences of an action modulate sensorimotor activity remains largely unknown. In the present sparse-sampling fMRI study, we determined to what extent sensory and motor representations interact during the perception of tongue and lip speech actions. Tongue and lip speech actions were selected because tongue movements of our interlocutor are accessible via their impact on speech acoustics but not visible because of their position inside the vocal tract, whereas lip movements are both "audible" and visible. Participants were presented with auditory, visual, and audiovisual speech actions, with the visual inputs related to either a sagittal view of the tongue movements or a facial view of the lip movements of a speaker, previously recorded by an ultrasound imaging system and a video camera. Although the neural networks involved in visuolingual and visuofacial perception largely overlapped, stronger motor and somatosensory activations were observed during visuolingual perception. In contrast, stronger activity was found in auditory and visual cortices during visuofacial perception. Complementing these findings, activity in the left premotor cortex and in visual brain areas was found to correlate with visual recognition scores observed for visuolingual and visuofacial speech stimuli, respectively, whereas visual activity correlated with RTs for both stimuli. These results suggest that unimodal and multimodal processing of lip and tongue speech actions rely on common sensorimotor brain areas. They also suggest that visual processing of audible but not visible movements induces motor and visual mental simulation of the perceived actions to facilitate recognition and/or to learn the association between auditory and visual signals.
Collapse
Affiliation(s)
| | | | | | - Laurent Lamalle
- 2Université Grenoble-Alpes & CHU de Grenoble
- 3CNRS UMS 3552, Grenoble, France
| | - Marc Sato
- 4CNRS UMR 7309 & Aix-Marseille Université
| |
Collapse
|
23
|
Skipper JI, Devlin JT, Lametti DR. The hearing ear is always found close to the speaking tongue: Review of the role of the motor system in speech perception. BRAIN AND LANGUAGE 2017; 164:77-105. [PMID: 27821280 DOI: 10.1016/j.bandl.2016.10.004] [Citation(s) in RCA: 126] [Impact Index Per Article: 15.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/17/2016] [Accepted: 10/24/2016] [Indexed: 06/06/2023]
Abstract
Does "the motor system" play "a role" in speech perception? If so, where, how, and when? We conducted a systematic review that addresses these questions using both qualitative and quantitative methods. The qualitative review of behavioural, computational modelling, non-human animal, brain damage/disorder, electrical stimulation/recording, and neuroimaging research suggests that distributed brain regions involved in producing speech play specific, dynamic, and contextually determined roles in speech perception. The quantitative review employed region and network based neuroimaging meta-analyses and a novel text mining method to describe relative contributions of nodes in distributed brain networks. Supporting the qualitative review, results show a specific functional correspondence between regions involved in non-linguistic movement of the articulators, covertly and overtly producing speech, and the perception of both nonword and word sounds. This distributed set of cortical and subcortical speech production regions are ubiquitously active and form multiple networks whose topologies dynamically change with listening context. Results are inconsistent with motor and acoustic only models of speech perception and classical and contemporary dual-stream models of the organization of language and the brain. Instead, results are more consistent with complex network models in which multiple speech production related networks and subnetworks dynamically self-organize to constrain interpretation of indeterminant acoustic patterns as listening context requires.
Collapse
Affiliation(s)
- Jeremy I Skipper
- Experimental Psychology, University College London, United Kingdom.
| | - Joseph T Devlin
- Experimental Psychology, University College London, United Kingdom
| | - Daniel R Lametti
- Experimental Psychology, University College London, United Kingdom; Department of Experimental Psychology, University of Oxford, United Kingdom
| |
Collapse
|
24
|
Rosenblum LD, Dorsi J, Dias JW. The Impact and Status of Carol Fowler's Supramodal Theory of Multisensory Speech Perception. ECOLOGICAL PSYCHOLOGY 2016. [DOI: 10.1080/10407413.2016.1230373] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
|
25
|
Venezia JH, Fillmore P, Matchin W, Isenberg AL, Hickok G, Fridriksson J. Perception drives production across sensory modalities: A network for sensorimotor integration of visual speech. Neuroimage 2016; 126:196-207. [PMID: 26608242 PMCID: PMC4733636 DOI: 10.1016/j.neuroimage.2015.11.038] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2015] [Revised: 11/09/2015] [Accepted: 11/15/2015] [Indexed: 11/22/2022] Open
Abstract
Sensory information is critical for movement control, both for defining the targets of actions and providing feedback during planning or ongoing movements. This holds for speech motor control as well, where both auditory and somatosensory information have been shown to play a key role. Recent clinical research demonstrates that individuals with severe speech production deficits can show a dramatic improvement in fluency during online mimicking of an audiovisual speech signal, suggesting the existence of a visuomotor pathway for speech motor control. Here we used fMRI in healthy individuals to identify this new visuomotor circuit for speech production. Participants were asked to perceive and covertly rehearse nonsense syllable sequences presented auditorily, visually, or audiovisually. The motor act of rehearsal, which is prima facie the same whether or not it is cued with a visible talker, produced different patterns of sensorimotor activation when cued by visual or audiovisual speech (relative to auditory speech). In particular, a network of brain regions including the left posterior middle temporal gyrus and several frontoparietal sensorimotor areas activated more strongly during rehearsal cued by a visible talker versus rehearsal cued by auditory speech alone. Some of these brain regions responded exclusively to rehearsal cued by visual or audiovisual speech. This result has significant implications for models of speech motor control, for the treatment of speech output disorders, and for models of the role of speech gesture imitation in development.
Collapse
Affiliation(s)
- Jonathan H Venezia
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States.
| | - Paul Fillmore
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX 76798, United States
| | - William Matchin
- Department of Linguistics, University of Maryland, College Park, MD 20742, United States
| | - A Lisette Isenberg
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
| | - Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA 92697, United States
| | - Julius Fridriksson
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC 29208, United States
| |
Collapse
|
26
|
Choi US, Sung YW, Hong S, Chung JY, Ogawa S. Structural and functional plasticity specific to musical training with wind instruments. Front Hum Neurosci 2015; 9:597. [PMID: 26578939 PMCID: PMC4624850 DOI: 10.3389/fnhum.2015.00597] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2015] [Accepted: 10/14/2015] [Indexed: 01/19/2023] Open
Abstract
Numerous neuroimaging studies have shown structural and functional changes resulting from musical training. Among these studies, changes in primary sensory areas have mostly been related to motor functions. In this study, we looked for similar functional and structural changes in other functional modalities, such as somatosensory function, by examining the effects of musical training with wind instruments. We found significant changes in two aspects of neuroplasticity: cortical thickness and resting-state neuronal networks. A group of subjects with several years of continuous musical training who currently play in university wind ensembles showed differences in cortical thickness in lip- and tongue-related brain areas vs. non-music playing subjects. Cortical thickness in lip-related brain areas was significantly thicker, and that in tongue-related areas significantly thinner, in the music playing group compared with the non-music playing group. Association analysis of lip-related areas in the music playing group showed that the increase in cortical thickness was caused by musical training. In addition, seed-based correlation analysis showed differential activation in the precentral gyrus and supplementary motor areas (SMA) between the music and non-music playing groups. These results suggest that high-intensity training with specific musical instruments could induce structural changes in related anatomical areas and could also generate a new functional neuronal network in the brain.
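The seed-based correlation step can be illustrated with a minimal sketch: correlate a seed parcel's BOLD time course with every other parcel. The dimensions and the seed choice below are invented for the example; this is not the study's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_parcels = 240, 100          # invented resting-state dimensions
bold = rng.standard_normal((n_timepoints, n_parcels))
seed = bold[:, 0]                           # hypothetical lip-area seed parcel

# Pearson correlation of the seed with every parcel, vectorized via z-scores.
bold_z = (bold - bold.mean(axis=0)) / bold.std(axis=0)
seed_z = (seed - seed.mean()) / seed.std()
r = bold_z.T @ seed_z / n_timepoints

print("parcels most strongly coupled to the seed:", np.argsort(-r)[1:6])
```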
Collapse
Affiliation(s)
- Uk-Su Choi
- Neuroscience Research Institute, Gachon University of Medicine and Science Incheon, South Korea
| | - Yul-Wan Sung
- Kansei Fukushi Research Institute, Tohoku Fukushi University Sendai, Japan
| | - Sujin Hong
- Reid School of Music, Edinburgh College of Art, Institute for Music and Human Society Development, University of Edinburgh Edinburgh, UK
| | - Jun-Young Chung
- Neuroscience Research Institute, Gachon University of Medicine and Science Incheon, South Korea
| | - Seiji Ogawa
- Kansei Fukushi Research Institute, Tohoku Fukushi University Sendai, Japan
| |
Collapse
|
27
|
Rhone AE, Nourski KV, Oya H, Kawasaki H, Howard MA, McMurray B. Can you hear me yet? An intracranial investigation of speech and non-speech audiovisual interactions in human cortex. LANGUAGE, COGNITION AND NEUROSCIENCE 2015; 31:284-302. [PMID: 27182530 PMCID: PMC4865257 DOI: 10.1080/23273798.2015.1101145] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In everyday conversation, viewing a talker's face can provide information about the timing and content of an upcoming speech signal, resulting in improved intelligibility. Using electrocorticography, we tested whether human auditory cortex in Heschl's gyrus (HG) and on superior temporal gyrus (STG) and motor cortex on precentral gyrus (PreC) were responsive to visual/gestural information prior to the onset of sound and whether early stages of auditory processing were sensitive to the visual content (speech syllable versus non-speech motion). Event-related band power (ERBP) in the high gamma band was content-specific prior to acoustic onset on STG and PreC, and ERBP in the beta band differed in all three areas. Following sound onset, we found no evidence for content-specificity in HG, evidence for visual specificity in PreC, and specificity for both modalities in STG. These results support models of audio-visual processing in which sensory information is integrated in non-primary cortical areas.
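ERBP of the kind described here is commonly computed by band-pass filtering and taking the squared Hilbert envelope, normalized to a pre-stimulus baseline in dB. The sketch below follows that generic recipe; the sampling rate, band edges, and baseline window are assumptions rather than the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000  # Hz; assumed ECoG sampling rate

def erbp_db(trials, fs, band=(70, 150), baseline=slice(0, 200)):
    """trials: (n_trials, n_samples). Returns mean dB power vs. baseline."""
    b, a = butter(4, band, btype="bandpass", fs=fs)
    envelope = np.abs(hilbert(filtfilt(b, a, trials, axis=1), axis=1)) ** 2
    base = envelope[:, baseline].mean(axis=1, keepdims=True)
    return 10 * np.log10(envelope / base).mean(axis=0)

rng = np.random.default_rng(3)
trials = rng.standard_normal((50, 1000))   # 50 simulated 1 s trials
print(erbp_db(trials, FS).shape)           # per-sample high-gamma ERBP curve
```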
Collapse
|
28
|
Jenson D, Harkrider AW, Thornton D, Bowers AL, Saltuklaroglu T. Auditory cortical deactivation during speech production and following speech perception: an EEG investigation of the temporal dynamics of the auditory alpha rhythm. Front Hum Neurosci 2015; 9:534. [PMID: 26500519 PMCID: PMC4597480 DOI: 10.3389/fnhum.2015.00534] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2015] [Accepted: 09/14/2015] [Indexed: 11/22/2022] Open
Abstract
Sensorimotor integration (SMI) across the dorsal stream enables online monitoring of speech. Jenson et al. (2014) used independent component analysis (ICA) and event-related spectral perturbation (ERSP) analysis of electroencephalography (EEG) data to describe anterior sensorimotor (e.g., premotor cortex, PMC) activity during speech perception and production. The purpose of the current study was to identify and temporally map neural activity from posterior (i.e., auditory) regions of the dorsal stream in the same tasks. Perception tasks required "active" discrimination of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required overt production of syllable pairs and nouns. ICA performed on concatenated raw 68-channel EEG data from all tasks identified bilateral "auditory" alpha (α) components in 15 of 29 participants, localized to the left pSTG and right pMTG. ERSP analyses were performed to reveal fluctuations in the spectral power of the α rhythm clusters across time. Production conditions were characterized by significant α event-related synchronization (ERS; pFDR < 0.05) concurrent with EMG activity from speech production, consistent with speech-induced auditory inhibition. Discrimination conditions were also characterized by α ERS following stimulus offset. Auditory α ERS in all conditions temporally aligned with PMC activity reported in Jenson et al. (2014). These findings are indicative of speech-induced suppression of auditory regions, possibly via efference copy. The presence of the same pattern following stimulus offset in discrimination conditions suggests that sensorimotor contributions following speech perception reflect covert replay, and that covert replay provides one source of the motor activity previously observed in some speech perception tasks. To our knowledge, this is the first time that inhibition of auditory regions by speech has been observed in real-time with the ICA/ERSP technique.
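For readers unfamiliar with ERSP, the following sketch shows the generic computation on epoched data from a single component: a per-trial short-time Fourier transform, averaged over trials and expressed in dB relative to a baseline window (positive values indicate ERS, negative values ERD). All parameters are illustrative, assuming the first 0.5 s of each epoch is pre-stimulus.

```python
import numpy as np
from scipy.signal import spectrogram

FS = 250
rng = np.random.default_rng(4)
epochs = rng.standard_normal((40, 3 * FS))    # 40 simulated trials, 3 s each

f, t, sxx = spectrogram(epochs, fs=FS, nperseg=FS // 2, noverlap=FS // 4)
power = sxx.mean(axis=0)                      # average over trials -> (freq, time)
baseline = power[:, t < 0.5].mean(axis=1, keepdims=True)
ersp = 10 * np.log10(power / baseline)        # ERS > 0 dB, ERD < 0 dB

alpha = (f >= 8) & (f <= 13)
print("alpha ERSP time course (dB):", ersp[alpha].mean(axis=0).round(2))
```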
Collapse
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| | - Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| | - David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| | - Andrew L. Bowers
- Department of Communication Disorders, University of ArkansasFayetteville, AR, USA
| | - Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| |
Collapse
|
29
|
Adank P, Nuttall HE, Banks B, Kennedy-Higgins D. Neural bases of accented speech perception. Front Hum Neurosci 2015; 9:558. [PMID: 26500526 PMCID: PMC4594029 DOI: 10.3389/fnhum.2015.00558] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2015] [Accepted: 09/22/2015] [Indexed: 02/02/2023] Open
Abstract
The recognition of unfamiliar regional and foreign accents represents a challenging task for the speech perception system (Floccia et al., 2006; Adank et al., 2009). Despite the frequency with which we encounter such accents, the neural mechanisms supporting successful perception of accented speech are poorly understood. Nonetheless, candidate neural substrates involved in processing speech in challenging listening conditions, including accented speech, are beginning to be identified. This review outlines the neural bases associated with perception of accented speech in light of current models of speech perception and compares these data to brain areas associated with processing other speech distortions. We subsequently evaluate competing models of speech processing with regard to the neural processing of accented speech. See Cristia et al. (2012) for an in-depth overview of behavioral aspects of accent processing.
Collapse
Affiliation(s)
- Patti Adank
- Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London London, UK ; School of Psychological Sciences, University of Manchester Manchester, UK
| | - Helen E Nuttall
- Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London London, UK
| | - Briony Banks
- School of Psychological Sciences, University of Manchester Manchester, UK
| | - Daniel Kennedy-Higgins
- Division of Psychology and Language Sciences, Department of Speech, Hearing, and Phonetic Sciences, University College London London, UK
| |
Collapse
|
30
|
Smalle EHM, Rogers J, Möttönen R. Dissociating Contributions of the Motor Cortex to Speech Perception and Response Bias by Using Transcranial Magnetic Stimulation. Cereb Cortex 2015; 25:3690-8. [PMID: 25274987 PMCID: PMC4585509 DOI: 10.1093/cercor/bhu218] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022] Open
Abstract
Recent studies using repetitive transcranial magnetic stimulation (TMS) have demonstrated that disruptions of the articulatory motor cortex impair performance in demanding speech perception tasks. These findings have been interpreted as support for the idea that the motor cortex is critically involved in speech perception. However, the validity of this interpretation has been called into question, because it is unknown whether the TMS-induced disruptions in the motor cortex affect speech perception or rather response bias. In the present TMS study, we addressed this question by using signal detection theory to calculate sensitivity (i.e., d') and response bias (i.e., criterion c). We used repetitive TMS to temporarily disrupt the lip or hand representation in the left motor cortex. Participants discriminated pairs of sounds from a "ba"-"da" continuum before TMS, immediately after TMS (i.e., during the period of motor disruption), and after a 30-min break. We found that the sensitivity for between-category pairs was reduced during the disruption of the lip representation. In contrast, disruption of the hand representation temporarily reduced response bias. This double dissociation indicates that the hand motor cortex contributes to response bias during demanding discrimination tasks, whereas the articulatory motor cortex contributes to perception of speech sounds.
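The signal-detection quantities used here have a compact closed form: d' = z(H) - z(F) and c = -(z(H) + z(F)) / 2, where H and F are hit and false-alarm rates. A minimal sketch, with invented trial counts and a standard log-linear correction for extreme rates:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, fas, crs):
    """d' = z(H) - z(F) and criterion c = -(z(H) + z(F)) / 2,
    with a log-linear correction to avoid z(0) or z(1)."""
    z = NormalDist().inv_cdf
    h = (hits + 0.5) / (hits + misses + 1)   # corrected hit rate
    f = (fas + 0.5) / (fas + crs + 1)        # corrected false-alarm rate
    return z(h) - z(f), -(z(h) + z(f)) / 2

# Invented counts for one condition of a between-category discrimination block.
d, c = dprime_criterion(hits=38, misses=10, fas=12, crs=36)
print(f"d' = {d:.2f}, criterion c = {c:.2f}")
```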
Collapse
Affiliation(s)
- Eleonore H. M. Smalle
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
- Psychological Sciences Research Institute, Institute of Neuroscience, Université Catholique de Louvain, B-1348 Louvain-la-Neuve, Belgium
| | - Jack Rogers
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
| | - Riikka Möttönen
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, UK
| |
Collapse
|
31
|
Meltzer-Asscher A, Mack JE, Barbieri E, Thompson CK. How the brain processes different dimensions of argument structure complexity: evidence from fMRI. BRAIN AND LANGUAGE 2015; 142:65-75. [PMID: 25658635 PMCID: PMC4336802 DOI: 10.1016/j.bandl.2014.12.005] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2013] [Revised: 11/12/2014] [Accepted: 12/22/2014] [Indexed: 06/04/2023]
Abstract
Verbs are central to sentence processing, as they encode argument structure (AS) information, i.e., information about the syntax and interpretation of the phrases accompanying them. The behavioral and neural correlates of AS processing have primarily been investigated in sentence-level tasks, requiring both verb processing and verb-argument integration. In the current functional magnetic resonance imaging (fMRI) study, we investigated AS processing using a lexical decision task requiring only verb processing. We examined three aspects of AS complexity: number of thematic roles, number of thematic options, and mapping (non)canonicity (unaccusative vs. unergative and transitive verbs). Increased number of thematic roles elicited greater activation in the left posterior perisylvian regions claimed to support access to stored AS representations. However, the number of thematic options had no neural effects. Further, unaccusative verbs elicited longer response times and increased activation in the left inferior frontal gyrus, reflecting the processing cost of unaccusative verbs and, more generally, supporting the role of the IFG in noncanonical argument mapping.
Collapse
Affiliation(s)
- Aya Meltzer-Asscher
- Linguistics Department, Tel Aviv University, Israel; Sagol School of Neuroscience, Tel Aviv University, Israel.
| | - Jennifer E Mack
- Department of Communication Sciences and Disorders, Northwestern University, United States
| | - Elena Barbieri
- Department of Communication Sciences and Disorders, Northwestern University, United States
| | - Cynthia K Thompson
- Department of Communication Sciences and Disorders, Northwestern University, United States; Department of Neurology, Northwestern University, United States; Cognitive Neurology and Alzheimer's Disease Center, Northwestern University, United States
| |
Collapse
|
32
|
Simonyan K, Fuertinger S. Speech networks at rest and in action: interactions between functional brain networks controlling speech production. J Neurophysiol 2015; 113:2967-78. [PMID: 25673742 DOI: 10.1152/jn.00964.2014] [Citation(s) in RCA: 50] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2014] [Accepted: 02/06/2015] [Indexed: 01/08/2023] Open
Abstract
Speech production is one of the most complex human behaviors. Although brain activation during speaking has been well investigated, our understanding of interactions between the brain regions and neural networks remains limited. We combined seed-based interregional correlation analysis with graph theoretical analysis of functional MRI data during the resting state and sentence production in healthy subjects to investigate the interface and topology of functional networks originating from the key brain regions controlling speech, i.e., the laryngeal/orofacial motor cortex, inferior frontal and superior temporal gyri, supplementary motor area, cingulate cortex, putamen, and thalamus. During both resting and speaking, the interactions between these networks were bilaterally distributed and centered on the sensorimotor brain regions. However, speech production preferentially recruited the inferior parietal lobule (IPL) and cerebellum into the large-scale network, suggesting the importance of these regions in facilitating the transition from the resting state to speaking. Furthermore, the cerebellum (lobule VI) was the most prominent region showing functional influences on speech-network integration and segregation. Although networks were bilaterally distributed, interregional connectivity during speaking was stronger in the left vs. right hemisphere, which may underlie a more homogeneous overlap between the examined networks in the left hemisphere. Among these, the laryngeal motor cortex (LMC) established a core network that fully overlapped with all other speech-related networks, determining the extent of network interactions. Our data demonstrate complex interactions of large-scale brain networks controlling speech production and point to the critical role of the LMC, IPL, and cerebellum in the formation of the speech production network.
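The graph-theoretical step can be illustrated by thresholding an interregional correlation matrix into a graph and reading off centrality. The region labels, threshold, and time series below are assumptions for illustration only, not the study's analysis.

```python
import numpy as np
import networkx as nx

regions = ["LMC", "IFG", "STG", "SMA", "cingulate", "putamen", "thalamus"]
rng = np.random.default_rng(5)
ts = rng.standard_normal((200, len(regions)))   # simulated region time series
corr = np.corrcoef(ts.T)

g = nx.Graph()
g.add_nodes_from(regions)
for i in range(len(regions)):
    for j in range(i + 1, len(regions)):
        if corr[i, j] > 0.1:                    # arbitrary sparsifying threshold
            g.add_edge(regions[i], regions[j], weight=corr[i, j])

degree = dict(g.degree())
print("highest-degree node (candidate core region):", max(degree, key=degree.get))
```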
Collapse
Affiliation(s)
- Kristina Simonyan
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York; Department of Otolaryngology, Icahn School of Medicine at Mount Sinai, New York, New York
| | - Stefan Fuertinger
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York
| |
Collapse
|
33
|
Bernstein LE, Liebenthal E. Neural pathways for visual speech perception. Front Neurosci 2014; 8:386. [PMID: 25520611 PMCID: PMC4248808 DOI: 10.3389/fnins.2014.00386] [Citation(s) in RCA: 87] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/25/2014] [Accepted: 11/10/2014] [Indexed: 12/03/2022] Open
Abstract
This paper examines the questions, what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so; and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically, face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA) has been demonstrated in posterior temporal cortex, ventral and posterior to multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.
Collapse
Affiliation(s)
- Lynne E Bernstein
- Department of Speech and Hearing Sciences, George Washington University Washington, DC, USA
| | - Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin Milwaukee, WI, USA ; Department of Psychiatry, Brigham and Women's Hospital Boston, MA, USA
| |
Collapse
|
34
|
Schomers MR, Kirilina E, Weigand A, Bajbouj M, Pulvermüller F. Causal Influence of Articulatory Motor Cortex on Comprehending Single Spoken Words: TMS Evidence. Cereb Cortex 2014; 25:3894-902. [PMID: 25452575 PMCID: PMC4585521 DOI: 10.1093/cercor/bhu274] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Classic wisdom had been that motor and premotor cortex contribute to motor execution but not to higher cognition and language comprehension. In contrast, mounting evidence from neuroimaging, patient research, and transcranial magnetic stimulation (TMS) suggests sensorimotor interaction and, specifically, that the articulatory motor cortex is important for classifying meaningless speech sounds into phonemic categories. However, whether these findings speak to the comprehension issue is unclear, because language comprehension does not require explicit phonemic classification and previous results may therefore relate to factors alien to semantic understanding. We here used the standard psycholinguistic test of spoken word comprehension, the word-to-picture-matching task, and concordant TMS to articulatory motor cortex. TMS pulses were applied to primary motor cortex controlling either the lips or the tongue as subjects heard critical word stimuli starting with bilabial lip-related or alveolar tongue-related stop consonants (e.g., “pool” or “tool”). A significant cross-over interaction showed that articulatory motor cortex stimulation delayed comprehension responses for phonologically incongruent words relative to congruent ones (i.e., lip area TMS delayed “tool” relative to “pool” responses). As local TMS to articulatory motor areas differentially delays the comprehension of phonologically incongruent spoken words, we conclude that motor systems can take a causal role in semantic comprehension and, hence, higher cognition.
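The cross-over interaction reported here is, statistically, a difference of congruency effects across TMS sites. A minimal sketch with simulated per-subject reaction times; the cell means, sample size, and the single-contrast test are illustrative, not the study's analysis.

```python
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(6)
n = 20  # invented sample size
# Per-subject mean RTs (ms) per cell; values chosen to mimic a cross-over.
lip_tms_lip_word       = rng.normal(620, 40, n)  # congruent under lip TMS
lip_tms_tongue_word    = rng.normal(650, 40, n)  # incongruent under lip TMS
tongue_tms_tongue_word = rng.normal(625, 40, n)  # congruent under tongue TMS
tongue_tms_lip_word    = rng.normal(655, 40, n)  # incongruent under tongue TMS

# Interaction contrast: the (tongue word - lip word) difference under lip TMS
# minus the same difference under tongue TMS; a cross-over pushes this away
# from zero even when the main effects cancel.
interaction = (lip_tms_tongue_word - lip_tms_lip_word) \
            - (tongue_tms_tongue_word - tongue_tms_lip_word)
t, p = ttest_1samp(interaction, 0.0)
print(f"site x word-type interaction: t = {t:.2f}, p = {p:.3f}")
```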
Collapse
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
| | - Evgeniya Kirilina
- Dahlem Institute for Neuroimaging of Emotion, Freie Universität Berlin, 14195 Berlin, Germany
| | - Anne Weigand
- Dahlem Institute for Neuroimaging of Emotion, Freie Universität Berlin, 14195 Berlin, Germany Department of Psychiatry, Charité Universitätsmedizin Berlin, 14050 Berlin, Germany Berenson-Allen Center for Noninvasive Brain Stimulation, Beth Israel Deaconess Medical Center and Harvard Medical School, Boston, MA 02215, USA
| | - Malek Bajbouj
- Dahlem Institute for Neuroimaging of Emotion, Freie Universität Berlin, 14195 Berlin, Germany Department of Psychiatry, Charité Universitätsmedizin Berlin, 14050 Berlin, Germany
| | - Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, 14195 Berlin, Germany Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, 10099 Berlin, Germany
| |
Collapse
|
35
|
Jenson D, Bowers AL, Harkrider AW, Thornton D, Cuellar M, Saltuklaroglu T. Temporal dynamics of sensorimotor integration in speech perception and production: independent component analysis of EEG data. Front Psychol 2014; 5:656. [PMID: 25071633 PMCID: PMC4091311 DOI: 10.3389/fpsyg.2014.00656] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2014] [Accepted: 06/08/2014] [Indexed: 11/17/2022] Open
Abstract
Activity in anterior sensorimotor regions is found in speech production and some perception tasks. Yet, how sensorimotor integration supports these functions is unclear due to a lack of data examining the timing of activity from these regions. Beta (~20 Hz) and alpha (~10 Hz) spectral power within the EEG μ rhythm are considered indices of motor and somatosensory activity, respectively. In the current study, perception conditions required discrimination (same/different) of syllable pairs (/ba/ and /da/) in quiet and noisy conditions. Production conditions required covert and overt syllable productions and overt word production. Independent component analysis was performed on EEG data obtained during these conditions to (1) identify clusters of μ components common to all conditions and (2) examine real-time event-related spectral perturbations (ERSP) within alpha and beta bands. Seventeen and 15 of 20 participants produced left and right μ-components, respectively, localized to the precentral gyri. Discrimination conditions were characterized by significant (pFDR < 0.05) early alpha event-related synchronization (ERS) prior to and during stimulus presentation and later alpha event-related desynchronization (ERD) following stimulus offset. Beta ERD began early and gained strength across time. Differences were found between quiet and noisy discrimination conditions. Both overt syllable and word productions yielded similar alpha/beta ERD that began prior to production and was strongest during muscle activity. Findings during covert production were weaker than during overt production. One explanation for these findings is that μ-beta ERD indexes early predictive coding (e.g., internal modeling) and/or overt and covert attentional/motor processes. μ-alpha ERS may index inhibitory input to the premotor cortex from sensory regions prior to and during discrimination, while μ-alpha ERD may index sensory feedback during speech rehearsal and production.
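As a rough illustration of the ICA step, the sketch below unmixes simulated multichannel EEG with FastICA (a stand-in for the infomax ICA typically used on EEG); selecting μ components by their spectra and scalp projections would be a separate screening step. All sizes are invented.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_channels, n_samples = 68, 10000
sources = rng.laplace(size=(n_channels, n_samples))     # non-Gaussian sources
mixing = rng.standard_normal((n_channels, n_channels))
eeg = mixing @ sources                                  # simulated sensor data

ica = FastICA(n_components=n_channels, random_state=0, max_iter=500)
components = ica.fit_transform(eeg.T).T                 # (components, samples)
print(components.shape)   # candidate components to screen for mu rhythms
```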
Collapse
Affiliation(s)
- David Jenson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| | - Andrew L. Bowers
- Department of Communication Disorders, University of ArkansasFayetteville, AR, USA
| | - Ashley W. Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| | - David Thornton
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| | - Megan Cuellar
- Speech-Language Pathology Program, College of Health Sciences, Midwestern UniversityChicago, IL, USA
| | - Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science CenterKnoxville, TN, USA
| |
Collapse
|
36
|
Abstract
The earliest stages of cortical processing of speech sounds take place in the auditory cortex. Transcranial magnetic stimulation (TMS) studies have provided evidence that the human articulatory motor cortex also contributes to speech processing. For example, stimulation of the motor lip representation specifically influences discrimination of lip-articulated speech sounds. However, the timing of the neural mechanisms underlying these articulator-specific motor contributions to speech processing is unknown. Furthermore, it is unclear whether they depend on attention. Here, we used magnetoencephalography and TMS to investigate the effect of attention on the specificity and timing of interactions between the auditory and motor cortex during processing of speech sounds. We found that TMS-induced disruption of the motor lip representation specifically modulated the early auditory-cortex responses to lip-articulated speech sounds when they were attended. These articulator-specific modulations were left-lateralized and remarkably early, occurring 60-100 ms after sound onset. When speech sounds were ignored, the effect of this motor disruption on auditory-cortex responses was nonspecific and bilateral, and it started later, 170 ms after sound onset. The findings indicate that articulatory motor cortex can contribute to auditory processing of speech sounds even in the absence of behavioral tasks and when the sounds are not in the focus of attention. Importantly, the findings also show that attention can selectively facilitate the interaction of the auditory cortex with specific articulator representations during speech processing.
Collapse
|
37
|
Bowers AL, Saltuklaroglu T, Harkrider A, Wilson M, Toner MA. Dynamic modulation of shared sensory and motor cortical rhythms mediates speech and non-speech discrimination performance. Front Psychol 2014; 5:366. [PMID: 24847290 PMCID: PMC4019855 DOI: 10.3389/fpsyg.2014.00366] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2014] [Accepted: 04/07/2014] [Indexed: 01/17/2023] Open
Abstract
Oscillatory models of speech processing have proposed that rhythmic cortical oscillations in sensory and motor regions modulate speech sound processing from the bottom-up via phase reset at low frequencies (3-10 Hz) and from the top-down via the disinhibition of alpha/beta rhythms (8-30 Hz). To investigate how the proposed rhythms mediate perceptual performance, electroencephalography (EEG) was recorded while participants passively listened to or actively identified speech and tone-sweeps in a two-alternative forced-choice discrimination task in noise, presented at high and low signal-to-noise ratios. EEG data were decomposed using independent component analysis and clustered across participants using principal component methods in EEGLAB. Left and right hemisphere sensorimotor and posterior temporal lobe clusters were identified. Alpha and beta suppression was associated with active tasks only in sensorimotor and temporal clusters. In posterior temporal clusters, increases in phase reset at low frequencies were driven by the quality of bottom-up acoustic information for speech and non-speech stimuli, whereas phase reset in sensorimotor clusters was associated with top-down active task demands. A comparison of correct discrimination trials to those identified at chance showed an earlier performance-related effect for the left sensorimotor cluster relative to the left temporal lobe cluster during the syllable discrimination task only. The right sensorimotor cluster was associated with performance-related differences for tone-sweep stimuli only. Findings are consistent with internal model accounts suggesting that early efferent sensorimotor models transmitted along alpha and beta channels reflect a release from inhibition related to active attention to auditory discrimination. Results are discussed in the broader context of dynamic, oscillatory models of cognition proposing that top-down internally generated states interact with bottom-up sensory processing to enhance task performance.
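Phase reset at low frequencies is typically quantified with inter-trial coherence (ITC): the resultant length of unit phase vectors across trials. A hand-rolled sketch under invented sizes, using a manually built Morlet wavelet at an assumed 5 Hz target frequency:

```python
import numpy as np

FS = 250
rng = np.random.default_rng(8)
epochs = rng.standard_normal((60, FS))         # 60 simulated trials, 1 s each

freq = 5.0                                     # Hz, inside the 3-10 Hz band
sigma = 6 / (2 * np.pi * freq)                 # ~6-cycle Gaussian envelope
t = np.arange(-0.5, 0.5, 1 / FS)
wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-(t ** 2) / (2 * sigma ** 2))

# Complex convolution per trial, then ITC = |mean of unit phase vectors|;
# 0 means random phase across trials, 1 means perfect phase locking.
analytic = np.array([np.convolve(tr, wavelet, mode="same") for tr in epochs])
itc = np.abs(np.mean(analytic / np.abs(analytic), axis=0))
print("peak 5 Hz ITC:", round(float(itc.max()), 2))
```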
Collapse
Affiliation(s)
- Andrew L Bowers
- Department of Communication Disorders, University of Arkansas, Fayetteville AR, USA
| | - Tim Saltuklaroglu
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville TN, USA
| | - Ashley Harkrider
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville TN, USA
| | - Matt Wilson
- School of Allied Health, Northern Illinois University, DeKalb IL, USA
| | - Mary A Toner
- Department of Communication Disorders, University of Arkansas, Fayetteville AR, USA
| |
Collapse
|
38
|
Noise differentially impacts phoneme representations in the auditory and speech motor systems. Proc Natl Acad Sci U S A 2014; 111:7126-31. [PMID: 24778251 DOI: 10.1073/pnas.1318738111] [Citation(s) in RCA: 151] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Although it is well accepted that the speech motor system (SMS) is activated during speech perception, the functional role of this activation remains unclear. Here we test the hypothesis that the redundant motor activation contributes to categorical speech perception under adverse listening conditions. In this functional magnetic resonance imaging study, participants identified one of four phoneme tokens (/ba/, /ma/, /da/, or /ta/) under one of six signal-to-noise ratio (SNR) levels (-12, -9, -6, -2, 8 dB, and no noise). Univariate and multivariate pattern analyses were used to determine the role of the SMS during perception of noise-impoverished phonemes. Results revealed a negative correlation between neural activity and perceptual accuracy in the left ventral premotor cortex and Broca's area. More importantly, multivoxel patterns of activity in the left ventral premotor cortex and Broca's area exhibited effective phoneme categorization when SNR ≥ -6 dB. This is in sharp contrast with phoneme discriminability in bilateral auditory cortices and sensorimotor interface areas (e.g., left posterior superior temporal gyrus), which was reliable only when the noise was extremely weak (SNR > 8 dB). Our findings provide strong neuroimaging evidence for a greater robustness of the SMS than auditory regions for categorical speech perception in noise. Under adverse listening conditions, better discriminative activity in the SMS may compensate for loss of specificity in the auditory system via sensorimotor integration.
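The multivoxel pattern analysis logic translates to a cross-validated classifier run separately at each SNR level. The sketch below uses simulated ROI patterns and a linear SVM as a generic stand-in for the study's actual features and classifier; only the SNR levels and the four phoneme classes come from the abstract.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
snr_levels = [-12, -9, -6, -2, 8, None]      # dB; None = no noise
n_trials, n_voxels = 120, 80                 # invented ROI dimensions

for snr in snr_levels:
    X = rng.standard_normal((n_trials, n_voxels))   # simulated voxel patterns
    y = rng.integers(0, 4, n_trials)                # /ba/, /ma/, /da/, /ta/
    acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5).mean()
    print(f"SNR {snr}: decoding accuracy = {acc:.2f} (chance = 0.25)")
```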
Collapse
|
39
|
Hickok G. Toward an Integrated Psycholinguistic, Neurolinguistic, Sensorimotor Framework for Speech Production. LANGUAGE AND COGNITIVE PROCESSES 2014; 29:52-59. [PMID: 24563567 PMCID: PMC3927912 DOI: 10.1080/01690965.2013.852907] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Affiliation(s)
- Gregory Hickok
- University of California, Department of Cognitive Sciences, Irvine, CA 92697, USA,
| |
Collapse
|
40
|
Matchin W, Groulx K, Hickok G. Audiovisual speech integration does not rely on the motor system: evidence from articulatory suppression, the McGurk effect, and fMRI. J Cogn Neurosci 2013; 26:606-20. [PMID: 24236768 DOI: 10.1162/jocn_a_00515] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Visual speech influences the perception of heard speech. A classic example of this is the McGurk effect, whereby an auditory /pa/ overlaid onto a visual /ka/ induces the fusion percept of /ta/. Recent behavioral and neuroimaging research has highlighted the importance of both articulatory representations and motor speech regions of the brain, particularly Broca's area, in audiovisual (AV) speech integration. Alternatively, AV speech integration may be accomplished by the sensory system through multisensory integration in the posterior STS. We assessed the claims regarding the involvement of the motor system in AV integration in two experiments: (i) examining the effect of articulatory suppression on the McGurk effect and (ii) determining whether motor speech regions show an AV integration profile. The hypothesis for experiment (i) was that, if the motor system plays a role in McGurk fusion, distracting the motor system through articulatory suppression should result in a reduction of McGurk fusion. The results of experiment (i) showed that articulatory suppression produced no such reduction, suggesting that the motor system is not responsible for the McGurk effect. The hypothesis for experiment (ii) was that, if the brain activation to AV speech in motor regions (such as Broca's area) reflects AV integration, the profile of activity should reflect AV integration: AV > AO (auditory only) and AV > VO (visual only). The results of experiment (ii) demonstrate that motor speech regions do not show this integration profile, whereas the posterior STS does. Instead, activity in motor regions is task dependent. The combined results suggest that AV speech integration does not rely on the motor system.
Collapse
|
41
|
Wong B, Szücs D. Single-digit Arabic numbers do not automatically activate magnitude representations in adults or in children: evidence from the symbolic same-different task. Acta Psychol (Amst) 2013; 144:488-98. [PMID: 24076332 PMCID: PMC3842502 DOI: 10.1016/j.actpsy.2013.08.006] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2012] [Revised: 08/08/2013] [Accepted: 08/20/2013] [Indexed: 12/03/2022] Open
Abstract
We investigated whether the mere presentation of single-digit Arabic numbers activates their magnitude representations using a visually-presented symbolic same-different task for 20 adults and 15 children. Participants saw two single-digit Arabic numbers on a screen and judged whether the numbers were the same or different. We examined whether reaction time in this task was primarily driven by (objective or subjective) perceptual similarity, or by the numerical difference between the two digits. We reasoned that, if Arabic numbers automatically activate magnitude representations, a numerical function would best predict reaction time; but if Arabic numbers do not automatically activate magnitude representations, a perceptual function would best predict reaction time. Linear regressions revealed that a perceptual function, specifically subjective visual similarity, was the best and only significant predictor of reaction time in adults and in children. These data strongly suggest that, in this task, single-digit Arabic numbers do not necessarily automatically activate magnitude representations in adults or in children. As the first study to explicitly examine the developmental importance of perceptual factors in the symbolic same-different task, we found no significant differences between adults and children in their reliance on perceptual information in this task. Based on our findings, we propose that visual properties may play a key role in symbolic number judgements.
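The model comparison described here amounts to regressing RT on each candidate predictor and comparing fit. A toy sketch, with invented similarity ratings and RTs deliberately constructed so that the similarity predictor wins; nothing below reproduces the study's data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(10)
pairs = [(a, b) for a in range(1, 10) for b in range(a + 1, 10)]
num_diff = np.array([abs(a - b) for a, b in pairs], float)
similarity = rng.uniform(0, 1, len(pairs))       # stand-in subjective ratings
rt = 600 + 80 * similarity + rng.normal(0, 10, len(pairs))  # similarity-driven

for name, x in [("numerical difference", num_diff),
                ("visual similarity", similarity)]:
    r2 = LinearRegression().fit(x[:, None], rt).score(x[:, None], rt)
    print(f"{name}: R^2 = {r2:.2f}")
```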
Collapse
Affiliation(s)
- Becky Wong
- Centre for Neuroscience in Education, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom.
| | | |
Collapse
|
42
|
Specht K. Neuronal basis of speech comprehension. Hear Res 2013; 307:121-35. [PMID: 24113115 DOI: 10.1016/j.heares.2013.09.011] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2013] [Revised: 09/15/2013] [Accepted: 09/19/2013] [Indexed: 01/18/2023]
Abstract
Verbal communication does not rely only on the simple perception of auditory signals. Rather, it is a parallel and integrative processing of linguistic and non-linguistic information, involving temporal and frontal areas in particular. This review describes the inherent complexity of auditory speech comprehension from a functional-neuroanatomical perspective. The review is divided into two parts. The first part discusses structural and functional asymmetry of language-relevant structures. The second part discusses recent neuroimaging studies, which coherently demonstrate that speech comprehension processes rely on a hierarchical network involving the temporal, parietal, and frontal lobes. Further, the results support the dual-stream model for speech comprehension, with a dorsal stream for auditory-motor integration and a ventral stream for extracting meaning, but also for the processing of sentences and narratives. Specific patterns of functional asymmetry between the left and right hemisphere can also be demonstrated. The review concludes with a discussion of interactions between the dorsal and ventral streams, particularly the involvement of motor-related areas in speech perception processes, and outlines some remaining unresolved issues. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Collapse
Affiliation(s)
- Karsten Specht
- Department of Biological and Medical Psychology, University of Bergen, Jonas Lies vei 91, 5009 Bergen, Norway; Department for Medical Engineering, Haukeland University Hospital, Bergen, Norway.
| |
Collapse
|
43
|
Adank P, Rueschemeyer SA, Bekkering H. The role of accent imitation in sensorimotor integration during processing of intelligible speech. Front Hum Neurosci 2013; 7:634. [PMID: 24109447 PMCID: PMC3789941 DOI: 10.3389/fnhum.2013.00634] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2013] [Accepted: 09/12/2013] [Indexed: 11/13/2022] Open
Abstract
Recent theories on how listeners maintain perceptual invariance despite variation in the speech signal allocate a prominent role to imitation mechanisms. Notably, these simulation accounts propose that motor mechanisms support perception of ambiguous or noisy signals. Indeed, imitation of ambiguous signals, e.g., accented speech, has been found to aid effective speech comprehension. Here, we explored the possibility that imitation in speech benefits perception by increasing activation in speech perception and production areas. Participants rated the intelligibility of sentences spoken in an unfamiliar accent of Dutch in a functional Magnetic Resonance Imaging experiment. Next, participants in one group repeated the sentences in their own accent, while a second group vocally imitated the accent. Finally, both groups rated the intelligibility of accented sentences in a post-test. The neuroimaging results showed an interaction between type of training and pre- and post-test sessions in left Inferior Frontal Gyrus, Supplementary Motor Area, and left Superior Temporal Sulcus. Although alternative explanations such as task engagement and fatigue need to be considered as well, the results suggest that imitation may aid effective speech comprehension by supporting sensorimotor integration.
Collapse
Affiliation(s)
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London London, UK ; Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen Nijmegen, Netherlands
| | | | | |
Collapse
|
44
|
Golestani N, Hervais-Adelman A, Obleser J, Scott SK. Semantic versus perceptual interactions in neural processing of speech-in-noise. Neuroimage 2013; 79:52-61. [DOI: 10.1016/j.neuroimage.2013.04.049] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2012] [Revised: 04/08/2013] [Accepted: 04/16/2013] [Indexed: 10/26/2022] Open
|
45
|
Abstract
There is little doubt that predictive coding is an important mechanism in language processing, and indeed in information processing generally. However, it is less clear whether the action system is the source of such predictions during perception. Here I summarize the computational problem with motor prediction for perceptual processes and argue instead for a dual-stream model of predictive coding.
Collapse
Affiliation(s)
- Gregory Hickok
- Department of Cognitive Sciences, University of California, Irvine, CA 92697, USA.
| |
Collapse
|
46
|
Grabski K, Tremblay P, Gracco VL, Girin L, Sato M. A mediating role of the auditory dorsal pathway in selective adaptation to speech: a state-dependent transcranial magnetic stimulation study. Brain Res 2013; 1515:55-65. [PMID: 23542585 DOI: 10.1016/j.brainres.2013.03.024] [Citation(s) in RCA: 22] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2012] [Revised: 03/17/2013] [Accepted: 03/21/2013] [Indexed: 11/30/2022]
Abstract
In addition to sensory processing, recent neurobiological models of speech perception postulate the existence of a left auditory dorsal processing stream, linking auditory speech representations in the auditory cortex with articulatory representations in the motor system, through sensorimotor interaction interfaced in the supramarginal gyrus and/or the posterior part of the superior temporal gyrus. The present state-dependent transcranial magnetic stimulation study aimed to determine whether speech recognition is indeed mediated by the auditory dorsal pathway, by examining the causal contribution of the left ventral premotor cortex, supramarginal gyrus, and posterior part of the superior temporal gyrus during an auditory syllable identification/categorization task. To this aim, participants listened to a sequence of /ba/ syllables before undergoing a two-alternative forced-choice auditory syllable decision task on ambiguous syllables (ranging across the categorical boundary between /ba/ and /da/). Consistent with previous studies on selective adaptation to speech, following adaptation to /ba/, participants' responses were biased towards /da/. In contrast, in a control condition without prior auditory adaptation, no such bias was observed. Crucially, compared to the results observed without stimulation, single-pulse transcranial magnetic stimulation delivered at the onset of each target stimulus interacted with the initial state of each stimulated brain area by enhancing the adaptation effect. These results demonstrate that the auditory dorsal pathway contributes to auditory speech adaptation.
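Adaptation-induced bias of this kind is often quantified by fitting a logistic psychometric function over the /ba/-/da/ continuum and comparing the 50% category boundary across conditions. A sketch with simulated responses; the continuum steps, slopes, and the size of the boundary shift are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of /da/ responses as a function of continuum position x."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.linspace(0, 1, 9)                  # continuum: /ba/ (0) to /da/ (1)
rng = np.random.default_rng(11)
p_base  = logistic(steps, 0.50, 12)           # no-adaptation condition
p_adapt = logistic(steps, 0.42, 12)           # boundary shifted toward /da/
resp_base  = rng.binomial(40, p_base) / 40    # observed /da/ proportions
resp_adapt = rng.binomial(40, p_adapt) / 40

(b0, _), _ = curve_fit(logistic, steps, resp_base,  p0=[0.5, 10])
(b1, _), _ = curve_fit(logistic, steps, resp_adapt, p0=[0.5, 10])
print(f"boundary shift after adaptation: {b0 - b1:.3f} continuum units")
```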
Collapse
Affiliation(s)
- Krystyna Grabski
- GIPSA-lab, Département Parole & Cognition, CNRS & Grenoble Université, France.
| | | | | | | | | |
Collapse
|
47
|
Renzi C, Schiavi S, Carbon CC, Vecchi T, Silvanto J, Cattaneo Z. Processing of featural and configural aspects of faces is lateralized in dorsolateral prefrontal cortex: a TMS study. Neuroimage 2013; 74:45-51. [PMID: 23435211 DOI: 10.1016/j.neuroimage.2013.02.015] [Citation(s) in RCA: 54] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2012] [Revised: 01/22/2013] [Accepted: 02/02/2013] [Indexed: 11/29/2022] Open
Abstract
Facial recognition relies on distinct and parallel types of processing: featural processing focuses on the individual components of a face (e.g., the shape or the size of the eyes), whereas configural (or "relational") processing considers the spatial interrelationships among the single facial components (e.g., distance of the mouth from the nose). Previous neuroimaging evidence has suggested that featural and configural processes may rely on different brain circuits. By using rTMS, here we show for the first time a double dissociation in dorsolateral prefrontal cortex for different aspects of face processing: in particular, TMS over the left middle frontal gyrus (BA8) selectively disrupted featural processing, whereas TMS over the right inferior frontal gyrus (BA44) selectively interfered with configural processing of faces. By establishing a causal link between activation in left and right prefrontal areas and different modes of face processing, our data extend previous neuroimaging evidence and may have important implications in the study of face-processing deficits, such as those manifested in prosopagnosia and autistic spectrum disorders.
Collapse
Affiliation(s)
- Chiara Renzi
- Brain Connectivity Center, IRCCS Mondino, Pavia, Italy
| | | | | | | | | | | |
Collapse
|