1
Wilt H, Wu Y, Evans BG, Adank P. Automatic imitation of speech is enhanced for non-native sounds. Psychon Bull Rev 2024; 31:1114-1130. PMID: 37848661. PMCID: PMC11192695. DOI: 10.3758/s13423-023-02394-z.
Abstract
Simulation accounts of speech perception posit that speech is covertly imitated to support perception in a top-down manner. Behaviourally, covert imitation is measured through the stimulus-response compatibility (SRC) task. In each trial of a speech SRC task, participants produce a target speech sound whilst perceiving a speech distractor that either matches the target (compatible condition) or does not (incompatible condition). The degree to which the distractor is covertly imitated is captured by the automatic imitation effect, computed as the difference in response times (RTs) between compatible and incompatible trials. Simulation accounts disagree on whether covert imitation is enhanced when speech perception is challenging or instead when the speech signal is most familiar to the speaker. To test these accounts, we conducted three experiments in which participants completed SRC tasks with native and non-native sounds. Experiment 1 uncovered larger automatic imitation effects in an SRC task with non-native sounds than with native sounds. Experiment 2 replicated the finding online, demonstrating its robustness and the applicability of speech SRC tasks online. Experiment 3 intermixed native and non-native sounds within a single SRC task to disentangle effects of perceiving non-native sounds from confounding effects of producing non-native speech actions. This last experiment confirmed that automatic imitation is enhanced for non-native speech distractors, supporting a compensatory function of covert imitation in speech perception. The experiment also uncovered a separate effect of producing non-native speech actions on enhancing automatic imitation effects.
Affiliation(s)
- Hannah Wilt
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK.
- Yuchunzi Wu
- Department of Neural and Cognitive Sciences, New York University Shanghai, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Sciences at New York University Shanghai, Shanghai, China
- Bronwen G Evans
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Patti Adank
- Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
2
Borland MS, Buell EP, Riley JR, Carroll AM, Moreno NA, Sharma P, Grasse KM, Buell JM, Kilgard MP, Engineer CT. Precise sound characteristics drive plasticity in the primary auditory cortex with VNS-sound pairing. Front Neurosci 2023; 17:1248936. PMID: 37732302. PMCID: PMC10508341. DOI: 10.3389/fnins.2023.1248936.
Abstract
Introduction: Repeatedly pairing a tone with vagus nerve stimulation (VNS) alters frequency tuning across the auditory pathway. Pairing VNS with speech sounds selectively enhances the primary auditory cortex response to the paired sounds. It is not yet known how altering the speech sounds paired with VNS alters responses. In this study, we test the hypothesis that the sounds that are presented and paired with VNS will influence the neural plasticity observed following VNS-sound pairing.
Methods: To explore the relationship between acoustic experience and neural plasticity, responses were recorded from primary auditory cortex (A1) after VNS was repeatedly paired with the speech sounds 'rad' and 'lad' or paired with only the speech sound 'rad' while 'lad' was an unpaired background sound.
Results: Pairing both sounds with VNS increased the response strength and neural discriminability of the paired sounds in the primary auditory cortex. Surprisingly, pairing only 'rad' with VNS did not alter A1 responses.
Discussion: These results suggest that the specific acoustic contrasts associated with VNS can powerfully shape neural activity in the auditory pathway. Methods to promote plasticity in the central auditory system represent a new therapeutic avenue to treat auditory processing disorders. Understanding how different sound contrasts and neural activity patterns shape plasticity could have important clinical implications.
Affiliation(s)
- Michael S. Borland
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Elizabeth P. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Jonathan R. Riley
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Alan M. Carroll
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Nicole A. Moreno
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Pryanka Sharma
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Katelyn M. Grasse
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas, Richardson, TX, United States
- John M. Buell
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Michael P. Kilgard
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
- Crystal T. Engineer
- Department of Neuroscience, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, United States
- Texas Biomedical Device Center, The University of Texas at Dallas, Richardson, TX, United States
3
Mark JA, Ayaz H, Callan DE. Simultaneous fMRI and tDCS for Enhancing Training of Flight Tasks. Brain Sci 2023; 13:1024. PMID: 37508957. PMCID: PMC10377527. DOI: 10.3390/brainsci13071024.
Abstract
There is a gap in our understanding of how best to apply transcranial direct-current stimulation (tDCS) to enhance learning in complex, realistic, and multifocus tasks such as aviation. Our goal is to assess the effects of tDCS and feedback training on task performance, brain activity, and connectivity using functional magnetic resonance imaging (fMRI). Experienced glider pilots were recruited to perform a one-day, three-run flight-simulator task involving varying difficulty conditions and a secondary auditory task, mimicking real flight requirements. The stimulation group (versus sham) received 1.5 mA high-definition tDCS (HD-tDCS) over the right dorsolateral prefrontal cortex (DLPFC) for 30 min during the training. Whole-brain fMRI was collected before, during, and after stimulation. Active stimulation improved piloting performance both during and post-training, particularly in novice pilots. The fMRI revealed a number of tDCS-induced effects on brain activation, including an increase in the left cerebellum and bilateral basal ganglia for the most difficult conditions, an increase in DLPFC activation and connectivity to the cerebellum during stimulation, and an inhibition in the secondary task-related auditory cortex and Broca's area. Here, we show that stimulation increases activity and connectivity in flight-related brain areas, particularly in novices, and increases the brain's ability to focus on flying and ignore distractors. These findings can guide applied neurostimulation in real pilot training to enhance skill acquisition and can be applied widely in other complex perceptual-motor real-world tasks.
Affiliation(s)
- Jesse A Mark
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA
- Hasan Ayaz
- School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA
- Department of Psychological and Brain Sciences, College of Arts and Sciences, Drexel University, Philadelphia, PA 19104, USA
- Drexel Solutions Institute, Drexel University, Philadelphia, PA 19104, USA
- A.J. Drexel Autism Institute, Drexel University, Philadelphia, PA 19104, USA
- Department of Family and Community Health, University of Pennsylvania, Philadelphia, PA 19104, USA
- Center for Injury Research and Prevention, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Daniel E Callan
- Brain Information Communication Research Laboratory, Advanced Telecommunications Research Institute International, Kyoto 619-0288, Japan
4
Elmer S, Besson M, Rodriguez-Fornells A, Giroud N. Foreign speech sound discrimination and associative word learning lead to a fast reconfiguration of resting-state networks. Neuroimage 2023; 271:120026. PMID: 36921678. DOI: 10.1016/j.neuroimage.2023.120026.
Abstract
Learning new words in an unfamiliar language is a complex endeavor that requires the orchestration of multiple perceptual and cognitive functions. Although the neural mechanisms governing word learning are becoming better understood, little is known about the predictive value of resting-state (RS) metrics for foreign word discrimination and word learning attainment. In addition, it is still unknown which of the multistep processes involved in word learning have the potential to rapidly reconfigure RS networks. To address these research questions, we recorded electroencephalography (EEG) from forty participants and examined scalp-based power spectra, source-based spectral density maps and functional connectivity metrics before (RS1), in between (RS2) and after (RS3) a series of tasks which are known to facilitate the acquisition of new words in a foreign language, namely word discrimination, word-referent mapping and semantic generalization. Power spectra at the scalp level consistently revealed a reconfiguration of RS networks as a function of foreign word discrimination (RS1 vs. RS2) and word learning (RS1 vs. RS3) tasks in the delta, lower and upper alpha, and upper beta frequency ranges. By contrast, functional reconfigurations at the source level were restricted to the theta (spectral density maps) and to the lower and upper alpha frequency bands (spectral density maps and functional connectivity). Notably, scalp RS changes related to the word discrimination tasks (difference between RS2 and RS1) correlated with word discrimination abilities (upper alpha band) and semantic generalization performance (theta and upper alpha bands), whereas functional changes related to the word learning tasks (difference between RS3 and RS1) correlated with word discrimination scores (lower alpha band).
Taken together, these results highlight that foreign speech sound discrimination and word learning have the potential to rapidly reconfigure RS networks at multiple functional scales.
Affiliation(s)
- Stefan Elmer
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Bellvitge Biomedical Research Institute, Barcelona, Spain; Competence center Language & Medicine, University of Zurich, Switzerland.
- Mireille Besson
- Laboratoire de Neurosciences Cognitives, Université Publique de France, CNRS & Aix-Marseille University, Marseille, France
- Antoni Rodriguez-Fornells
- Bellvitge Biomedical Research Institute, Barcelona, Spain; University of Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats, Barcelona, Spain
- Nathalie Giroud
- Department of Computational Linguistics, Computational Neuroscience of Speech & Hearing, University of Zurich, Zurich, Switzerland; Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland; Competence center Language & Medicine, University of Zurich, Switzerland
5
Bermúdez-Margaretto B, Gallo F, Novitskiy N, Myachykov A, Petrova A, Shtyrov Y. Ultra-rapid and automatic interplay between L1 and L2 semantics in late bilinguals: EEG evidence. Cortex 2022; 151:147-161. DOI: 10.1016/j.cortex.2022.03.004.
6
Xi J, Xu H, Zhu Y, Zhang L, Shu H, Zhang Y. Categorical Perception of Chinese Lexical Tones by Late Second Language Learners With High Proficiency: Behavioral and Electrophysiological Measures. J Speech Lang Hear Res 2021; 64:4695-4704. PMID: 34735263. DOI: 10.1044/2021_jslhr-20-00210.
Abstract
Purpose: Although acquisition of Chinese lexical tones by second language (L2) learners has been intensively investigated, very few studies focused on categorical perception (CP) of lexical tones by highly proficient L2 learners. This study was designed to address this issue with behavioral and electrophysiological measures.
Method: Behavioral identification and auditory event-related potential (ERP) components for speech discrimination, including mismatch negativity (MMN), N2b, and P3b, were measured in 23 native Korean speakers who were highly proficient late L2 learners of Chinese. For the ERP measures, both passive and active listening tasks were administered to examine the automatic and attention-controlled discriminative responses to within- and across-category differences for carefully chosen stimuli from a lexical tone continuum.
Results: The behavioral task revealed a native-like identification function of the tonal continuum. Correspondingly, the active oddball task demonstrated larger P3b amplitudes for the across-category than within-category deviants in the left recording site, indicating clear CP of lexical tones in the attentive condition. By contrast, similar MMN responses in the right recording site were elicited by both the across- and within-category deviants, indicating the absence of CP effect with automatic phonological processing of lexical tones at the pre-attentive stage even in L2 learners with high Chinese proficiency.
Conclusion: Although behavioral data showed clear evidence of categorical perception of lexical tones in proficient L2 learners, ERP measures from passive and active listening tasks demonstrated fine-grained sensitivity in terms of response polarity, latency, and laterality in revealing different aspects of auditory versus linguistic processing associated with speech decoding by means of largely implicit native language acquisition versus effortful explicit L2 learning.
Affiliation(s)
- Jie Xi
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Hongkai Xu
- Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, China
- Ying Zhu
- Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, China
- Linjun Zhang
- Beijing Advanced Innovation Center for Language Resources and College of Advanced Chinese Training, Beijing Language and Culture University, China
- Hua Shu
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
7
Feng G, Gan Z, Yi HG, Ell SW, Roark CL, Wang S, Wong PCM, Chandrasekaran B. Neural dynamics underlying the acquisition of distinct auditory category structures. Neuroimage 2021; 244:118565. PMID: 34543762. DOI: 10.1016/j.neuroimage.2021.118565.
Abstract
Despite the multidimensional and temporally fleeting nature of auditory signals, we quickly learn to assign novel sounds to behaviorally relevant categories. The neural systems underlying the learning and representation of novel auditory categories are far from understood. Current models argue for a rigid specialization of hierarchically organized core regions that are fine-tuned to extracting and mapping relevant auditory dimensions to meaningful categories. Scaffolded within a dual-learning systems approach, we test a competing hypothesis: the spatial and temporal dynamics of emerging auditory-category representations are not driven by the underlying dimensions but are constrained by category structure and learning strategies. To test these competing models, we used functional magnetic resonance imaging (fMRI) to assess representational dynamics during the feedback-based acquisition of novel non-speech auditory categories with identical dimensions but differing category structures: rule-based (RB) categories, hypothesized to involve an explicit sound-to-rule mapping network, and information-integration (II) categories, involving pre-decisional integration of dimensions via a procedural-based sound-to-reward mapping network. Adults were assigned to either the RB (n = 30, 19 females) or II (n = 30, 22 females) learning tasks. Despite similar behavioral learning accuracies, learning strategies derived from computational modeling and involvement of corticostriatal systems during feedback processing differed across tasks. Spatiotemporal multivariate representational similarity analysis revealed an emerging representation within an auditory sensory-motor pathway exclusively for the II learning task, prominently involving the superior temporal gyrus (STG), inferior frontal gyrus (IFG), and posterior precentral gyrus.
In contrast, the RB learning task yielded distributed neural representations within regions involved in cognitive-control and attentional processes that emerged at different time points of learning. Our results unequivocally demonstrate that auditory learners' neural systems are highly flexible and show distinct spatial and temporal patterns that are not dimension-specific but reflect underlying category structures and learning strategies.
Affiliation(s)
- Gangyi Feng
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China.
- Zhenzhong Gan
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China, School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Han Gyol Yi
- Department of Neurological Surgery, University of California, San Francisco, CA 94158, United States
- Shawn W Ell
- Department of Psychology, Graduate School of Biomedical Sciences and Engineering, University of Maine, 5742 Little Hall, Room 301, Orono, ME 04469-5742, United States
- Casey L Roark
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA 15260, United States; Center for the Neural Basis of Cognition, Pittsburgh, PA 15232, United States
- Suiping Wang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, China, School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Patrick C M Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA 15260, United States; Center for the Neural Basis of Cognition, Pittsburgh, PA 15232, United States.
8
Learning nonnative speech sounds changes local encoding in the adult human cortex. Proc Natl Acad Sci U S A 2021; 118:2101777118. PMID: 34475209. DOI: 10.1073/pnas.2101777118.
Abstract
Adults can learn to identify nonnative speech sounds with training, albeit with substantial variability in learning behavior. Increases in behavioral accuracy are associated with increased separability for sound representations in cortical speech areas. However, it remains unclear whether individual auditory neural populations all show the same types of changes with learning, or whether there are heterogeneous encoding patterns. Here, we used high-resolution direct neural recordings to examine local population response patterns, while native English listeners learned to recognize unfamiliar vocal pitch patterns in Mandarin Chinese tones. We found a distributed set of neural populations in bilateral superior temporal gyrus and ventrolateral frontal cortex, where the encoding of Mandarin tones changed throughout training as a function of trial-by-trial accuracy ("learning effect"), including both increases and decreases in the separability of tones. These populations were distinct from populations that showed changes as a function of exposure to the stimuli regardless of trial-by-trial accuracy. These learning effects were driven in part by more variable neural responses to repeated presentations of acoustically identical stimuli. Finally, learning effects could be predicted from speech-evoked activity even before training, suggesting that intrinsic properties of these populations make them amenable to behavior-related changes. Together, these results demonstrate that nonnative speech sound learning involves a wide array of changes in neural representations across a distributed set of brain regions.
9
Kemmerer D. What modulates the Mirror Neuron System during action observation? Multiple factors involving the action, the actor, the observer, the relationship between actor and observer, and the context. Prog Neurobiol 2021; 205:102128. PMID: 34343630. DOI: 10.1016/j.pneurobio.2021.102128.
Abstract
Seeing an agent perform an action typically triggers a motor simulation of that action in the observer's Mirror Neuron System (MNS). Over the past few years, it has become increasingly clear that during action observation the patterns and strengths of responses in the MNS are modulated by multiple factors. The first aim of this paper is therefore to provide the most comprehensive survey to date of these factors. To that end, 22 distinct factors are described, broken down into the following sets: six involving the action; two involving the actor; nine involving the observer; four involving the relationship between actor and observer; and one involving the context. The second aim is to consider the implications of these findings for four prominent theoretical models of the MNS: the Direct Matching Model; the Predictive Coding Model; the Value-Driven Model; and the Associative Model. These assessments suggest that although each model is supported by a wide range of findings, each one is also challenged by other findings and relatively unaffected by still others. Hence, there is now a pressing need for a richer, more inclusive model that is better able to account for all of the modulatory factors that have been identified so far.
Affiliation(s)
- David Kemmerer
- Department of Psychological Sciences, Department of Speech, Language, and Hearing Sciences, Lyles-Porter Hall, Purdue University, 715 Clinic Drive, United States.
10
Bosseler AN, Clarke M, Tavabi K, Larson ED, Hippe DS, Taulu S, Kuhl PK. Using magnetoencephalography to examine word recognition, lateralization, and future language skills in 14-month-old infants. Dev Cogn Neurosci 2020; 47:100901. PMID: 33360832. PMCID: PMC7773883. DOI: 10.1016/j.dcn.2020.100901.
Abstract
Word learning is a significant milestone in language acquisition. The second year of life marks a period of dramatic advances in infants' expressive and receptive word-processing abilities. Studies show that in adulthood, language processing is left-hemisphere dominant. However, adults learning a second language activate right-hemisphere brain functions. In infancy, acquisition of a first language involves recruitment of bilateral brain networks, and strong left-hemisphere dominance emerges by the third year. In the current study we focus on 14-month-old infants in the earliest stages of word learning, using infant magnetoencephalography (MEG) brain imaging to characterize neural activity in response to familiar and unfamiliar words. Specifically, we examine the relationship between right-hemisphere brain responses and prospective measures of vocabulary growth. As expected, MEG source modeling revealed a broadly distributed network in frontal, temporal and parietal cortex that distinguished word classes between 150 and 900 ms after word onset. Importantly, brain activity in the right frontal cortex in response to familiar words was highly correlated with vocabulary growth at 18, 21, 24, and 27 months. Specifically, higher activation to familiar words in the 150-300 ms interval was associated with faster vocabulary growth, reflecting processing efficiency, whereas higher activation to familiar words in the 600-900 ms interval was associated with slower vocabulary growth, reflecting cognitive effort. These findings inform research and theory on the involvement of right frontal cortex in specific cognitive processes and individual differences related to attention that may play an important role in the development of left-lateralized word processing.
Affiliation(s)
- Alexis N Bosseler
- Institute for Learning & Brain Sciences, University of Washington, Box 357988, Seattle, WA, 98195, USA.
- Maggie Clarke
- Institute for Learning & Brain Sciences, University of Washington, Box 357988, Seattle, WA, 98195, USA
- Kambiz Tavabi
- Institute for Learning & Brain Sciences, University of Washington, Box 357988, Seattle, WA, 98195, USA
- Eric D Larson
- Institute for Learning & Brain Sciences, University of Washington, Box 357988, Seattle, WA, 98195, USA
- Daniel S Hippe
- Department of Radiology, University of Washington, Box 354755, Seattle, WA, 98195, USA
- Samu Taulu
- Institute for Learning & Brain Sciences, University of Washington, Box 357988, Seattle, WA, 98195, USA; Department of Physics, University of Washington, Box 351560, Seattle, WA, 98195, USA
- Patricia K Kuhl
- Institute for Learning & Brain Sciences, University of Washington, Box 357988, Seattle, WA, 98195, USA; Department of Speech and Hearing Sciences, University of Washington, Box 354875, Seattle, WA, 98195, USA
11
Feng G, Yi HG, Chandrasekaran B. The Role of the Human Auditory Corticostriatal Network in Speech Learning. Cereb Cortex 2020; 29:4077-4089. PMID: 30535138. DOI: 10.1093/cercor/bhy289.
Abstract
We establish a mechanistic account of how the mature human brain functionally reorganizes to acquire and represent new speech sounds. Native speakers of English learned to categorize Mandarin lexical tone categories produced by multiple talkers using trial-by-trial feedback. We hypothesized that the corticostriatal system is a key intermediary in mediating temporal lobe plasticity and the acquisition of new speech categories in adulthood. We conducted a functional magnetic resonance imaging experiment in which participants underwent a sound-to-category mapping task. Diffusion tensor imaging data were collected, and probabilistic fiber tracking analysis was employed to assay the auditory corticostriatal pathways. Multivariate pattern analysis showed that talker-invariant novel tone category representations emerged in the left superior temporal gyrus (LSTG) within a few hundred training trials. Univariate analysis showed that the putamen, a subregion of the striatum, was sensitive to positive feedback in correctly categorized trials. With learning, functional coupling between the putamen and LSTG increased during error processing. Furthermore, fiber tractography demonstrated robust structural connectivity between the feedback-sensitive striatal regions and the LSTG regions that represent the newly learned tone categories. Our convergent findings highlight a critical role for the auditory corticostriatal circuitry in mediating the acquisition of new speech categories.
Affiliation(s)
- Gangyi Feng
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- Han Gyol Yi
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA 94158, USA
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA 15260, USA
12
Saltzman DI, Myers EB. Neural Representation of Articulable and Inarticulable Novel Sound Contrasts: The Role of the Dorsal Stream. Neurobiol Lang 2020; 1:339-364. [PMID: 35784619 PMCID: PMC9248853 DOI: 10.1162/nol_a_00016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2019] [Accepted: 05/23/2020] [Indexed: 06/15/2023]
Abstract
The extent to which articulatory information embedded in incoming speech contributes to the formation of new perceptual categories for speech sounds has been a matter of discourse for decades. It has been theorized that the acquisition of new speech sound categories requires a network of sensory and speech motor cortical areas (the "dorsal stream") to successfully integrate auditory and articulatory information. However, it is possible that these brain regions are not sensitive specifically to articulatory information, but instead are sensitive to the abstract phonological categories being learned. We tested this hypothesis by training participants over the course of several days on an articulable non-native speech contrast and acoustically matched inarticulable nonspeech analogues. After reaching comparable levels of proficiency with the two sets of stimuli, activation was measured in fMRI as participants passively listened to both sound types. Decoding of category membership for the articulable speech contrast alone revealed a series of left and right hemisphere regions outside of the dorsal stream that have previously been implicated in the emergence of non-native speech sound categories, while no regions could successfully decode the inarticulable nonspeech contrast. Although activation patterns in the left inferior frontal gyrus, the middle temporal gyrus, and the supplementary motor area provided better information for decoding articulable (speech) sounds compared to the inarticulable (sine wave) sounds, the finding that dorsal stream regions do not emerge as good decoders of the articulable contrast alone suggests that other factors, including the strength and structure of the emerging speech categories, are more likely drivers of dorsal stream activation for novel sound learning.
13
Radüntz T. The Effect of Planning, Strategy Learning, and Working Memory Capacity on Mental Workload. Sci Rep 2020; 10:7096. [PMID: 32341379 PMCID: PMC7184608 DOI: 10.1038/s41598-020-63897-6] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Accepted: 04/07/2020] [Indexed: 11/09/2022] Open
Abstract
In our modern society, planning and problem solving are crucial for handling a wide range of situations. Investigation of the experienced mental workload connected to planning, strategy learning, and working memory capacity is of particular interest for adjusting conditions according to the mental state of the individual. In our study, we examined 21 subjects during a planning and a working memory task. We applied the method of Dual Frequency Head Maps (DFHM) from the electroencephalogram for capturing mental workload objectively. We evaluated the DFHM-workload index and performance data during the learning and main phase of the planning task and linked the results to subjects' working memory capacity. The DFHM-workload index indicated that subjects with higher working memory capacity experienced a gradual decrease in mental workload during strategy learning of the planning task. However, the effect of learning on mental workload disappeared during the main phase.
Affiliation(s)
- Thea Radüntz
- Federal Institute for Occupational Safety and Health, Work and Health, Mental Health and Cognitive Capacity, Berlin, 10317, Germany.
14
Structural brain changes as a function of second language vocabulary training: Effects of learning context. Brain Cogn 2019; 134:90-102. [DOI: 10.1016/j.bandc.2018.09.004] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2017] [Revised: 06/23/2018] [Accepted: 09/13/2018] [Indexed: 11/19/2022]
15
Cheng B, Zhang X, Fan S, Zhang Y. The Role of Temporal Acoustic Exaggeration in High Variability Phonetic Training: A Behavioral and ERP Study. Front Psychol 2019; 10:1178. [PMID: 31178795 PMCID: PMC6543854 DOI: 10.3389/fpsyg.2019.01178] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/24/2019] [Accepted: 05/06/2019] [Indexed: 12/03/2022] Open
Abstract
High variability phonetic training (HVPT) has been found to be effective in helping adult learners acquire non-native phonetic contrasts. The present study investigated the role of temporal acoustic exaggeration by comparing the canonical HVPT paradigm without involving acoustic exaggeration with a modified adaptive HVPT paradigm that integrated key temporal exaggerations in infant-directed speech (IDS). Sixty native Chinese adults participated in the training of the English /i/ and /ɪ/ vowel contrast and were randomly assigned to three subject groups. Twenty were trained with the typical HVPT paradigm (the HVPT group), twenty were trained under the modified adaptive approach with acoustic exaggeration (the HVPT-E group), and twenty were in the control group. Behavioral tasks for the pre- and post-tests used natural word identification, synthetic stimuli identification, and synthetic stimuli discrimination. Mismatch negativity (MMN) responses from the HVPT-E group were also obtained to assess the training effects in within- and across-category discrimination without requiring focused attention. Like previous studies, significant generalization effects to new talkers were found in both the HVPT group and the HVPT-E group. The HVPT-E group, by contrast, showed greater improvement as reflected in larger progress in natural word identification performance. Furthermore, the HVPT-E group exhibited more native-like categorical perception based on spectral cues after training, together with corresponding training-induced changes in the MMN responses to within- and across-category differences. These data provide the initial evidence supporting the important role of temporal acoustic exaggeration with adaptive training in facilitating phonetic learning and promoting brain plasticity at the perceptual and pre-attentive neural levels.
Affiliation(s)
- Bing Cheng
- English Department & Language and Cognitive Neuroscience Lab, School of Foreign Studies, Xi’an Jiaotong University, Xi’an, China
- Xiaojuan Zhang
- English Department & Language and Cognitive Neuroscience Lab, School of Foreign Studies, Xi’an Jiaotong University, Xi’an, China
- Siying Fan
- English Department & Language and Cognitive Neuroscience Lab, School of Foreign Studies, Xi’an Jiaotong University, Xi’an, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN, United States
16
Schmitz J, Bartoli E, Maffongelli L, Fadiga L, Sebastian-Galles N, D’Ausilio A. Motor cortex compensates for lack of sensory and motor experience during auditory speech perception. Neuropsychologia 2019; 128:290-296. [DOI: 10.1016/j.neuropsychologia.2018.01.006] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/12/2017] [Revised: 12/18/2017] [Accepted: 01/05/2018] [Indexed: 10/18/2022]
17
Qi Z, Han M, Wang Y, de los Angeles C, Liu Q, Garel K, Chen ES, Whitfield-Gabrieli S, Gabrieli JD, Perrachione TK. Speech processing and plasticity in the right hemisphere predict variation in adult foreign language learning. Neuroimage 2019; 192:76-87. [DOI: 10.1016/j.neuroimage.2019.03.008] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2018] [Revised: 02/20/2019] [Accepted: 03/04/2019] [Indexed: 02/04/2023] Open
18
Abstract
Humans are born as “universal listeners.” However, over the first year, infants’ perception is shaped by native speech categories. How do these categories naturally emerge without explicit training or overt feedback? Using fMRI, we examined the neural basis of incidental sound category learning as participants played a videogame in which sound category exemplars had functional utility in guiding videogame success. Even without explicit categorization of the sounds, participants learned functionally relevant sound categories that generalized to novel exemplars when exemplars had an organized distributional structure. Critically, the striatum was engaged and functionally connected to the auditory cortex during game play, and this activity and connectivity predicted the learning outcome. These findings elucidate the neural mechanism by which humans incidentally learn “real-world” categories.
Humans are born as “universal listeners” without a bias toward any particular language. However, over the first year of life, infants’ perception is shaped by learning native speech categories. Acoustically different sounds—such as the same word produced by different speakers—come to be treated as functionally equivalent. In natural environments, these categories often emerge incidentally without overt categorization or explicit feedback. However, the neural substrates of category learning have been investigated almost exclusively using overt categorization tasks with explicit feedback about categorization decisions. Here, we examined whether the striatum, previously implicated in category learning, contributes to incidental acquisition of sound categories. In the fMRI scanner, participants played a videogame in which sound category exemplars aligned with game actions and events, allowing sound categories to incidentally support successful game play. An experimental group heard nonspeech sound exemplars drawn from coherent category spaces, whereas a control group heard acoustically similar sounds drawn from a less structured space. Although the groups exhibited similar in-game performance, generalization of sound category learning and activation of the posterior striatum were significantly greater in the experimental than control group. Moreover, the experimental group showed brain–behavior relationships related to the generalization of all categories, while in the control group these relationships were restricted to the categories with structured sound distributions. Together, these results demonstrate that the striatum, through its interactions with the left superior temporal sulcus, contributes to incidental acquisition of sound category representations emerging from naturalistic learning environments.
19
Capacities and neural mechanisms for auditory statistical learning across species. Hear Res 2019; 376:97-110. [PMID: 30797628 DOI: 10.1016/j.heares.2019.02.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/24/2018] [Revised: 01/09/2019] [Accepted: 02/06/2019] [Indexed: 11/22/2022]
Abstract
Statistical learning has been proposed as a possible mechanism by which individuals can become sensitive to the structures of language fundamental for speech perception. Since its description in human infants, statistical learning has been described in human adults and several non-human species as a general process by which animals learn about stimulus-relevant statistics. The neurobiology of statistical learning is beginning to be understood, but many questions remain about the underlying mechanisms. Why is the developing brain particularly sensitive to stimulus and environmental statistics, and what neural processes are engaged in the adult brain to enable learning from statistical regularities in the absence of external reward or instruction? This review will survey the statistical learning abilities of humans and non-human animals with a particular focus on communicative vocalizations. We discuss the neurobiological basis of statistical learning, and specifically what can be learned by exploring this process in both humans and laboratory animals. Finally, we describe advantages of studying vocal communication in rodents as a means to further our understanding of the cortical plasticity mechanisms engaged during statistical learning. We examine the use of rodents in the context of pup retrieval, which is an auditory-based and experience-dependent form of maternal behavior.
20
Klaus A, Lametti DR, Shiller DM, McAllister T. Can perceptual training alter the effect of visual biofeedback in speech-motor learning? J Acoust Soc Am 2019; 145:805. [PMID: 30823822 PMCID: PMC6374144 DOI: 10.1121/1.5089218] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/11/2018] [Revised: 01/02/2019] [Accepted: 01/15/2019] [Indexed: 06/09/2023]
Abstract
Recent work showing that a period of perceptual training can modulate the magnitude of speech-motor learning in a perturbed auditory feedback task could inform clinical interventions or second-language training strategies. The present study investigated the influence of perceptual training on a clinically and pedagogically relevant task of vocally matching a visually presented speech target using visual-acoustic biofeedback. Forty female adults aged 18-35 yr received perceptual training targeting the English /æ-ɛ/ contrast, randomly assigned to a condition that shifted the perceptual boundary toward either /æ/ or /ɛ/. Participants were then asked to produce the word head while modifying their output to match a visually presented acoustic target corresponding with a slightly higher first formant (F1, closer to /æ/). By analogy to findings from previous research, it was predicted that individuals whose boundary was shifted toward /æ/ would also show a greater magnitude of change in the visual biofeedback task. After perceptual training, the groups showed the predicted difference in perceptual boundary location, but they did not differ in their performance on the biofeedback matching task. It is proposed that the explicit versus implicit nature of the tasks used might account for the difference between this study and previous findings.
Affiliation(s)
- Adam Klaus
- Gallatin School of Individualized Study, New York University, 1 Washington Place, New York, New York 10003, USA
- Daniel R Lametti
- Department of Psychology, Acadia University, Horton Hall, 18 University Avenue, Wolfville, Nova Scotia, B4P 2R6, Canada
- Douglas M Shiller
- École d'orthophonie et d'audiologie, Université de Montréal, C.P. 6128, succursale Centre-ville, Montreal, Quebec, H3C 3J7, Canada
- Tara McAllister
- Department of Communicative Sciences and Disorders, New York University, 665 Broadway, Suite 900, New York, New York 10012, USA
21
Leong CXR, Price JM, Pitchford NJ, van Heuven WJB. High variability phonetic training in adaptive adverse conditions is rapid, effective, and sustained. PLoS One 2018; 13:e0204888. [PMID: 30300372 PMCID: PMC6177151 DOI: 10.1371/journal.pone.0204888] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2017] [Accepted: 09/17/2018] [Indexed: 11/18/2022] Open
Abstract
This paper evaluates a novel high variability phonetic training paradigm that involves presenting spoken words in adverse conditions. The effectiveness, generalizability, and longevity of this high variability phonetic training in adverse conditions were evaluated using English phoneme contrasts in three experiments with Malaysian multilinguals. Adverse conditions were created by presenting spoken words against background multi-talker babble. In Experiment 1, the adverse condition level was set at a fixed level throughout the training, and in Experiment 2 the adverse condition level was determined for each participant before training using an adaptive staircase procedure. To explore the effectiveness and sustainability of the training, phonemic discrimination ability was assessed before and immediately after training (Experiments 1 and 2) and 6 months after training (Experiment 3). Generalization of training was evaluated within and across phonemic contrasts using trained and untrained stimuli. Results revealed significant perceptual improvements after just three 20-minute training sessions, and these improvements were maintained after 6 months. The training benefits also generalized from trained to untrained stimuli. Crucially, perceptual improvements were significantly larger when the adverse conditions were adapted before each training session than when they were set at a fixed level. The training improvements observed here are markedly larger than those reported in the literature, indicating that the individualized phonetic training regime in adaptive adverse conditions (HVPT-AAC) is highly effective at improving speech perception.
Affiliation(s)
- Jessica M. Price
- School of Psychology, University of Nottingham Malaysia Campus, Semenyih, Selangor, Malaysia
22
Oliver M, Carreiras M, Paz-Alonso PM. Functional Dynamics of Dorsal and Ventral Reading Networks in Bilinguals. Cereb Cortex 2018; 27:5431-5443. [PMID: 28122808 DOI: 10.1093/cercor/bhw310] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2015] [Accepted: 09/19/2016] [Indexed: 11/13/2022] Open
Abstract
In today's world, bilingualism is increasingly common. However, it is still unclear how left-lateralized dorsal and ventral reading networks are tuned to reading in proficient second-language learners. Here, we investigated differences in functional regional activation and connectivity as a function of L1 and L2 reading, L2 orthographic depth, and task demands. Thirty-seven late bilinguals with the same L1 and either an opaque or transparent L2 performed perceptual and semantic reading tasks in L1 and L2 during functional magnetic resonance imaging (fMRI) scanning. Results revealed stronger regional recruitment for L2 versus L1 reading and stronger connectivity within the dorsal stream during L1 versus L2 reading. Differences in orthographic depth were associated with a segregated profile of left ventral occipitotemporal (vOT) coactivation with dorsal regions for the transparent L2 group and with ventral regions for the opaque L2 group. Finally, semantic versus perceptual demands modulated left vOT engagement, supporting the interactive account of the contribution of vOT to reading, and were associated with stronger coactivation within the ventral network. Our findings support a division of labor between ventral and dorsal reading networks, elucidating the critical role of the language used to read, L2 orthographic depth, and task demands on the functional dynamics of bilingual reading.
Affiliation(s)
- Myriam Oliver
- BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastian, 2009 Gipuzkoa, Spain
- Manuel Carreiras
- BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastian, 2009 Gipuzkoa, Spain; IKERBASQUE, Basque Foundation for Science, Bilbao, 48013 Bizkaia, Spain; Department of Basque Language and Communication, EHU/UPV, Bilbao, 48940 Bizkaia, Spain
- Pedro M Paz-Alonso
- BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastian, 2009 Gipuzkoa, Spain
23
Abstract
Although the parietal lobe was considered by many of the earliest investigators of disordered language to be a major component of the neural systems instantiating language, most views of the anatomic substrate of language emphasize the role of temporal and frontal lobes in language processing. We review evidence from lesion studies as well as functional neuroimaging, demonstrating that the left parietal lobe is also crucial for several aspects of language. First, we argue that the parietal lobe plays a major role in semantic processing, particularly for "thematic" relationships in which information from multiple sensory and motor domains is integrated. Additionally, we review a number of accounts that emphasize the role of the left parietal lobe in phonologic processing. Although the accounts differ somewhat with respect to the nature of the linguistic computations subserved by the parietal lobe, they share the view that the parietal lobe is essential for the processes by which sound-based representations are transcoded into a format that can drive action systems. We suggest that investigations of the linguistic capacities of the parietal lobe constrained by the understanding of the parietal lobe in action and multimodal sensory integration may serve to enhance not only our understanding of language, but also the relationship between language and more basic brain functions.
Affiliation(s)
- H Branch Coslett
- Department of Neurology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, United States.
- Myrna F Schwartz
- Moss Rehabilitation Research Institute, Elkins Park, PA, United States
24
Cerebellar tDCS Modulates Neural Circuits during Semantic Prediction: A Combined tDCS-fMRI Study. J Neurosci 2017; 37:1604-1613. [PMID: 28069925 DOI: 10.1523/jneurosci.2818-16.2017] [Citation(s) in RCA: 78] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2016] [Revised: 12/22/2016] [Accepted: 01/03/2017] [Indexed: 01/09/2023] Open
Abstract
It has been proposed that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. In language, semantic prediction speeds speech production and comprehension. Right cerebellar lobules VI and VII (including Crus I/II) are engaged during a variety of language processes and are functionally connected with cerebral cortical language networks. Further, right posterolateral cerebellar neuromodulation modifies behavior during predictive language processing. These data are consistent with a role for the cerebellum in semantic processing and semantic prediction. We combined transcranial direct current stimulation (tDCS) and fMRI to assess the behavioral and neural consequences of cerebellar tDCS during a sentence completion task. Task-based and resting-state fMRI data were acquired in healthy human adults (n = 32; μ = 23.1 years) both before and after 20 min of 1.5 mA anodal (n = 18) or sham (n = 14) tDCS applied to the right posterolateral cerebellum. In the sentence completion task, the first four words of the sentence modulated the predictability of the final target word. In some sentences, the preceding context strongly predicted the target word, whereas other sentences were nonpredictive. Completion of predictive sentences increased activation in right Crus I/II of the cerebellum. Relative to sham tDCS, anodal tDCS increased activation in right Crus I/II during semantic prediction and enhanced resting-state functional connectivity between hubs of the reading/language networks. These results are consistent with a role for the right posterolateral cerebellum beyond motor aspects of language, and suggest that cerebellar internal models of linguistic stimuli support semantic prediction.
SIGNIFICANCE STATEMENT: Cerebellar involvement in language tasks and language networks is now well established, yet the specific cerebellar contribution to language processing remains unclear. It is thought that the cerebellum acquires internal models of mental processes that enable prediction, allowing for the optimization of behavior. Here we combined neuroimaging and neuromodulation to provide evidence that the cerebellum is specifically involved in semantic prediction during sentence processing. We found that activation within right Crus I/II was enhanced when semantic predictions were made, and we show that modulation of this region with transcranial direct current stimulation alters both activation patterns and functional connectivity within whole-brain language networks. For the first time, these data show that cerebellar neuromodulation impacts activation patterns specifically during predictive language processing.
25
Nichols ES, Joanisse MF. Functional activity and white matter microstructure reveal the independent effects of age of acquisition and proficiency on second-language learning. Neuroimage 2016; 143:15-25. [DOI: 10.1016/j.neuroimage.2016.08.053] [Citation(s) in RCA: 43] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2016] [Revised: 08/23/2016] [Accepted: 08/24/2016] [Indexed: 10/21/2022] Open
26
Lametti DR, Oostwoud Wijdenes L, Bonaiuto J, Bestmann S, Rothwell JC. Cerebellar tDCS dissociates the timing of perceptual decisions from perceptual change in speech. J Neurophysiol 2016; 116:2023-2032. [PMID: 27489368 PMCID: PMC5102311 DOI: 10.1152/jn.00433.2016] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2016] [Accepted: 08/01/2016] [Indexed: 11/22/2022] Open
Abstract
Neuroimaging studies suggest that the cerebellum might play a role in both speech perception and speech perceptual learning. However, it remains unclear what this role is: does the cerebellum help shape the perceptual decision, or does it contribute to the timing of perceptual decisions? To test this, we used transcranial direct current stimulation (tDCS) in combination with a speech perception task. Participants experienced a series of speech perceptual tests designed to measure and then manipulate (via training) their perception of a phonetic contrast. One group received cerebellar tDCS during speech perceptual learning, and a different group received sham tDCS during the same task. Both groups showed similar learning-related changes in speech perception that transferred to a different phonetic contrast. For both trained and untrained speech perceptual decisions, cerebellar tDCS significantly increased the time it took participants to indicate their decisions with a keyboard press. By analyzing perceptual responses made by both hands, we present evidence that cerebellar tDCS disrupted the timing of perceptual decisions, while leaving the eventual decision unaltered. In support of this conclusion, we use the drift diffusion model to decompose the data into processes that determine the outcome of perceptual decision-making and those that do not. The modeling suggests that cerebellar tDCS disrupted processes unrelated to decision-making. Taken together, the empirical data and modeling demonstrate that right cerebellar tDCS dissociates the timing of perceptual decisions from perceptual change. The results provide initial evidence in healthy humans that the cerebellum critically contributes to speech timing in the perceptual domain.
Affiliation(s)
- Daniel R Lametti
- Department of Experimental Psychology, The University of Oxford, Oxford, United Kingdom;
- Sobell Department of Motor Neuroscience and Movement Disorders, UCL Institute of Neurology, University College London, London, United Kingdom
- James Bonaiuto
- Sobell Department of Motor Neuroscience and Movement Disorders, UCL Institute of Neurology, University College London, London, United Kingdom
- Sven Bestmann
- Sobell Department of Motor Neuroscience and Movement Disorders, UCL Institute of Neurology, University College London, London, United Kingdom
- John C Rothwell
- Sobell Department of Motor Neuroscience and Movement Disorders, UCL Institute of Neurology, University College London, London, United Kingdom
27
Kuhl PK, Stevenson J, Corrigan NM, van den Bosch JJF, Can DD, Richards T. Neuroimaging of the bilingual brain: Structural brain correlates of listening and speaking in a second language. Brain Lang 2016; 162:1-9. [PMID: 27490686 DOI: 10.1016/j.bandl.2016.07.004] [Citation(s) in RCA: 42] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2016] [Revised: 07/12/2016] [Accepted: 07/12/2016] [Indexed: 06/06/2023]
Abstract
Diffusion tensor imaging was used to compare white matter structure between American monolingual and Spanish-English bilingual adults living in the United States. In the bilingual group, relationships between white matter structure and naturalistic immersive experience in listening to and speaking English were additionally explored. White matter structural differences between groups were found to be bilateral and widespread. In the bilingual group, experience in listening to English was more robustly correlated with decreases in radial and mean diffusivity in anterior white matter regions of the left hemisphere, whereas experience in speaking English was more robustly correlated with increases in fractional anisotropy in more posterior left hemisphere white matter regions. The findings suggest that (a) foreign language immersion induces neuroplasticity in the adult brain, (b) the degree of alteration is proportional to language experience, and (c) the modes of immersive language experience have more robust effects on different brain regions and on different structural features.
Affiliation(s)
- Patricia K Kuhl
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195, USA.
- Jeff Stevenson
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195, USA.
- Neva M Corrigan
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195, USA; Department of Radiology, University of Washington, Seattle, WA 98195, USA.
- Dilara Deniz Can
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195, USA.
- Todd Richards
- Institute for Learning & Brain Sciences, University of Washington, Seattle, WA 98195, USA; Department of Radiology, University of Washington, Seattle, WA 98195, USA.
28
Individual language experience modulates rapid formation of cortical memory circuits for novel words. Sci Rep 2016; 6:30227. [PMID: 27444206 PMCID: PMC4957205 DOI: 10.1038/srep30227] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/19/2016] [Accepted: 07/01/2016] [Indexed: 11/08/2022] Open
Abstract
Mastering multiple languages is an increasingly important ability in the modern world; furthermore, multilingualism may affect human learning abilities. Here, we test how the brain's capacity to rapidly form new representations for spoken words is affected by prior individual experience in non-native language acquisition. Formation of new word memory traces is reflected in a neurophysiological response increase during a short exposure to a novel lexicon. We therefore recorded changes in electrophysiological responses to phonologically native and non-native novel word-forms during a perceptual learning session, in which novel stimuli were repetitively presented to healthy adults in either ignore or attend conditions. We found that a larger number of previously acquired languages and an earlier average age of acquisition (AoA) predicted a greater response increase to novel non-native word-forms. This suggests that early and extensive language experience is associated with greater neural flexibility for acquiring novel words with unfamiliar phonology. Conversely, later AoA was associated with a stronger response increase for phonologically native novel word-forms, indicating better tuning of neural linguistic circuits to native phonology. The results suggest that individual language experience has a strong effect on the neural mechanisms of word learning, and that it interacts with the phonological familiarity of the novel lexicon.
29
Hertrich I, Dietrich S, Ackermann H. The role of the supplementary motor area for speech and language processing. Neurosci Biobehav Rev 2016; 68:602-610. [PMID: 27343998] [DOI: 10.1016/j.neubiorev.2016.06.030]
Abstract
Apart from its function in speech motor control, the supplementary motor area (SMA) has largely been neglected in models of speech and language processing in the brain. The aim of this review paper is to summarize more recent work suggesting that the SMA has various superordinate control functions during speech communication and language reception, which are particularly relevant in cases of increased task demands. The SMA is subdivided into a posterior region serving predominantly motor-related functions (SMA proper) and an anterior part (pre-SMA) involved in higher-order cognitive control mechanisms. In analogy to the motor triggering functions of the SMA proper, the pre-SMA seems to manage procedural aspects of cognitive processing. These latter functions comprise, among others, attentional switching, ambiguity resolution, context integration, and coordination between procedural and declarative memory structures. Regarding language processing, this refers, for example, to the use of inner speech mechanisms during language encoding, but also to lexical disambiguation, syntax and prosody integration, and context-tracking.
Affiliation(s)
- Ingo Hertrich, Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
- Susanne Dietrich, Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
- Hermann Ackermann, Department of Neurology and Stroke, Hertie Institute for Clinical Brain Research, University of Tübingen, Germany.
30
Archila-Suerte P, Bunta F, Hernandez AE. Speech sound learning depends on individuals' ability, not just experience. The International Journal of Bilingualism: Cross-Disciplinary, Cross-Linguistic Studies of Language Behavior 2016; 20:231-253. [PMID: 30381786] [PMCID: PMC6205517] [DOI: 10.1177/1367006914552206]
Abstract
AIMS The goal of this study was to investigate whether phonetic experience with two languages facilitates the learning of novel speech sounds or whether general perceptual abilities independent of bilingualism play a role in this learning. METHOD The underlying neural mechanisms involved in novel speech sound learning were observed in groups of English monolinguals (n = 20), early Spanish-English bilinguals (n = 24), and experimentally derived subgroups of individuals with advanced ability to learn novel speech sound contrasts (ALs, n = 28) and individuals with non-advanced ability to learn novel speech sound contrasts (non-ALs, n = 16). Subjects participated in four consecutive sessions of phonetic training in which they listened to novel speech sounds embedded in Hungarian pseudowords. Participants completed two fMRI sessions, one before training and another after training. While in the scanner, participants passively listened to the speech stimuli presented during training. A repeated-measures behavioral analysis and an ANOVA for the fMRI data were conducted to investigate learning after training. RESULTS AND CONCLUSIONS The results showed that bilinguals did not significantly differ from monolinguals in the learning of novel sounds behaviorally. Instead, the behavioral results revealed that regardless of language group (monolingual or bilingual), ALs were better at discriminating pseudowords throughout the training than non-ALs. Neurally, region of interest (ROI) analysis showed increased activity in the superior temporal gyrus (STG) bilaterally in ALs relative to non-ALs after training. Bilinguals also showed greater STG activity than monolinguals. Extracted values from ROIs entered into a 2×2 MANOVA showed a main effect of performance, demonstrating that individual ability exerts a significant effect on learning novel speech sounds. In fact, advanced ability to learn novel speech sound contrasts appears to play a more significant role in speech sound learning than experience with two phonological systems.
Affiliation(s)
- Ferenc Bunta, Department of Communication Sciences and Disorders, University of Houston, USA.
31
Abstract
Dual-system models of visual category learning posit the existence of an explicit, hypothesis-testing reflective system, as well as an implicit, procedural-based reflexive system. The reflective and reflexive learning systems are competitive and neurally dissociable. Relatively little is known about the role of these domain-general learning systems in speech category learning. Given the multidimensional, redundant, and variable nature of acoustic cues in speech categories, our working hypothesis is that speech categories are learned reflexively. To this end, we examined the relative contribution of these learning systems to speech learning in adults. Native English speakers learned to categorize Mandarin tone categories over 480 trials. The training protocol involved trial-by-trial feedback and multiple talkers. Experiments 1 and 2 examined the effect of manipulating the timing (immediate vs. delayed) and information content (full vs. minimal) of feedback. Dual-system models of visual category learning predict that delayed feedback and providing rich, informational feedback enhance reflective learning, while immediate and minimally informative feedback enhance reflexive learning. Across the two experiments, our results show that feedback manipulations that targeted reflexive learning enhanced category learning success. In Experiment 3, we examined the role of trial-to-trial talker information (mixed vs. blocked presentation) on speech category learning success. We hypothesized that the mixed condition would enhance reflexive learning by not allowing an association between talker-related acoustic cues and speech categories. Our results show that the mixed talker condition led to relatively greater accuracies. Our experiments demonstrate that speech categories are optimally learned by training methods that target the reflexive learning system.
32
Lau C, Pienkowski M, Zhang JW, McPherson B, Wu EX. Chronic exposure to broadband noise at moderate sound pressure levels spatially shifts tone-evoked responses in the rat auditory midbrain. Neuroimage 2015; 122:44-51. [DOI: 10.1016/j.neuroimage.2015.07.065]
33
Ghazi-Saidi L, Dash T, Ansaldo AI. How native-like can you possibly get: fMRI evidence for processing accent. Front Hum Neurosci 2015; 9:587. [PMID: 26578931] [PMCID: PMC4626569] [DOI: 10.3389/fnhum.2015.00587]
Abstract
INTRODUCTION A native-like accent, if ever attained, is achieved late in the learning process. Resemblance between the L2 and the mother tongue can facilitate L2 learning. In particular, cognates (phonologically and semantically similar words across languages) offer the opportunity to examine the issue of foreign accent in a unique manner. METHODS Twelve Spanish-speaking (L1) adults learnt French (L2) cognates and practiced their native-like pronunciation by means of a computerized method. After consolidation, they were tested on L1 and L2 oral picture-naming during fMRI scanning. RESULTS AND DISCUSSION The results of the present study show that there is a specific impact of accent on brain activation, even when L2 words are cognates and belong to a pair of closely related languages. The results point to the insula as a key component of accent processing, which is in line with reports from patients with foreign accent syndrome following damage to the insula (e.g., Katz et al., 2012; Moreno-Torres et al., 2013; Tomasino et al., 2013) and from healthy L2 learners (Chee et al., 2004). Accordingly, the left insula has been consistently related to the integration of attentional and working memory abilities, together with fine-tuning of motor programming to achieve optimal articulation.
Affiliation(s)
- Ladan Ghazi-Saidi, Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, University of Montreal, Montreal, QC, Canada.
- Tanya Dash, Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, University of Montreal, Montreal, QC, Canada.
- Ana I. Ansaldo, Centre de Recherche de l'Institut Universitaire de Gériatrie de Montréal, University of Montreal, Montreal, QC, Canada; Faculté de Médecine, Université de Montréal, Montreal, QC, Canada.
34
Chen Z, Wong FCK, Jones JA, Li W, Liu P, Chen X, Liu H. Transfer effect of speech-sound learning on auditory-motor processing of perceived vocal pitch errors. Sci Rep 2015; 5:13134. [PMID: 26278337] [PMCID: PMC4538572] [DOI: 10.1038/srep13134]
Abstract
Speech perception and production are intimately linked. There is evidence that speech motor learning results in changes to auditory processing of speech. Whether speech motor control benefits from perceptual learning in speech, however, remains unclear. This event-related potential study investigated whether speech-sound learning can modulate the processing of feedback errors during vocal pitch regulation. Mandarin speakers were trained to perceive five Thai lexical tones while learning to associate pictures with spoken words over 5 days. Before and after training, participants produced sustained vowel sounds while they heard their vocal pitch feedback unexpectedly perturbed. Compared to the pre-training session, the magnitude of vocal compensation significantly decreased for the control group but remained consistent for the trained group at the post-training session. However, the trained group had smaller and faster N1 responses to pitch perturbations and exhibited enhanced P2 responses that correlated significantly with their learning performance. These findings indicate that the cortical processing of vocal pitch regulation can be shaped by learning new speech-sound associations, suggesting that perceptual learning in speech can transfer to facilitate the neural mechanisms underlying the online monitoring of auditory feedback during vocal production.
Affiliation(s)
- Zhaocong Chen, Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China; Department of Rehabilitation Medicine, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510630, China.
- Francis C K Wong, Division of Linguistics and Multilingual Studies, School of Humanities and Social Sciences, Nanyang Technological University, 14 Nanyang Drive, HSS-03-49, 637332, Singapore.
- Jeffery A Jones, Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, Ontario, N2L 3C5, Canada.
- Weifeng Li, Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China.
- Peng Liu, Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China.
- Xi Chen, Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China.
- Hanjun Liu, Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, 510080, China.
35
Engineer CT, Rahebi KC, Buell EP, Fink MK, Kilgard MP. Speech training alters consonant and vowel responses in multiple auditory cortex fields. Behav Brain Res 2015; 287:256-64. [PMID: 25827927] [DOI: 10.1016/j.bbr.2015.03.044]
Abstract
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech-evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in the anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in the ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel responses may result from the greater paired-pulse depression, expanded low-frequency tuning, reduced frequency selectivity, and lower tone thresholds that occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination.
Affiliation(s)
- Crystal T Engineer, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States.
- Kimiya C Rahebi, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States.
- Elizabeth P Buell, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States.
- Melyssa K Fink, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States.
- Michael P Kilgard, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States.
36
Abstract
All spoken languages express words by sound patterns, and certain patterns (e.g., blog) are systematically preferred to others (e.g., lbog). What principles account for such preferences: does the language system encode abstract rules banning syllables like lbog, or does their dislike reflect the increased motor demands associated with speech production? More generally, we ask whether linguistic knowledge is fully embodied or whether some linguistic principles could potentially be abstract. To address this question, here we gauge the sensitivity of English speakers to the putative universal syllable hierarchy (e.g., blif ≻ bnif ≻ bdif ≻ lbif) while they undergo transcranial magnetic stimulation (TMS) over the cortical motor representation of the left orbicularis oris muscle. If syllable preferences reflect motor simulation, then worse-formed syllables (e.g., lbif) should (i) elicit more errors; (ii) engage motor brain areas more strongly; and (iii) elicit stronger effects of TMS on these motor regions. In line with the motor account, we found that repetitive TMS pulses impaired participants' global sensitivity to the number of syllables, and functional MRI confirmed that the cortical stimulation site was sensitive to the syllable hierarchy. Contrary to the motor account, however, ill-formed syllables were least likely to engage the lip sensorimotor area and were least impaired by TMS. The results suggest that speech perception automatically triggers motor action, but this effect is not causally linked to the computation of linguistic structure. We conclude that the language and motor systems are intimately linked, yet distinct. Language is designed to optimize motor action, but its knowledge includes principles that are disembodied and potentially abstract.
37
Lau C, Zhang JW, McPherson B, Pienkowski M, Wu EX. Long-term, passive exposure to non-traumatic acoustic noise induces neural adaptation in the adult rat medial geniculate body and auditory cortex. Neuroimage 2015; 107:1-9. [DOI: 10.1016/j.neuroimage.2014.11.048]
38
Engineer CT, Engineer ND, Riley JR, Seale JD, Kilgard MP. Pairing speech sounds with vagus nerve stimulation drives stimulus-specific cortical plasticity. Brain Stimul 2015; 8:637-44. [PMID: 25732785] [DOI: 10.1016/j.brs.2015.01.408]
Abstract
BACKGROUND Individuals with communication disorders, such as aphasia, exhibit weak auditory cortex responses to speech sounds and language impairments. Previous studies have demonstrated that pairing vagus nerve stimulation (VNS) with tones or tone trains can enhance both the spectral and temporal processing of sounds in auditory cortex, and can be used to reverse pathological primary auditory cortex (A1) plasticity in a rodent model of chronic tinnitus. OBJECTIVE/HYPOTHESIS We predicted that pairing VNS with speech sounds would strengthen the A1 response to the paired speech sounds. METHODS The speech sounds 'rad' and 'lad' were paired with VNS three hundred times per day for twenty days. A1 responses to both paired and novel speech sounds were recorded 24 h after the last VNS pairing session in anesthetized rats. Response strength, latency and neurometric decoding were compared between VNS speech paired and control rats. RESULTS Our results show that VNS paired with speech sounds strengthened the auditory cortex response to the paired sounds, but did not strengthen the amplitude of the response to novel speech sounds. Responses to the paired sounds were faster and less variable in VNS speech paired rats compared to control rats. Neural plasticity that was specific to the frequency, intensity, and temporal characteristics of the paired speech sounds resulted in enhanced neural detection. CONCLUSION VNS speech sound pairing provides a novel method to enhance speech sound processing in the central auditory system. Delivery of VNS during speech therapy could improve outcomes in individuals with receptive language deficits.
Affiliation(s)
- Crystal T Engineer, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA.
- Navzer D Engineer, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA; MicroTransponder Inc., 2802 Flintrock Trace Suite 225, Austin, TX 78738, USA.
- Jonathan R Riley, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA.
- Jonathan D Seale, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA.
- Michael P Kilgard, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road GR41, Richardson, TX 75080, USA; Texas Biomedical Device Center, The University of Texas at Dallas, 800 West Campbell Road EC39, Richardson, TX 75080, USA.
39
Callan D, Callan A, Jones JA. Speech motor brain regions are differentially recruited during perception of native and foreign-accented phonemes for first and second language listeners. Front Neurosci 2014; 8:275. [PMID: 25232302] [PMCID: PMC4153045] [DOI: 10.3389/fnins.2014.00275]
Abstract
Brain imaging studies indicate that speech motor areas are recruited for auditory speech perception, especially when intelligibility is low due to environmental noise or when speech is accented. The purpose of the present study was to determine the relative contribution of brain regions to the processing of speech containing phonetic categories from one's own language, speech with accented samples of one's native phonetic categories, and speech with unfamiliar phonetic categories. To that end, native English and Japanese speakers identified the speech sounds /r/ and /l/ that were produced by native English speakers (unaccented) and Japanese speakers (foreign-accented) while functional magnetic resonance imaging measured their brain activity. For native English speakers, the Japanese-accented speech was more difficult to categorize than the unaccented English speech. In contrast, Japanese speakers have difficulty distinguishing between /r/ and /l/, so both the Japanese-accented and unaccented English speech were difficult to categorize. Brain regions involved with listening to foreign-accented productions of a first language included primarily the right cerebellum, the left ventral inferior premotor cortex (PMvi), and Broca's area. Brain regions most involved with listening to a second-language phonetic contrast (foreign-accented and unaccented productions) also included the left PMvi and the right cerebellum. Additionally, increased activity was observed in the right PMvi, the left and right ventral superior premotor cortex (PMvs), and the left cerebellum. These results support a role for speech motor regions during the perception of foreign-accented native speech and the perception of difficult second-language phonetic contrasts.
Affiliation(s)
- Daniel Callan, Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka University, Osaka, Japan; Multisensory Cognition and Computation Laboratory, Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan.
- Akiko Callan, Center for Information and Neural Networks, National Institute of Information and Communications Technology, Osaka University, Osaka, Japan; Multisensory Cognition and Computation Laboratory, Universal Communication Research Institute, National Institute of Information and Communications Technology, Kyoto, Japan.
- Jeffery A Jones, Laurier Centre for Cognitive Neuroscience and Department of Psychology, Wilfrid Laurier University, Waterloo, ON, Canada.
40
Maddox WT, Chandrasekaran B, Smayda K, Yi HG, Koslov S, Beevers CG. Elevated depressive symptoms enhance reflexive but not reflective auditory category learning. Cortex 2014; 58:186-98. [PMID: 25041936] [PMCID: PMC4130789] [DOI: 10.1016/j.cortex.2014.06.013]
Abstract
In vision, an extensive literature supports the existence of competitive dual-processing systems of category learning that are grounded in neuroscience and are partially dissociable. The reflective system is prefrontally mediated and uses working memory and executive attention to develop and test rules for classifying in an explicit fashion. The reflexive system is striatally mediated and operates by implicitly associating perception with actions that lead to reinforcement. Although categorization is fundamental to auditory processing, little is known about the learning systems that mediate auditory categorization, and even less is known about the effects of individual differences in the relative efficiency of the two learning systems. Previous studies have shown that individuals with elevated depressive symptoms show deficits in reflective processing. We exploit this finding to test critical predictions of the dual-learning-systems model in audition. Specifically, we examine the extent to which the two systems are dissociable and competitive. We predicted that elevated depressive symptoms would lead to reflective-optimal learning deficits but reflexive-optimal learning advantages. Because natural speech category learning is reflexive in nature, we predicted that elevated depressive symptoms would lead to superior speech learning. In support of our predictions, individuals with elevated depressive symptoms showed a deficit in reflective-optimal auditory category learning but an advantage in reflexive-optimal auditory category learning. In addition, individuals with elevated depressive symptoms showed an advantage in learning a non-native speech category structure. Computational modeling suggested that this advantage was due to faster, more accurate, and more frequent use of reflexive category-learning strategies in individuals with elevated depressive symptoms. The implications of this work for the dual-process approach to auditory learning and depression are discussed.
Affiliation(s)
- Han-Gyol Yi, Department of Communication Sciences and Disorders, Austin, TX, 78712, USA.
- Seth Koslov, Department of Psychology, Austin, TX, 78712, USA.
41
Myers EB. Emergence of category-level sensitivities in non-native speech sound learning. Front Neurosci 2014; 8:238. [PMID: 25152708] [PMCID: PMC4125857] [DOI: 10.3389/fnins.2014.00238]
Abstract
Over the course of development, speech sounds that are contrastive in one's native language tend to become perceived categorically: that is, listeners are unaware of variation within phonetic categories while showing excellent sensitivity to speech sounds that span linguistically meaningful phonetic category boundaries. The end stage of this developmental process is that the perceptual systems that handle acoustic-phonetic information show special tuning to native language contrasts, and as such, category-level information appears to be present at even fairly low levels of the neural processing stream. Research on adults acquiring non-native speech categories offers an avenue for investigating the interplay of category-level information and perceptual sensitivities to these sounds as speech categories emerge. In particular, one can observe the neural changes that unfold as listeners learn not only to perceive acoustic distinctions that mark non-native speech sound contrasts, but also to map these distinctions onto category-level representations. An emergent literature on the neural basis of novel and non-native speech sound learning offers new insight into this question. In this review, I will examine this literature in order to answer two key questions. First, where in the neural pathway does sensitivity to category-level phonetic information first emerge over the trajectory of speech sound learning? Second, how do frontal and temporal brain areas work in concert over the course of non-native speech sound learning? Finally, in the context of this literature I will describe a model of speech sound learning in which rapidly-adapting access to categorical information in the frontal lobes modulates the sensitivity of stable, slowly-adapting responses in the temporal lobes.
Affiliation(s)
- Emily B Myers, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT, USA; Department of Psychology, University of Connecticut, Storrs, CT, USA; Haskins Laboratories, New Haven, CT, USA.
42
Lim SJ, Fiez JA, Holt LL. How may the basal ganglia contribute to auditory categorization and speech perception? Front Neurosci 2014; 8:230. [PMID: 25136291] [PMCID: PMC4117994] [DOI: 10.3389/fnins.2014.00230]
Abstract
Listeners must accomplish two complementary perceptual feats in extracting a message from speech. They must discriminate linguistically-relevant acoustic variability and generalize across irrelevant variability. Said another way, they must categorize speech. Since the mapping of acoustic variability is language-specific, these categories must be learned from experience. Thus, understanding how, in general, the auditory system acquires and represents categories can inform us about the toolbox of mechanisms available to speech perception. This perspective invites consideration of findings from cognitive neuroscience literatures outside of the speech domain as a means of constraining models of speech perception. Although neurobiological models of speech perception have mainly focused on cerebral cortex, research outside the speech domain is consistent with the possibility of significant subcortical contributions in category learning. Here, we review the functional role of one such structure, the basal ganglia. We examine research from animal electrophysiology, human neuroimaging, and behavior to consider characteristics of basal ganglia processing that may be advantageous for speech category learning. We also present emerging evidence for a direct role for basal ganglia in learning auditory categories in a complex, naturalistic task intended to model the incidental manner in which speech categories are acquired. To conclude, we highlight new research questions that arise in incorporating the broader neuroscience research literature in modeling speech perception, and suggest how understanding contributions of the basal ganglia can inform attempts to optimize training protocols for learning non-native speech categories in adulthood.
Affiliation(s)
- Sung-Joo Lim
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Neuroscience, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Julie A Fiez
- Department of Neuroscience, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neuroscience, Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Lori L Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Neuroscience, Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neuroscience, Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
|
43
|
Engineer CT, Perez CA, Carraway RS, Chang KQ, Roland JL, Kilgard MP. Speech training alters tone frequency tuning in rat primary auditory cortex. Behav Brain Res 2014; 258:166-78. [PMID: 24344364 DOI: 10.1016/j.bbr.2013.10.021] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Previous studies in both humans and animals have documented improved performance following discrimination training. This enhanced performance is often associated with cortical response changes. In this study, we tested the hypothesis that long-term speech training on multiple tasks can improve primary auditory cortex (A1) responses compared to rats trained on a single speech discrimination task or experimentally naïve rats. Specifically, we compared the percent of A1 responding to trained sounds, the responses to both trained and untrained sounds, receptive field properties of A1 neurons, and the neural discrimination of pairs of speech sounds in speech trained and naïve rats. Speech training led to accurate discrimination of consonant and vowel sounds, but did not enhance A1 response strength or the neural discrimination of these sounds. Speech training altered tone responses in rats trained on six speech discrimination tasks but not in rats trained on a single speech discrimination task. Extensive speech training resulted in broader frequency tuning, shorter onset latencies, a decreased driven response to tones, and caused a shift in the frequency map to favor tones in the range where speech sounds are the loudest. Both the number of trained tasks and the number of days of training strongly predict the percent of A1 responding to a low frequency tone. Rats trained on a single speech discrimination task performed less accurately than rats trained on multiple tasks and did not exhibit A1 response changes. Our results indicate that extensive speech training can reorganize the A1 frequency map, which may have downstream consequences on speech sound processing.
|
44
|
Abstract
Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults were also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.
|
45
|
Lametti DR, Krol SA, Shiller DM, Ostry DJ. Brief periods of auditory perceptual training can determine the sensory targets of speech motor learning. Psychol Sci 2014; 25:1325-36. [PMID: 24815610 DOI: 10.1177/0956797614529978] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2013] [Accepted: 03/07/2014] [Indexed: 11/15/2022] Open
Abstract
The perception of speech is notably malleable in adults, yet alterations in perception seem to have little impact on speech production. However, we hypothesized that speech perceptual training might immediately influence speech motor learning. To test this, we paired a speech perceptual-training task with a speech motor-learning task. Subjects performed a series of perceptual tests designed to measure and then manipulate the perceptual distinction between the words head and had. Subjects then produced head with the sound of the vowel altered in real time so that they heard themselves through headphones producing a word that sounded more like had. In support of our hypothesis, the amount of motor learning in response to the voice alterations depended on the perceptual boundary acquired through perceptual training. The studies show that plasticity in adults' speech perception can have immediate consequences for speech production in the context of speech learning.
Affiliation(s)
- Daniel R Lametti
- Department of Psychology, McGill University; Institute of Neurology, University College London
- Douglas M Shiller
- Research Center, Sainte-Justine Hospital, Université de Montréal; School of Speech Pathology and Audiology, Université de Montréal; Centre for Research on Brain, Language and Music, McGill University
- David J Ostry
- Department of Psychology, McGill University; Haskins Laboratories, New Haven, Connecticut
|
46
|
Hertz U, Amedi A. Flexibility and Stability in Sensory Processing Revealed Using Visual-to-Auditory Sensory Substitution. Cereb Cortex 2014; 25:2049-64. [PMID: 24518756 PMCID: PMC4494022 DOI: 10.1093/cercor/bhu010] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
The classical view of sensory processing involves independent processing in sensory cortices and multisensory integration in associative areas. This hierarchical structure has been challenged by evidence of multisensory responses in sensory areas and dynamic weighting of sensory inputs in associative areas, thus far reported independently. Here, we used a visual-to-auditory sensory substitution algorithm (SSA) to manipulate the information conveyed by sensory inputs while keeping the stimuli intact. During scan sessions before and after SSA learning, subjects were presented with visual images and auditory soundscapes. The findings reveal two dynamic processes. First, crossmodal attenuation of sensory cortices changed direction after SSA learning, from visual attenuation of the auditory cortex to auditory attenuation of the visual cortex. Second, associative areas shifted their sensory response profile from responding most strongly to visual input to responding most strongly to auditory input. The interaction between these phenomena may play an important role in multisensory processing. Consistent features were also found in the sensory dominance of sensory areas and in audiovisual convergence in the associative area middle temporal gyrus. These two factors allow for both stability and fast, dynamic tuning of the system when required.
Affiliation(s)
- Uri Hertz
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem 91220, Israel; Interdisciplinary Center for Neural Computation, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem 91905, Israel
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem 91220, Israel; Interdisciplinary Center for Neural Computation, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem 91905, Israel
|
47
|
Di X, Rypma B, Biswal BB. Correspondence of executive function related functional and anatomical alterations in aging brain. Prog Neuropsychopharmacol Biol Psychiatry 2014; 48:41-50. [PMID: 24036319 PMCID: PMC3870052 DOI: 10.1016/j.pnpbp.2013.09.001] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/25/2013] [Revised: 08/19/2013] [Accepted: 09/03/2013] [Indexed: 11/28/2022]
Abstract
Neurocognitive aging studies have focused on age-related changes in neural activity or neural structure but few studies have focused on relationships between the two. The present study quantitatively reviewed 24 studies of age-related changes in fMRI activation across a broad spectrum of executive function tasks using activation likelihood estimation (ALE) and 22 separate studies of age-related changes in gray matter using voxel-based morphometry (VBM). Conjunction analyses between functional and structural alteration maps were constructed. Overlaps were only observed in the conjunction of dorsolateral prefrontal cortex (DLPFC) gray matter reduction and functional hyperactivation but not hypoactivation. It was not evident that the conjunctions between gray matter and activation were related to task performance. Theoretical implications of these results are discussed.
Affiliation(s)
- Xin Di
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ 07101, USA
- Bart Rypma
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, TX 75080, USA
- Bharat B. Biswal
- Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ 07101, USA
|
48
|
White EJ, Hutka SA, Williams LJ, Moreno S. Learning, neural plasticity and sensitive periods: implications for language acquisition, music training and transfer across the lifespan. Front Syst Neurosci 2013; 7:90. [PMID: 24312022 PMCID: PMC3834520 DOI: 10.3389/fnsys.2013.00090] [Citation(s) in RCA: 49] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2013] [Accepted: 10/29/2013] [Indexed: 01/27/2023] Open
Abstract
Sensitive periods in human development have often been proposed to explain age-related differences in the attainment of a number of skills, such as a second language (L2) and musical expertise. It is difficult to reconcile the negative consequence this traditional view entails for learning after a sensitive period with our current understanding of the brain's ability for experience-dependent plasticity across the lifespan. What is needed is a better understanding of the mechanisms underlying auditory learning and plasticity at different points in development. Drawing on research in language development and music training, this review examines not only what we learn and when we learn it, but also how learning occurs at different ages. First, we discuss differences in the mechanism of learning and plasticity during and after a sensitive period by examining how language exposure versus training forms language-specific phonetic representations in infants and adult L2 learners, respectively. Second, we examine the impact of musical training that begins at different ages on behavioral and neural indices of auditory and motor processing as well as sensorimotor integration. Third, we examine the extent to which childhood training in one auditory domain can enhance processing in another domain via the transfer of learning between shared neuro-cognitive systems. Specifically, we review evidence for a potential bi-directional transfer of skills between music and language by examining how speaking a tonal language may enhance music processing and, conversely, how early music training can enhance language processing. We conclude with a discussion of the role of attention in auditory learning for learning during and after sensitive periods and outline avenues of future research.
Affiliation(s)
- Erin J. White
- Rotman Research Institute, Baycrest, Toronto, ON, Canada
|
49
|
Zatorre RJ. Predispositions and plasticity in music and speech learning: neural correlates and implications. Science 2013; 342:585-9. [PMID: 24179219 DOI: 10.1126/science.1238414] [Citation(s) in RCA: 97] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Speech and music are remarkable aspects of human cognition and sensory-motor processing. Cognitive neuroscience has focused on them to understand how brain function and structure are modified by learning. Recent evidence indicates that individual differences in anatomical and functional properties of the neural architecture also affect learning and performance in these domains. Here, neuroimaging findings are reviewed that reiterate evidence of experience-dependent brain plasticity, but also point to the predictive validity of such data in relation to new learning in speech and music domains. Indices of neural sensitivity to certain stimulus features have been shown to predict individual rates of learning; individual network properties of brain activity are especially relevant in this regard, as they may reflect anatomical connectivity. Similarly, numerous studies have shown that anatomical features of auditory cortex and other structures, and their anatomical connectivity, are predictive of new sensory-motor learning ability. Implications of this growing body of literature are discussed.
Affiliation(s)
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, QC H3A 2B4, Canada
|
50
|
Patel AD. Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis. Hear Res 2013; 308:98-108. [PMID: 24055761 DOI: 10.1016/j.heares.2013.08.011] [Citation(s) in RCA: 157] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/19/2013] [Revised: 08/18/2013] [Accepted: 08/26/2013] [Indexed: 10/26/2022]
Abstract
A growing body of research suggests that musical training has a beneficial impact on speech processing (e.g., hearing of speech in noise and prosody perception). As this research moves forward, two key questions need to be addressed: 1) Can purely instrumental musical training have such effects? 2) If so, how and why would such effects occur? The current paper offers a conceptual framework for understanding such effects based on mechanisms of neural plasticity. The expanded OPERA hypothesis proposes that when music and speech share sensory or cognitive processing mechanisms in the brain, and music places higher demands on these mechanisms than speech does, this sets the stage for musical training to enhance speech processing. When these higher demands are combined with the emotional rewards of music, the frequent repetition that musical training engenders, and the focused attention that it requires, neural plasticity is activated and makes lasting changes in brain structure and function which impact speech processing. Initial data from a new study motivated by the OPERA hypothesis are presented, focusing on the impact of musical training on speech perception in cochlear-implant users. Suggestions for the development of animal models to test OPERA are also presented, to help motivate neurophysiological studies of how auditory training using non-biological sounds can impact the brain's perceptual processing of species-specific vocalizations. This article is part of a Special Issue entitled "Music: A window into the hearing brain".
Affiliation(s)
- Aniruddh D Patel
- Department of Psychology, Tufts University, 490 Boston Ave., Medford, MA 02155, USA
|