101
Chandrasekaran B, Chan AHD, Wong PCM. Neural Processing of What and Who Information in Speech. J Cogn Neurosci 2011; 23:2690-700. [DOI: 10.1162/jocn.2011.21631]
Abstract
Human speech is composed of two types of information, related to content (lexical information, i.e., “what” is being said [e.g., words]) and to the speaker (indexical information, i.e., “who” is talking [e.g., voices]). The extent to which lexical versus indexical information is represented separately or integrally in the brain is unresolved. In the current experiment, we use short-term fMRI adaptation to address this issue. Participants performed a loudness judgment task during which single or multiple sets of words/pseudowords were repeated in single-talker (repeat) or multiple-talker (speaker-change) conditions while BOLD responses were collected. As reflected by adaptation fMRI, the left posterior middle temporal gyrus, a crucial component of the ventral auditory stream performing sound-to-meaning computations (the “what” pathway), showed sensitivity to lexical as well as indexical information. Previous studies have suggested that speaker information is abstracted away during this stage of auditory word processing. Here, we demonstrate that indexical information is strongly coupled with word information. These findings are consistent with a plethora of behavioral results demonstrating that changes to speaker-related information can influence lexical processing.
Affiliation(s)
- Alice H. D. Chan
- 1Communication Neural Systems Group, Evanston, IL
- 2Northwestern University, Evanston, IL
- Patrick C. M. Wong
- 1Communication Neural Systems Group, Evanston, IL
- 2Northwestern University, Evanston, IL
102
Abstract
We report two sets of experiments showing that the large individual variability in language learning success in adults can be attributed to neurophysiological, neuroanatomical, cognitive, and perceptual factors. In the first set of experiments, native English-speaking adults learned to incorporate lexically meaningful pitch patterns into words. Those who were successful showed higher activation in bilateral auditory cortex, larger volume in Heschl's gyrus, and more accurate pitch pattern perception; all of these measures were taken before training began. In the second set of experiments, native English-speaking adults learned a phonological grammatical system governing the formation of words of an artificial language. Again, neurophysiological, neuroanatomical, and cognitive factors predicted, to an extent, how well these adults learned. Taken together, these experiments suggest that neural and behavioral factors can be used to predict spoken language learning, and that these predictors can inform the redesign of existing training paradigms to maximize learning for learners with different learning profiles. Learning outcomes: Readers will be able to (a) understand the linguistic concepts of lexical tone and phonological grammar, (b) identify the brain regions associated with learning lexical tone and phonological grammar, and (c) identify the cognitive predictors for successful learning of a tone language and phonological rules.
Affiliation(s)
- Patrick C M Wong
- Roxelyn and Richard Pepper Department of Communication Sciences & Disorders, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA.
103
Song JH, Skoe E, Banai K, Kraus N. Training to improve hearing speech in noise: biological mechanisms. Cereb Cortex 2011; 22:1180-90. [PMID: 21799207] [DOI: 10.1093/cercor/bhr196]
Abstract
We investigated training-related improvements in listening in noise and the biological mechanisms mediating these improvements. Training-related malleability was examined using a program that incorporates cognitively based listening exercises to improve speech-in-noise perception. Before and after training, auditory brainstem responses to a speech syllable were recorded in quiet and multitalker noise from adults who ranged in their speech-in-noise perceptual ability. Controls did not undergo training but were tested at intervals equivalent to the trained subjects. Trained subjects exhibited significant improvements in speech-in-noise perception that were retained 6 months later. Subcortical responses in noise demonstrated training-related enhancements in the encoding of pitch-related cues (the fundamental frequency and the second harmonic), particularly for the time-varying portion of the syllable that is most vulnerable to perceptual disruption (the formant transition region). Subjects with the largest strength of pitch encoding at pretest showed the greatest perceptual improvement. Controls exhibited neither neurophysiological nor perceptual changes. We provide the first demonstration that short-term training can improve the neural representation of cues important for speech-in-noise perception. These results implicate and delineate biological mechanisms contributing to learning success, and they provide a conceptual advance to our understanding of the kind of training experiences that can influence sensory processing in adulthood.
Affiliation(s)
- Judy H Song
- Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA
104
Nath AR, Beauchamp MS. A neural basis for interindividual differences in the McGurk effect, a multisensory speech illusion. Neuroimage 2011; 59:781-7. [PMID: 21787869] [DOI: 10.1016/j.neuroimage.2011.07.024]
Abstract
The McGurk effect is a compelling illusion in which humans perceive mismatched audiovisual speech as a completely different syllable. However, some normal individuals do not experience the illusion, reporting that the stimulus sounds the same with or without visual input. Converging evidence suggests that the left superior temporal sulcus (STS) is critical for audiovisual integration during speech perception. We used blood-oxygen level dependent functional magnetic resonance imaging (BOLD fMRI) to measure brain activity as McGurk perceivers and non-perceivers were presented with congruent audiovisual syllables, McGurk audiovisual syllables, and non-McGurk incongruent syllables. The inferior frontal gyrus showed an effect of stimulus condition (greater responses for incongruent stimuli) but not susceptibility group, while the left auditory cortex showed an effect of susceptibility group (greater response in susceptible individuals) but not stimulus condition. Only one brain region, the left STS, showed a significant effect of both susceptibility and stimulus condition. The amplitude of the response in the left STS was significantly correlated with the likelihood of perceiving the McGurk effect: a weak STS response meant that a subject was less likely to perceive the McGurk effect, while a strong response meant that a subject was more likely to perceive it. These results suggest that the left STS is a key locus for interindividual differences in speech perception.
Affiliation(s)
- Audrey R Nath
- Department of Neurobiology and Anatomy, University of Texas Medical School at Houston, Houston TX 77030, USA
105
Dorjee D, Bowers JS. What can fMRI tell us about the locus of learning? Cortex 2011; 48:509-14. [PMID: 21802075] [DOI: 10.1016/j.cortex.2011.06.016]
Affiliation(s)
- Dusana Dorjee
- School of Psychology, Bangor University, Bangor, Wales, UK.
106
Elmer S, Meyer M, Jäncke L. Neurofunctional and behavioral correlates of phonetic and temporal categorization in musically trained and untrained subjects. Cereb Cortex 2011; 22:650-8. [PMID: 21680844] [DOI: 10.1093/cercor/bhr142]
Abstract
The perception of rapidly changing verbal and nonverbal auditory patterns is a fundamental prerequisite for speech and music processing. Previously, the left planum temporale (PT) has been consistently shown to support the discrimination of fast-changing verbal and nonverbal sounds. Furthermore, it has been repeatedly shown that the functional and structural architecture of this supratemporal brain region differs as a function of musical training. In the present study, we used functional magnetic resonance imaging in a sample of professional musicians and nonmusicians to examine the functional contribution of the left PT to the categorization of consonant-vowel syllables and their reduced-spectrum analogues. In line with our hypothesis, the musicians showed enhanced brain responses in the left PT and superior discrimination abilities in the reduced-spectrum condition. Moreover, we found a positive correlation between the responsiveness of the left PT and performance in the reduced-spectrum condition across all subjects, irrespective of musical expertise. These results have implications for our understanding of musical expertise in relation to segmental speech processing.
Affiliation(s)
- Stefan Elmer
- Division of Neuropsychology, Institute of Psychology, University of Zurich, CH-8050 Zurich, Switzerland.
107
Wong FCK, Chandrasekaran B, Garibaldi K, Wong PCM. White matter anisotropy in the ventral language pathway predicts sound-to-word learning success. J Neurosci 2011; 31:8780-5. [PMID: 21677162] [PMCID: PMC3142920] [DOI: 10.1523/jneurosci.0999-11.2011]
Abstract
According to the dual stream model of auditory language processing, the dorsal stream is responsible for mapping sound to articulation and the ventral stream plays the role of mapping sound to meaning. Most researchers agree that the arcuate fasciculus (AF) is the neuroanatomical correlate of the dorsal stream; however, less is known about what constitutes the ventral one. Two hypotheses exist: one suggests that the segment of the AF that terminates in the middle temporal gyrus corresponds to the ventral stream, and the other suggests that it is the extreme capsule that underlies this sound-to-meaning pathway. The goal of this study was to evaluate these two competing hypotheses. We trained participants with a sound-to-word learning paradigm in which they learned to use a foreign phonetic contrast for signaling word meaning. Using diffusion tensor imaging, a brain-imaging tool to investigate white matter connectivity in humans, we found that fractional anisotropy in the left parietal-temporal region positively correlated with performance in sound-to-word learning. In addition, fiber tracking revealed a ventral pathway, composed of the extreme capsule and the inferior longitudinal fasciculus, that mediated auditory comprehension. Our findings provide converging evidence supporting the importance of the ventral stream, an extreme capsule system, in the frontal-temporal language network. Implications for current models of speech processing are also discussed.
Affiliation(s)
- Francis C. K. Wong
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders and
- Kyla Garibaldi
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders and
- Patrick C. M. Wong
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders and
- Hugh Knowles Center for Clinical and Basic Science in Hearing and Its Disorders, Northwestern University, Evanston, Illinois 60208
108
Romei L, Wambacq IJA, Besing J, Koehnke J, Jerger J. Neural indices of spoken word processing in background multi-talker babble. Int J Audiol 2011; 50:321-33. [DOI: 10.3109/14992027.2010.547875]
109
Kumar AU, Hegde M, Mayaleela. Perceptual learning of non-native speech contrast and functioning of the olivocochlear bundle. Int J Audiol 2010; 49:488-96. [PMID: 20528666] [DOI: 10.3109/14992021003645894]
Abstract
The purpose of this study was to investigate the relationship between perceptual learning of non-native speech sounds and the strength of feedback in the medial olivocochlear bundle (MOCB). Discrimination of non-native speech sounds (Malayalam) from their native counterparts (Hindi) was monitored during 12 days of training. Contralateral inhibition of otoacoustic emissions was measured on the first and twelfth days of training. Results suggested that training significantly improved reaction time and accuracy of identification of non-native speech sounds. There was a significant positive correlation between the (linear) slope of identification scores and the change in distortion product otoacoustic emission inhibition at 3000 Hz. The findings suggest that during perceptual learning, feedback from the MOCB may fine-tune the brainstem and/or cochlea. However, such a change, isolated to a narrow frequency region, represents a limited effect and needs further exploration to confirm and/or extend any generalization of findings.
Affiliation(s)
- Ajith U Kumar
- Department of Audiology and Speech Language Pathology, KMC, Attavara, Mangalore, India.
110
Kraus N, Chandrasekaran B. Music training for the development of auditory skills. Nat Rev Neurosci 2010; 11:599-605. [PMID: 20648064] [DOI: 10.1038/nrn2882]
111
Chandrasekaran B, Sampath PD, Wong PCM. Individual variability in cue-weighting and lexical tone learning. J Acoust Soc Am 2010; 128:456-65. [PMID: 20649239] [PMCID: PMC2921440] [DOI: 10.1121/1.3445785]
Abstract
Speech sound patterns can be discerned using multiple acoustic cues. The relative weighting of these cues is known to be language-specific. Speech-sound training in adults induces changes in cue weighting such that relevant acoustic cues are emphasized. In the current study, we examined the extent to which individual variability in cue weighting contributes to differential success in learning to use foreign sound patterns. Sixteen English-speaking adult participants underwent a sound-to-meaning training paradigm during which they learned to incorporate Mandarin linguistic pitch contours into words. In addition to cognitive tests, measures of pitch pattern discrimination and identification were collected from all participants. Reaction time data from the discrimination task were subjected to three-way multidimensional scaling to extract the dimensions underlying tone perception. Two dimensions, relating to pitch height and pitch direction, were found to underlie the non-native tone space. Good learners attended more to pitch direction than poor learners, both before and after training. Training increased the ability to identify and label pitch direction. The results demonstrate that variability in the ability to successfully learn to use pitch in lexical contexts can be explained by pre-training differences in cue weighting.
Affiliation(s)
- Bharath Chandrasekaran
- Roxelyn and Richard Pepper Department of Communication Sciences, Northwestern University, 2240 Campus Drive, Evanston, Illinois 60208, USA
112
Veroude K, Norris DG, Shumskaya E, Gullberg M, Indefrey P. Functional connectivity between brain regions involved in learning words of a new language. Brain Lang 2010; 113:21-27. [PMID: 20116090] [DOI: 10.1016/j.bandl.2009.12.005]
Abstract
Previous studies have identified several brain regions that appear to be involved in the acquisition of novel word forms. Standard word-by-word presentation is often used although exposure to a new language normally occurs in a natural, real world situation. In the current experiment we investigated naturalistic language exposure and applied a model-free analysis for hemodynamic-response data. Functional connectivity, temporal correlations between hemodynamic activity of different areas, was assessed during rest before and after presentation of a movie of a weather report in Mandarin Chinese to Dutch participants. We hypothesized that learning of novel words might be associated with stronger functional connectivity of regions that are involved in phonological processing. Participants were divided into two groups, learners and non-learners, based on the scores on a post hoc word recognition task. The learners were able to recognize Chinese target words from the weather report, while the non-learners were not. In the first resting state period, before presentation of the movie, stronger functional connectivity was observed for the learners compared to the non-learners between the left supplementary motor area and the left precentral gyrus as well as the left insula and the left rolandic operculum, regions that are important for phonological rehearsal. After exposure to the weather report, functional connectivity between the left and right supramarginal gyrus was stronger for learners than for non-learners. This is consistent with a role of the left supramarginal gyrus in the storage of phonological forms. These results suggest both pre-existing and learning-induced differences between the two groups.
Affiliation(s)
- Kim Veroude
- Radboud University Nijmegen, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
113
Crinion JT, Green DW, Chung R, Ali N, Grogan A, Price GR, Mechelli A, Price CJ. Neuroanatomical markers of speaking Chinese. Hum Brain Mapp 2010; 30:4108-15. [PMID: 19530216] [PMCID: PMC3261379] [DOI: 10.1002/hbm.20832]
Abstract
The aim of this study was to identify regional structural differences in the brains of native speakers of a tonal language (Chinese) compared to nontonal (European) language speakers. Our expectation was that there would be differences in regions implicated in pitch perception and production. We therefore compared structural brain images in three groups of participants: 31 who were native Chinese speakers; 7 who were native English speakers who had learnt Chinese in adulthood; and 21 European multilinguals who did not speak Chinese. The results identified two brain regions, in the vicinity of the right anterior temporal lobe and the left insula, where speakers of Chinese had significantly greater gray and white matter density compared with those who did not speak Chinese. Importantly, the effects were found in both native Chinese speakers and European subjects who learnt Chinese as a non-native language, illustrating that they were language related and not ethnicity effects. On the basis of prior studies, we suggest that the locations of these gray and white matter changes in speakers of a tonal language are consistent with a role in linking the pitch of words to their meaning.
Affiliation(s)
- Jenny T Crinion
- Wellcome Trust Centre for Neuroimaging, University College London, United Kingdom.
114
Rodríguez-Fornells A, Cunillera T, Mestres-Missé A, de Diego-Balaguer R. Neurophysiological mechanisms involved in language learning in adults. Philos Trans R Soc Lond B Biol Sci 2009; 364:3711-35. [PMID: 19933142] [PMCID: PMC2846313] [DOI: 10.1098/rstb.2009.0130]
Abstract
Little is known about the brain mechanisms involved in word learning during infancy and in second language acquisition and about the way these new words become stable representations that sustain language processing. In several studies we have adopted the human simulation perspective, studying the effects of brain-lesions and combining different neuroimaging techniques such as event-related potentials and functional magnetic resonance imaging in order to examine the language learning (LL) process. In the present article, we review this evidence focusing on how different brain signatures relate to (i) the extraction of words from speech, (ii) the discovery of their embedded grammatical structure, and (iii) how meaning derived from verbal contexts can inform us about the cognitive mechanisms underlying the learning process. We compile these findings and frame them into an integrative neurophysiological model that tries to delineate the major neural networks that might be involved in the initial stages of LL. Finally, we propose that LL simulations can help us to understand natural language processing and how the recovery from language disorders in infants and adults can be accomplished.
115
Wong PCM, Perrachione TK, Margulis EH. Effects of asymmetric cultural experiences on the auditory pathway: evidence from music. Ann N Y Acad Sci 2009; 1169:157-63. [PMID: 19673772] [DOI: 10.1111/j.1749-6632.2009.04548.x]
Abstract
Cultural experiences come in many different forms, such as immersion in a particular linguistic community, exposure to faces of people with different racial backgrounds, or repeated encounters with music of a particular tradition. In most circumstances, these cultural experiences are asymmetric, meaning one type of experience occurs more frequently than other types (e.g., a person raised in India will likely encounter the Indian todi scale more so than a Westerner). In this paper, we will discuss recent findings from our laboratories that reveal the impact of short- and long-term asymmetric musical experiences on how the nervous system responds to complex sounds. We will discuss experiments examining how musical experience may facilitate the learning of a tone language, how musicians develop neural circuitries that are sensitive to musical melodies played on their instrument of expertise, and how even everyday listeners who have little formal training are particularly sensitive to music of their own culture(s). An understanding of these cultural asymmetries is useful in formulating a more comprehensive model of auditory perceptual expertise that considers how experiences shape auditory skill levels. Such a model has the potential to aid in the development of rehabilitation programs for the efficacious treatment of neurologic impairments.
Affiliation(s)
- Patrick C M Wong
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston/Chicago, Illinois 60208, USA.
116
Wong PCM, Perrachione TK, Gunasekera G, Chandrasekaran B. Communication disorders in speakers of tone languages: etiological bases and clinical considerations. Semin Speech Lang 2009; 30:162-73. [PMID: 19711234] [DOI: 10.1055/s-0029-1225953]
Abstract
Lexical tones are a phonetic contrast necessary for conveying meaning in a majority of the world's languages. Various hearing, speech, and language disorders affect the ability to perceive or produce lexical tones, thereby seriously impairing individuals' communicative abilities. The number of tone language speakers is increasing, even in otherwise English-speaking nations, yet insufficient emphasis has been placed on clinical assessment and rehabilitation of lexical tone disorders. The similarities and dissimilarities between lexical tones and other speech sounds make a richer scientific understanding of their physiological bases paramount to more effective remediation of speech and language disorders in general. Here we discuss the cognitive and biological bases of lexical tones, emphasizing the neural structures and networks that support their acquisition, perception, and cognitive representation. We present emerging research on lexical tone learning in the context of the clinical disorders of hearing, speech, and language that this body of research will help to address.
Affiliation(s)
- Patrick C M Wong
- Communication Neural Systems Research Group, The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA.
117
Cunillera T, Càmara E, Toro JM, Marco-Pallares J, Sebastián-Galles N, Ortiz H, Pujol J, Rodríguez-Fornells A. Time course and functional neuroanatomy of speech segmentation in adults. Neuroimage 2009; 48:541-53. [PMID: 19580874] [DOI: 10.1016/j.neuroimage.2009.06.069]
Abstract
The present investigation was devoted to unraveling the time-course and brain regions involved in speech segmentation, which is one of the first processes necessary for learning a new language in adults and infants. A specific brain electrical pattern resembling the N400 language component was identified as an indicator of speech segmentation of candidate words. This N400 trace was clearly elicited after a short exposure to the words of the new language and showed a decrease in amplitude with longer exposure. Two brain regions were observed to be active during this process: the posterior superior temporal gyrus and the superior part of the ventral premotor cortex. We interpret these findings as evidence for the existence of an auditory-motor interface that is responsible for isolating possible candidate words when learning a new language in adults.
Affiliation(s)
- Toni Cunillera
- Department of Basic Psychology, Faculty of Psychology, University of Barcelona, 08035, Barcelona, Spain
118
Abstract
The way in which normal variations in human neuroanatomy relate to brain function remains largely uninvestigated. This study addresses the question by relating anatomical measurements of Heschl's gyrus (HG), the structure containing human primary auditory cortex, to how this region processes temporal and spectral acoustic information. In this study, subjects' right and left HG were identified and manually indicated on anatomical magnetic resonance imaging scans. Volumes of gray matter, white matter, and total gyrus were recorded, and asymmetry indices were calculated. Additionally, cortical auditory activity in response to noise stimuli varying orthogonally in temporal and spectral dimensions was assessed and related to the volumetric measurements. A high degree of anatomical variability was seen, consistent with other reports in the literature. The auditory cortical responses showed the expected leftward lateralization to varying rates of stimulus change and rightward lateralization of increasing spectral information. An explicit link between auditory structure and function is then established, in which anatomical variability of auditory cortex is shown to relate to individual differences in the way that cortex processes acoustic information. Specifically, larger volumes of left HG were associated with larger extents of rate-related cortex on the left, and larger volumes of right HG related to larger extents of spectral-related cortex on the right. This finding is discussed in relation to known microanatomical asymmetries of HG, including increased myelination of its fibers, and implications for language learning are considered.
119
Margulis EH, Mlsna LM, Uppunda AK, Parrish TB, Wong PCM. Selective neurophysiologic responses to music in instrumentalists with different listening biographies. Hum Brain Mapp 2009; 30:267-75. [PMID: 18072277] [DOI: 10.1002/hbm.20503]
Abstract
To appropriately adapt to constant sensory stimulation, neurons in the auditory system are tuned to various acoustic characteristics, such as center frequencies, frequency modulations, and their combinations, particularly those combinations that carry species-specific communicative functions. The present study asks whether such tunings extend beyond acoustic and communicative functions to auditory self-relevance and expertise. More specifically, we examined the role of the listening biography--an individual's long term experience with a particular type of auditory input--on perceptual-neural plasticity. Two groups of expert instrumentalists (violinists and flutists) listened to matched musical excerpts played on the two instruments (J.S. Bach Partitas for solo violin and flute) while their cerebral hemodynamic responses were measured using fMRI. Our experimental design allowed for a comprehensive investigation of the neurophysiology (cerebral hemodynamic responses as measured by fMRI) of auditory expertise (i.e., when violinists listened to violin music and when flutists listened to flute music) and nonexpertise (i.e., when subjects listened to music played on the other instrument). We found an extensive cerebral network of expertise, which implicates increased sensitivity to musical syntax (BA 44), timbre (auditory association cortex), and sound-motor interactions (precentral gyrus) when listening to music played on the instrument of expertise (the instrument for which subjects had a unique listening biography). These findings highlight auditory self-relevance and expertise as a mechanism of perceptual-neural plasticity, and implicate neural tuning that includes and extends beyond acoustic and communication-relevant structures.
120
Chen C, Xue G, Mei L, Chen C, Dong Q. Cultural neurolinguistics. Prog Brain Res 2009; 178:159-71. [PMID: 19874968 DOI: 10.1016/s0079-6123(09)17811-1] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.6]
Abstract
As the only species that evolved to possess a language faculty, humans have been surprisingly generative in creating a diverse array of language systems. These systems vary in phonology, morphology, syntax, and written forms. Before the advent of modern brain-imaging techniques, little was known about how differences across languages are reflected in the brain. This chapter aims to provide an overview of an emerging area of research - cultural neurolinguistics - that examines systematic cross-cultural/crosslinguistic variations in the neural networks of languages. We first briefly describe general brain networks for written and spoken languages. We then discuss language-specific brain regions by highlighting differences in neural bases of different scripts (logographic vs. alphabetic scripts), orthographies (transparent vs. nontransparent orthographies), and tonality (tonal vs. atonal languages). We also discuss neural basis of second language and the role of native language experience in second-language acquisition. In the last section, we outline a general model that integrates culture and neural bases of language and discuss future directions of research in this area.
Affiliation(s)
- Chuansheng Chen
- Department of Psychology and Social Behavior, University of California, Irvine, CA, USA.
121
Song JH, Skoe E, Wong PCM, Kraus N. Plasticity in the adult human auditory brainstem following short-term linguistic training. J Cogn Neurosci 2008; 20:1892-902. [PMID: 18370594 DOI: 10.1162/jocn.2008.20131] [Citation(s) in RCA: 200] [Impact Index Per Article: 12.5]
Abstract
Peripheral and central structures along the auditory pathway contribute to speech processing and learning. However, because speech consists of functionally and acoustically complex sounds that impose high sensory and cognitive demands, the effects of long-term exposure to and experience with these sounds are usually attributed to the neocortex, with little emphasis placed on subcortical structures. The present study examines changes in the auditory brainstem, specifically the frequency following response (FFR), as native English-speaking adults learn to incorporate foreign speech sounds (lexical pitch patterns) in word identification. The FFR presumably originates from the auditory midbrain and can be elicited preattentively. We measured FFRs to the trained pitch patterns before and after training. Measures of pitch tracking were then derived from the FFR signals. We found increased accuracy in pitch tracking after training, including a decrease in the number of pitch-tracking errors and a refinement in the energy devoted to encoding pitch. Most interestingly, this change in pitch-tracking accuracy occurred only for the most acoustically complex pitch contour (the dipping contour), which is also the least familiar to our English-speaking subjects. These results not only demonstrate the contribution of the brainstem to language learning and its plasticity in adulthood, but also the specificity of this contribution (i.e., changes in encoding occur only for specific, least familiar stimuli, not all stimuli). Our findings complement existing data showing cortical changes after second-language learning, and are consistent with models suggesting that brainstem changes resulting from perceptual learning are most apparent when acuity in encoding is most needed.
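The pitch-tracking measures this abstract refers to can be illustrated with a toy sketch. This is not the authors' analysis pipeline; the short-term autocorrelation approach shown here is one common way to derive a pitch track, and the sampling rate, window sizes, and 10% error criterion are illustrative assumptions.

```python
import numpy as np

def estimate_f0(frame, fs, fmin=80.0, fmax=400.0):
    """Estimate F0 of one frame from the peak of its autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)  # search only plausible pitch lags
    lag = lo + np.argmax(ac[lo:hi + 1])
    return fs / lag

def pitch_track(signal, fs, frame_ms=40, hop_ms=10):
    """Slide a short window over the response; one F0 estimate per frame."""
    n, hop = int(fs * frame_ms / 1000), int(fs * hop_ms / 1000)
    return np.array([estimate_f0(signal[i:i + n], fs)
                     for i in range(0, len(signal) - n, hop)])

# Toy "response" to a rising pitch contour (100 -> 200 Hz over 300 ms).
fs = 8000
t = np.arange(0, 0.3, 1 / fs)
f0 = 100 + 100 * t / t[-1]
response = np.sin(2 * np.pi * np.cumsum(f0) / fs)
track = pitch_track(response, fs)

# One possible pitch-tracking error measure: the fraction of frames deviating
# more than 10% from the stimulus contour (the criterion is an assumption).
target = np.linspace(100, 200, len(track))
error_rate = float(np.mean(np.abs(track - target) / target > 0.1))
```

A decrease in `error_rate` from pre- to post-training recordings would correspond to the improved pitch-tracking accuracy the study reports.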
122
Wong PCM, Jin JX, Gunasekera GM, Abel R, Lee ER, Dhar S. Aging and cortical mechanisms of speech perception in noise. Neuropsychologia 2008; 47:693-703. [PMID: 19124032 DOI: 10.1016/j.neuropsychologia.2008.11.032] [Citation(s) in RCA: 201] [Impact Index Per Article: 12.6]
Abstract
Spoken language processing in noisy environments, a hallmark of the human brain, is subject to age-related decline, even when peripheral hearing might be intact. The present study examines the cortical cerebral hemodynamics (measured by fMRI) associated with such processing in the aging brain. Younger and older subjects identified single words in quiet and in two multi-talker babble noise conditions (SNR 20 dB and -5 dB). Behaviorally, older and younger subjects did not show significant differences in the first two conditions, but older adults performed less accurately in the SNR -5 dB condition. The fMRI results showed reduced activation in the auditory cortex but an increase in working memory and attention-related cortical areas (prefrontal and precuneus regions) in older subjects, especially in the SNR -5 dB condition. Increased cortical activity in general cognitive regions was positively correlated with behavioral performance in older listeners, suggestive of a compensatory strategy. Furthermore, inter-regional correlations revealed that while younger subjects showed a more streamlined cortical network of auditory regions in response to spoken word processing in noise, older subjects showed a more diffuse network involving frontal and ventral brain regions. These results are consistent with the decline-compensation hypothesis and suggest its applicability to the auditory domain.
Affiliation(s)
- Patrick C M Wong
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, 2240 Campus Drive, Evanston, IL 60208-3540, United States.
123
Kent TA, Rutherford DG, Breier JI, Papanicolaou AC. What is the evidence for use dependent learning after stroke? Stroke 2008; 40:S139-40. [PMID: 19064775 DOI: 10.1161/strokeaha.108.534925] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2]
Affiliation(s)
- Thomas A Kent
- Department of Neurology, Baylor College of Medicine and the Michael E. DeBakey VAMC, 2002 Holcombe Blvd, Houston, TX 77005, USA.
124
Wong PCM, Uppunda AK, Parrish TB, Dhar S. Cortical mechanisms of speech perception in noise. J Speech Lang Hear Res 2008; 51:1026-41. [PMID: 18658069 DOI: 10.1044/1092-4388(2008/075)] [Citation(s) in RCA: 102] [Impact Index Per Article: 6.4]
Abstract
PURPOSE The present study examines the brain basis of listening to spoken words in noise, which is a ubiquitous characteristic of communication, with the focus on the dorsal auditory pathway. METHOD English-speaking young adults identified single words in 3 listening conditions while their hemodynamic response was measured using fMRI: speech in quiet, speech in moderately loud noise (signal-to-noise ratio [SNR] 20 dB), and in loud noise (SNR -5 dB). RESULTS Behaviorally, participants' performance (both accuracy and reaction time) did not differ between the quiet and SNR 20 dB condition, whereas they were less accurate and responded slower in the SNR -5 dB condition compared with the other 2 conditions. In the superior temporal gyrus (STG), both left and right auditory cortex showed increased activation in the noise conditions relative to quiet, including the middle portion of STG (mSTG). Although the right posterior STG (pSTG) showed similar activation for the 2 noise conditions, the left pSTG showed increased activation in the SNR -5 dB condition relative to the SNR 20 dB condition. CONCLUSION We found cortical task-independent and noise-dependent effects concerning speech perception in noise involving bilateral mSTG and left pSTG. These results likely reflect demands in acoustic analysis, auditory-motor integration, and phonological memory, as well as auditory attention.
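For readers unfamiliar with the SNR manipulation described in the method, the three listening conditions can be sketched as follows. The mixing recipe is a generic one, not the study's actual stimulus-preparation code, and the sinusoid and white noise are stand-ins for the spoken words and multi-talker babble.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

# Stand-in signals (the study used spoken words in multi-talker babble).
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 150 * t)
babble = np.random.default_rng(0).standard_normal(fs)

quiet = speech                                # speech in quiet
moderate = mix_at_snr(speech, babble, 20.0)   # SNR 20 dB: noise power 1% of speech
loud = mix_at_snr(speech, babble, -5.0)       # SNR -5 dB: noise power ~3.2x speech
```

The comments make the abstract's labels concrete: at SNR 20 dB the noise is nearly negligible relative to the speech, while at SNR -5 dB the noise carries roughly three times the speech power, which is why behavior degrades only in that condition.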
Affiliation(s)
- Patrick C M Wong
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA.
125
Kaan E, Barkley CM, Bao M, Wayland R. Thai lexical tone perception in native speakers of Thai, English and Mandarin Chinese: an event-related potentials training study. BMC Neurosci 2008; 9:53. [PMID: 18573210 PMCID: PMC2483720 DOI: 10.1186/1471-2202-9-53] [Citation(s) in RCA: 53] [Impact Index Per Article: 3.3]
Abstract
BACKGROUND Tone languages such as Thai and Mandarin Chinese use differences in fundamental frequency (F0, pitch) to distinguish lexical meaning. Previous behavioral studies have shown that native speakers of a non-tone language have difficulty discriminating among tone contrasts and are sensitive to different F0 dimensions than speakers of a tone language. The aim of the present ERP study was to investigate the effect of language background and training on the non-attentive processing of lexical tones. EEG was recorded from 12 adult native speakers of Mandarin Chinese, 12 native speakers of American English, and 11 Thai speakers while they were watching a movie and were presented with multiple tokens of low-falling, mid-level and high-rising Thai lexical tones. High-rising or low-falling tokens were presented as deviants among mid-level standard tokens, and vice versa. EEG data and data from a behavioral discrimination task were collected before and after a two-day perceptual categorization training task. RESULTS Behavioral discrimination improved after training in both the Chinese and the English groups. Low-falling tone deviants versus standards elicited a mismatch negativity (MMN) in all language groups. Before, but not after training, the English speakers showed a larger MMN compared to the Chinese, even though the English speakers performed worst in the behavioral tasks. The MMN was followed by a late negativity, which became smaller with improved discrimination. The high-rising deviants versus standards elicited a late negativity, which was left-lateralized only in the English and Chinese groups.
In addition, native speakers of a non-tone language (English) were initially more sensitive to F0 onset differences (low-falling versus mid-level contrast), which was suppressed as a result of training. This result converges with results from previous behavioral studies and supports the view that attentive as well as non-attentive processing of F0 contrasts is affected by language background, but is malleable even in adult learners.
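The MMN measure at the heart of this study is, in essence, a deviant-minus-standard difference wave averaged over a latency window. A minimal sketch on synthetic single-channel epochs (the window bounds, trial counts, and amplitudes are illustrative assumptions, not values from the study):

```python
import numpy as np

def mismatch_negativity(standards, deviants, times, window=(0.1, 0.25)):
    """Average the epochs per condition, subtract, and take the mean amplitude
    of the deviant-minus-standard difference wave inside an MMN latency window."""
    diff = deviants.mean(axis=0) - standards.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return diff, float(diff[mask].mean())

# Synthetic epochs (trials x samples): deviants carry an extra negativity
# peaking near 175 ms, riding on the same noise as the standards.
rng = np.random.default_rng(1)
times = np.linspace(-0.1, 0.5, 601)  # seconds relative to stimulus onset
standards = 0.5 * rng.standard_normal((200, 601))
deviants = (0.5 * rng.standard_normal((200, 601))
            - 2.0 * np.exp(-((times - 0.175) / 0.04) ** 2))
diff_wave, mmn = mismatch_negativity(standards, deviants, times)
# mmn comes out negative here: the deviants elicited a mismatch response.
```

Comparing `mmn` across language groups, and before versus after training, is the kind of contrast the study's statistics were built on.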
Affiliation(s)
- Edith Kaan
- Linguistics, University of Florida, Box 115454, Gainesville, FL 32611, USA.
126
Abstract
The present fMRI study aimed to identify neurofunctional predictors of auditory word learning. Twenty-four native Chinese speakers were trained to learn a logographic artificial language (LAL) for 2 weeks and their behavioral performance was recorded. Participants were also scanned before and after the training while performing a passive listening task. Results showed that, compared to 'poor' learners (those whose performance was below average during the training), 'good' (i.e. above-average) learners showed more activation in the left MTG/STS and less activation in the right IFG during the pretraining scan. These results confirmed the hypothesis that preexisting individual differences in neural activities can predict the efficiency in learning words in a new language.
127
Patel AD, Iversen JR. The linguistic benefits of musical abilities. Trends Cogn Sci 2007; 11:369-72. [PMID: 17698406 DOI: 10.1016/j.tics.2007.08.003] [Citation(s) in RCA: 56] [Impact Index Per Article: 3.3]
Abstract
Growing evidence points to a link between musical abilities and certain phonetic and prosodic skills in language. However, the mechanisms that underlie these relations are not well understood. A recent study by Wong et al. suggests that musical training sharpens the subcortical encoding of linguistic pitch patterns. We consider the implications of their methods and findings for establishing a link between musical training and phonetic abilities more generally.
Affiliation(s)
- Aniruddh D Patel
- The Neurosciences Institute, 10640 John Jay Hopkins Drive, San Diego, CA 92121, USA.
128
Wong PCM, Warrier CM, Penhune VB, Roy AK, Sadehh A, Parrish TB, Zatorre RJ. Volume of left Heschl's Gyrus and linguistic pitch learning. Cereb Cortex 2007; 18:828-36. [PMID: 17652466 PMCID: PMC2805072 DOI: 10.1093/cercor/bhm115] [Citation(s) in RCA: 126] [Impact Index Per Article: 7.4]
Abstract
Research on the contributions of the human nervous system to language processing and learning has generally focused on the association regions of the brain, without considering the possible contribution of primary and adjacent sensory areas. We report a study examining the relationship between language learning and the anatomy of Heschl's Gyrus (HG), which consists predominantly of primary auditory areas and is often found to be associated with nonlinguistic pitch processing. Unlike English, most languages of the world use pitch patterns to signal word meaning. In the present study, native English-speaking adult subjects learned to incorporate foreign pitch patterns in word identification. Subjects who were less successful in learning showed a smaller HG volume on the left (especially gray matter volume), but not on the right, relative to learners who were successful. These results suggest that HG, typically shown to be associated with the processing of acoustic cues in nonspeech contexts, is also involved in speech learning. They further suggest that primary auditory regions may be important for encoding basic acoustic cues during the course of spoken language learning.
Affiliation(s)
- Patrick C M Wong
- The Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA.
129
Dediu D, Ladd DR. Linguistic tone is related to the population frequency of the adaptive haplogroups of two brain size genes, ASPM and Microcephalin. Proc Natl Acad Sci U S A 2007; 104:10944-9. [PMID: 17537923 PMCID: PMC1904158 DOI: 10.1073/pnas.0610848104] [Citation(s) in RCA: 170] [Impact Index Per Article: 10.0]
Abstract
The correlations between interpopulation genetic and linguistic diversities are mostly noncausal (spurious), being due to historical processes and geographical factors that shape them in similar ways. Studies of such correlations usually consider allele frequencies and linguistic groupings (dialects, languages, linguistic families or phyla), sometimes controlling for geographic, topographic, or ecological factors. Here, we consider the relation between allele frequencies and linguistic typological features. Specifically, we focus on the derived haplogroups of the brain growth and development-related genes ASPM and Microcephalin, which show signs of natural selection and a marked geographic structure, and on linguistic tone, the use of voice pitch to convey lexical or grammatical distinctions. We hypothesize that there is a relationship between the population frequency of these two alleles and the presence of linguistic tone and test this hypothesis relative to a large database (983 alleles and 26 linguistic features in 49 populations), showing that it is not due to the usual explanatory factors represented by geography and history. The relationship between genetic and linguistic diversity in this case may be causal: certain alleles can bias language acquisition or processing and thereby influence the trajectory of language change through iterated cultural transmission.
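The statistical idea here, testing whether a gene-frequency/tone association survives controls for shared confounds, can be sketched with a simple partial correlation. This is a deliberate simplification: the study itself used database-wide comparisons against geographic and historical relatedness, not the single-covariate regression shown, and every number below is invented for illustration.

```python
import numpy as np

def partial_corr(x, y, covar):
    """Correlate x and y after regressing a shared covariate out of both."""
    A = np.column_stack([np.ones_like(covar), covar])
    rx = x - A @ np.linalg.lstsq(A, x, rcond=None)[0]
    ry = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return float(np.corrcoef(rx, ry)[0, 1])

# Toy data for 49 "populations": a haplogroup frequency and a binary tone
# indicator, both partly driven by a shared geographic axis, plus a direct
# frequency -> tone link (all values fabricated for this sketch).
rng = np.random.default_rng(2)
geo = rng.uniform(0.0, 1.0, 49)
freq = 0.3 * geo + rng.uniform(0.0, 0.4, 49)
tone = (freq + 0.1 * rng.standard_normal(49) > 0.35).astype(float)

r_raw = float(np.corrcoef(freq, tone)[0, 1])
r_partial = partial_corr(freq, tone, geo)  # association surviving the control
```

If `r_partial` stays well above zero after the covariate is removed, the association is not attributable to that confound alone, which is the logic behind the paper's claim that geography and history do not explain the tone/haplogroup relationship.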
Affiliation(s)
- Dan Dediu
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, 14 Buccleuch Place, Edinburgh EH8 9LN, United Kingdom
- D. Robert Ladd
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, 14 Buccleuch Place, Edinburgh EH8 9LN, United Kingdom
130
Perrachione TK, Wong PCM. Learning to recognize speakers of a non-native language: Implications for the functional organization of human auditory cortex. Neuropsychologia 2007; 45:1899-910. [PMID: 17258240 DOI: 10.1016/j.neuropsychologia.2006.11.015] [Citation(s) in RCA: 72] [Impact Index Per Article: 4.2]
Abstract
Brain imaging studies of voice perception often contrast activation from vocal and verbal tasks to identify regions uniquely involved in processing voice. However, such a strategy precludes detection of the functional relationship between speech and voice perception. In a pair of experiments involving identifying voices from native- and foreign-language speech, we show that, even after repeated exposure to the same foreign-language speakers, accurate talker identification depends in large part on linguistic proficiency. These results suggest that a strong integration between the brain regions implicated in voice perception and speech perception accounts for the accurate identification of talkers.
Affiliation(s)
- Tyler K Perrachione
- Department of Linguistics & Program in Cognitive Science, Northwestern University, Evanston, IL 60208, USA.