1
Chen W, van de Weijer J, Qian Q, Zhu S, Wang M. Tone and vowel disruptions in Mandarin aphasia and apraxia of speech. Clinical Linguistics & Phonetics 2022:1-24. PMID: 35656744. DOI: 10.1080/02699206.2022.2081611.
Abstract
In this study, we investigated the lexical tones and vowels produced by ten speakers diagnosed with aphasia and coexisting apraxia of speech (A-AOS) and ten healthy participants, to compare their tone and vowel disruptions. We first judged the productions of both A-AOS and healthy participants and classified them into three categories: those by healthy speakers rated as correct, those by A-AOS participants rated as correct, and those by A-AOS participants rated as incorrect. We then compared the perceptual results for the three groups based on their respective acoustic correlates to reveal the relations among the accuracy groups. Results showed that tone and vowel disruptions by A-AOS speakers occurred in comparable numbers. In perception, approximately equal numbers of tones and vowels produced by A-AOS participants were identified as correct; however, acoustic parameters showed that, unlike vowels, the patients' tones categorised as correct by native Mandarin listeners differed considerably from those of the healthy speakers, suggesting that for Mandarin A-AOS patients, tones were in fact more disrupted than vowels in acoustic terms. Native Mandarin listeners seemed to be more tolerant of less well-targeted tones than of less well-targeted vowels. The clinical implication is that tonal and segmental practice should be incorporated for Mandarin A-AOS patients to enhance their overall motor speech control.
Affiliation(s)
- Wenjun Chen
- School of Foreign Languages, Ningbo University of Technology, Ningbo, Zhejiang, China
- Qian Qian
- Speech and Language Therapy Department, Shanghai YangZhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), School of Medicine, Tongji University, Shanghai, China
- Shuangshuang Zhu
- Speech and Language Therapy Department, Shanghai YangZhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), School of Medicine, Tongji University, Shanghai, China
- Manna Wang
- Speech and Language Therapy Department, Shanghai YangZhi Rehabilitation Hospital (Shanghai Sunshine Rehabilitation Center), School of Medicine, Tongji University, Shanghai, China
2
Yuan D, Tian H, Zhou Y, Wu J, Sun T, Xiao Z, Shang C, Wang J, Chen X, Sun Y, Tang J, Qiu S, Tan LH. Acupoint-brain (acubrain) mapping: Common and distinct cortical language regions activated by focused ultrasound stimulation on two language-relevant acupoints. Brain and Language 2021; 215:104920. PMID: 33561785. DOI: 10.1016/j.bandl.2021.104920.
Abstract
Acupuncture, taking advantage of modality-specific neural pathways, has shown promising results in the treatment of brain disorders that affect different modalities such as pain and vision. However, the precise mechanisms underlying within-modality neuromodulation of acupoints on human higher-order cognition remain largely unknown. In the present study, we used a non-invasive and easy-to-operate method, focused ultrasound, to stimulate two language-relevant acupoints, namely GB39 (Xuanzhong) and SJ8 (Sanyangluo), in thirty healthy adults. The effect of focused ultrasound stimulation (FUS) on brain activation was examined by functional magnetic resonance imaging (fMRI). We found that stimulating GB39 and SJ8 by FUS evoked overlapping but distinct brain activation patterns. Our findings provide a major step toward within-modality (in this case, language) acupoint-brain (acubrain) mapping and shed light on the potential use of FUS as a personalized treatment option for brain disorders that affect high-level cognitive functions.
Affiliation(s)
- Di Yuan
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Haoyue Tian
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Yulong Zhou
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Jinjian Wu
- The First School of Clinical Medicine, Guangzhou University of Chinese Medicine, Guangzhou, China
- Tong Sun
- School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
- Zhuoni Xiao
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Chunfeng Shang
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Jiaojian Wang
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Xin Chen
- School of Biomedical Engineering, Shenzhen University Health Science Center, Shenzhen, China
- Yimin Sun
- Department of Biomedical Engineering, Medical Systems Biology Research Center, Tsinghua University School of Medicine, Beijing, China
- Joey Tang
- Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
- Shijun Qiu
- Department of Radiology, First Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
- Li Hai Tan
- Guangdong-Hongkong-Macau Institute of CNS Regeneration and Ministry of Education CNS Regeneration Collaborative Joint Laboratory, Jinan University, Guangzhou, China; Center for Language and Brain, Shenzhen Institute of Neuroscience, Shenzhen, China
3
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, Houy-Durand E, Gomot M. Brain correlates of emotional prosodic change detection in autism spectrum disorder. NeuroImage: Clinical 2020; 28:102512. PMID: 33395999. PMCID: PMC8481911. DOI: 10.1016/j.nicl.2020.102512.
Abstract
Highlights
- We used an oddball paradigm with vocal stimuli to record hemodynamic responses.
- Brain processing of vocal change relies on STG, insula and lingual area.
- Activity of the change processing network can be modulated by saliency and emotion.
- Brain processing of vocal deviancy/novelty appears typical in adults with autism.
Autism Spectrum Disorder (ASD) is currently diagnosed by the joint presence of social impairments and restrictive, repetitive patterns of behavior. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies have investigated only one dimension to understand the underlying physiopathology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with autism spectrum disorder during an oddball paradigm that allowed us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relies on activation of the supratemporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in the processing of deviancy with vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was found for vocal stimuli processing or for deviancy/novelty processing, regardless of emotional content. These findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with those cues. This might indicate impairments at later processing stages, or simply show that alterations present in childhood have repercussions in adulthood.
Affiliation(s)
- Agathe Saby
- Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Emmanuelle Houy-Durand
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France; Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Marie Gomot
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France.
4
Vanier C, Pandey T, Parikh S, Rodriguez A, Knoblauch T, Peralta J, Hertzler A, Ma L, Nam R, Musallam S, Taylor H, Vickery T, Zhang Y, Ranzenberger L, Nguyen A, Kapostasy M, Asturias A, Fazzini E, Snyder T. Interval-censored survival analysis of mild traumatic brain injury with outcome based neuroimaging clinical applications. Journal of Concussion 2020. DOI: 10.1177/2059700220947194.
Abstract
Objective: The purpose of this study was to assess the relationship between MRI findings and clinical presentation and outcomes in patients following mild traumatic brain injury (mTBI). We hypothesized that imaging findings other than hemorrhages and contusions may be used to predict symptom presentation and longevity following mTBI.

Methods: Patients (n = 250) diagnosed with mTBI and in litigation for brain injury underwent 3T magnetic resonance imaging (MRI). A retrospective chart review was performed to assess symptom presentation and improvement/resolution. To account for variable times of clinical presentation, nonuniform follow-up, and uncertainty in the dates of symptom resolution, a right-censored, interval-censored statistical analysis was performed. Incidence and resolution of headache, balance problems, cognitive deficit, fatigue, anxiety, depression, and emotional lability were compared among patients. Imaging findings analyzed included white matter hyperintensities (WMH), Diffusion Tensor Imaging (DTI) fractional anisotropy (FA) values, MR perfusion, auditory functional MRI (fMRI) activation, hippocampal atrophy (HA) and hippocampal asymmetry as defined by NeuroQuant® volumetric software.

Results: Patients who reported loss of consciousness (LOC) were significantly more likely to present with balance problems (p < 0.001), cognitive deficits (p = 0.010), fatigue (p = 0.025), depression (p = 0.002), and emotional lability (p = 0.002). Patients with LOC also demonstrated significantly slower recovery of cognitive function than those who did not lose consciousness (p = 0.044). Patients over the age of 40 had significantly higher odds of presenting with balance problems (p = 0.006). Additionally, these older patients were slower to recover cognitive function (p = 0.001) and less likely to experience improvement of headaches (p = 0.007). Abnormal MRI did not correlate significantly with symptom presentation, but was a strong indicator of symptom progression, with slower recovery of balance (p = 0.009) and cognitive deficits (p < 0.001).

Conclusion: This analysis demonstrates the utility of clinical data analysis using the interval-censored survival statistical technique in head trauma patients. Strong statistical associations between neuroimaging findings and aggregate clinical outcomes were identified in patients with mTBI.
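The interval-censored setup this abstract describes can be illustrated with a short sketch. This is not the authors' pipeline: it is a minimal, hypothetical example (the `intervals` values are invented, and a simple exponential model fitted by grid search stands in for their full analysis) showing how resolution times observed only between clinic visits enter the likelihood.

```python
import math

# Hypothetical data: each patient's symptom-resolution time T (months) is
# known only to lie in an interval (L, R], where L is the last visit with
# the symptom present and R the first visit without it.
# R = math.inf encodes right-censoring (resolution never observed).
intervals = [(1.0, 3.0), (2.0, 5.0), (0.5, 2.0), (4.0, math.inf), (6.0, math.inf)]

def neg_log_likelihood(rate):
    """Negative log-likelihood of an exponential resolution-time model.

    Each interval contributes P(L < T <= R) = S(L) - S(R), where
    S(t) = exp(-rate * t); right-censored cases reduce to S(L).
    """
    total = 0.0
    for low, high in intervals:
        s_low = math.exp(-rate * low)
        s_high = 0.0 if math.isinf(high) else math.exp(-rate * high)
        total += math.log(s_low - s_high)
    return -total

# Crude maximum-likelihood fit by grid search (adequate for one parameter).
grid = [i / 1000.0 for i in range(1, 2001)]  # candidate rates, per month
rate_hat = min(grid, key=neg_log_likelihood)
median_resolution = math.log(2) / rate_hat  # model's median time to resolution
```

A real analysis would add covariates (e.g., LOC, age, MRI findings) and a more flexible hazard, but the interval contribution S(L) − S(R) is the core of the interval-censored likelihood.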
Affiliation(s)
- Cheryl Vanier
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Trisha Pandey
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Shaunaq Parikh
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- IMGEN LLC., Las Vegas, NV, USA
- Department of Family Medicine, University of Pittsburgh Medical Center Pinnacle, Harrisburg, PA, USA
- John Peralta
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Amanda Hertzler
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Leon Ma
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Ruslan Nam
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Sami Musallam
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Hallie Taylor
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Taylor Vickery
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Yolanda Zhang
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Logan Ranzenberger
- Department of Radiology, Michigan State University, East Lansing, MI, USA
- Department of Radiology, McClaren Health Care, Flint, MI, USA
- Andrew Nguyen
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Mike Kapostasy
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- IMGEN LLC., Las Vegas, NV, USA
- Alex Asturias
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Enrico Fazzini
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- Travis Snyder
- Department of Research, Touro University Nevada, Las Vegas, NV, USA
- IMGEN LLC., Las Vegas, NV, USA
- SimonMed Imaging, Las Vegas, NV, USA
5
Disentangling phonological and articulatory processing: A neuroanatomical study in aphasia. Neuropsychologia 2018; 121:175-185. DOI: 10.1016/j.neuropsychologia.2018.10.015.
6
Tse CY, Yip LY, Lui TKY, Xiao XZ, Wang Y, Chu WCW, Parks NA, Chan SSM, Neggers SFW. Establishing the functional connectivity of the frontotemporal network in pre-attentive change detection with Transcranial Magnetic Stimulation and event-related optical signal. NeuroImage 2018; 179:403-413. DOI: 10.1016/j.neuroimage.2018.06.053.
7
Battistella G, Kumar V, Simonyan K. Connectivity profiles of the insular network for speech control in healthy individuals and patients with spasmodic dysphonia. Brain Structure and Function 2018. PMID: 29520481. DOI: 10.1007/s00429-018-1644-y.
Abstract
The importance of the insula in speech control is acknowledged but poorly understood, partly because of the variety of clinical symptoms resulting from insults to this structure. To clarify its structural organization within the speech network in healthy subjects, we used probabilistic diffusion tractography to examine insular connectivity with three cortical regions responsible for sound processing [Brodmann area (BA) 22], motor preparation (BA 44) and motor execution (laryngeal/orofacial primary motor cortex, BA 4). To assess insular reorganization in a speech disorder, we examined its structural connectivity in patients with spasmodic dysphonia (SD), a neurological condition that selectively affects speech production. We demonstrated structural segregation of the insula into three non-overlapping regions, which receive distinct connections from BA 44 (anterior insula), BA 4 (mid-insula) and BA 22 (dorsal and posterior insula). There were no significant differences in either the number of streamlines connecting each insular subdivision to its cortical target or the hemispheric lateralization of insular clusters and their projections between healthy subjects and SD patients. However, the spatial distribution of the insular subdivisions connected to BA 4 and BA 44 was distinctly organized in healthy controls and SD patients, extending ventro-posteriorly in the former group and antero-dorsally in the latter. Our findings point to structural segregation of the insular sub-regions, which may be associated with different aspects of sensorimotor and cognitive control of speech production. We suggest that distinct insular involvement may lead to different clinical manifestations when one or the other insular region and/or its connections undergoes spatial reorganization.
Affiliation(s)
- Giovanni Battistella
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Veena Kumar
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Kristina Simonyan
- Department of Otolaryngology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, 243 Charles Street, Suite 421, Boston, MA, 02114, USA.
8
Neumann Y, Epstein B, Shafer VL. Electrophysiological indices of brain activity to content and function words in discourse. International Journal of Language & Communication Disorders 2016; 51:546-555. PMID: 26992119. DOI: 10.1111/1460-6984.12230.
Abstract
Background: An increase in positivity of event-related potentials (ERPs) at the lateral anterior sites has been hypothesized to be an index of semantic and discourse processing, with the right lateral anterior positivity (LAP) showing particular sensitivity to discourse factors. However, the research investigating the LAP is limited; it is unclear whether the effect is driven by word class (function word versus content word) or by a more general process of structure building triggered by elements of a determiner phrase (DP).

Aims: To examine the neurophysiological indices of semantic/discourse integration using two different word categories (function versus content word) in discourse contexts, and to contrast processing of these word categories in meaningful versus nonsense contexts.

Methods & Procedures: Planned comparisons of ERPs time-locked to a function word stimulus 'the' and a content word stimulus 'cats' in sentence-initial position were conducted in both discourse and nonsense contexts to examine the time course of processing following these word forms.

Outcomes & Results: A repeated-measures analysis of variance (ANOVA) for the Discourse context revealed a significant interaction of condition and site due to greater positivity for 'the' relative to 'cats' at anterior and superior sites. In the Nonsense context, there was a significant interaction of condition, time and site due to greater positivity for 'the' relative to 'cats' at anterior sites from 150 to 350 ms post-stimulus offset and at superior sites from 150 to 200 ms post-stimulus offset. Overall, greater positivity for both 'the' and 'cats' was observed in the discourse relative to the nonsense context beginning approximately 150 ms post-stimulus offset. Additionally, topographical analyses were highly correlated for the two word categories when processing meaningful discourse. This topographical pattern could be characterized as a prominent right LAP. The LAP was attenuated when the target stimulus word initiated a nonsense context.

Conclusions & Implications: The results of this study support the view that the right LAP is an index of general discourse processing rather than an index of word class. These findings demonstrate that the LAP can be used to study discourse processing in populations with compromised metalinguistic skills, such as adults with aphasia or traumatic brain injury.
Affiliation(s)
- Yael Neumann
- Queens College, City University of New York, Queens, NY, USA
- Baila Epstein
- Brooklyn College, City University of New York, Brooklyn, NY, USA
- Valerie L Shafer
- The Graduate Center, City University of New York, New York, NY, USA
9
Poliva O. From Mimicry to Language: A Neuroanatomically Based Evolutionary Model of the Emergence of Vocal Language. Frontiers in Neuroscience 2016; 10:307. PMID: 27445676. PMCID: PMC4928493. DOI: 10.3389/fnins.2016.00307.
Abstract
The auditory cortex communicates with the frontal lobe via the middle temporal gyrus (auditory ventral stream; AVS) or the inferior parietal lobule (auditory dorsal stream; ADS). Whereas the AVS is ascribed only with sound recognition, the ADS is ascribed with sound localization, voice detection, prosodic perception/production, lip-speech integration, phoneme discrimination, articulation, repetition, phonological long-term memory and working memory. Previously, I interpreted the juxtaposition of sound localization, voice detection, audio-visual integration and prosodic analysis, as evidence that the behavioral precursor to human speech is the exchange of contact calls in non-human primates. Herein, I interpret the remaining ADS functions as evidence of additional stages in language evolution. According to this model, the role of the ADS in vocal control enabled early Homo (Hominans) to name objects using monosyllabic calls, and allowed children to learn their parents' calls by imitating their lip movements. Initially, the calls were forgotten quickly but gradually were remembered for longer periods. Once the representations of the calls became permanent, mimicry was limited to infancy, and older individuals encoded in the ADS a lexicon for the names of objects (phonological lexicon). Consequently, sound recognition in the AVS was sufficient for activating the phonological representations in the ADS and mimicry became independent of lip-reading. Later, by developing inhibitory connections between acoustic-syllabic representations in the AVS and phonological representations of subsequent syllables in the ADS, Hominans became capable of concatenating the monosyllabic calls for repeating polysyllabic words (i.e., developed working memory). Finally, due to strengthening of connections between phonological representations in the ADS, Hominans became capable of encoding several syllables as a single representation (chunking). Consequently, Hominans began vocalizing and mimicking/rehearsing lists of words (sentences).
10
Humphries C, Sabri M, Lewis K, Liebenthal E. Hierarchical organization of speech perception in human auditory cortex. Frontiers in Neuroscience 2014; 8:406. PMID: 25565939. PMCID: PMC4263085. DOI: 10.3389/fnins.2014.00406.
Abstract
Human speech consists of a variety of articulated sounds that vary dynamically in spectral composition. We investigated the neural activity associated with the perception of two types of speech segments: (a) the period of rapid spectral transition occurring at the beginning of a stop-consonant vowel (CV) syllable and (b) the subsequent spectral steady-state period occurring during the vowel segment of the syllable. Functional magnetic resonance imaging (fMRI) was recorded while subjects listened to series of synthesized CV syllables and non-phonemic control sounds. Adaptation to specific sound features was measured by varying either the transition or steady-state periods of the synthesized sounds. Two spatially distinct brain areas in the superior temporal cortex were found that were sensitive to either the type of adaptation or the type of stimulus. In a relatively large section of the bilateral dorsal superior temporal gyrus (STG), activity varied as a function of adaptation type regardless of whether the stimuli were phonemic or non-phonemic. Immediately adjacent to this region, in a more limited area of the ventral STG, increased activity was observed for phonemic trials compared to non-phonemic trials; however, no adaptation effects were found. In addition, a third area in the bilateral medial superior temporal plane showed increased activity to non-phonemic compared to phonemic sounds. The results suggest a multi-stage hierarchical stream for speech sound processing extending ventrolaterally from the superior temporal plane to the superior temporal sulcus. At successive stages in this hierarchy, neurons code for increasingly more complex spectrotemporal features. At the same time, these representations become more abstracted from the original acoustic form of the sound.
Affiliation(s)
- Colin Humphries
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Merav Sabri
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Kimberly Lewis
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA
- Einat Liebenthal
- Department of Neurology, Medical College of Wisconsin, Milwaukee, WI, USA; Department of Psychiatry, Brigham and Women's Hospital, Boston, MA, USA
11
Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hearing Research 2013; 307:29-41. PMID: 23938208. DOI: 10.1016/j.heares.2013.08.001.
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed at revealing any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region. Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland.
12
Teki S, Barnes GR, Penny WD, Iverson P, Woodhead ZVJ, Griffiths TD, Leff AP. The right hemisphere supports but does not replace left hemisphere auditory function in patients with persisting aphasia. Brain 2013; 136:1901-12. PMID: 23715097. DOI: 10.1093/brain/awt087.
Abstract
In this study, we used magnetoencephalography and a mismatch paradigm to investigate speech processing in stroke patients with auditory comprehension deficits and age-matched control subjects. We probed connectivity within and between the two temporal lobes in response to phonemic (different word) and acoustic (same word) oddballs using dynamic causal modelling. We found stronger modulation of self-connections as a function of phonemic differences for control subjects versus aphasics in left primary auditory cortex and bilateral superior temporal gyrus. The patients showed stronger modulation of connections from right primary auditory cortex to right superior temporal gyrus (feed-forward) and from left primary auditory cortex to right primary auditory cortex (interhemispheric). This differential connectivity can be explained on the basis of a predictive coding theory which suggests increased prediction error and decreased sensitivity to phonemic boundaries in the aphasics' speech network in both hemispheres. Within the aphasics, we also found behavioural correlates with connection strengths: a negative correlation between phonemic perception and an inter-hemispheric connection (left superior temporal gyrus to right superior temporal gyrus), and a positive correlation between semantic performance and a feedback connection (right superior temporal gyrus to right primary auditory cortex). Our results suggest that aphasics with impaired speech comprehension have less veridical speech representations in both temporal lobes, and rely more on the right hemisphere auditory regions, particularly right superior temporal gyrus, for processing speech. Despite this presumed compensatory shift in network connectivity, the patients remain significantly impaired.
Affiliation(s)
- Sundeep Teki
- Wellcome Trust Centre for Neuroimaging, University College London, London WC1N 3BG, UK.
|
13
|
Abstract
During speech production, auditory processing of self-generated speech is used to adjust subsequent articulations. The current study investigated how the proposed auditory-motor interactions are manifest at the neural level in native and non-native speakers of English who were overtly naming pictures of objects and reading their written names. Data were acquired with functional magnetic resonance imaging and analyzed with dynamic causal modeling. We found that (1) higher activity in articulatory regions caused activity in auditory regions to decrease (i.e., auditory suppression), and (2) higher activity in auditory regions caused activity in articulatory regions to increase (i.e., auditory feedback). In addition, we were able to demonstrate that (3) speaking in a non-native language involves more auditory feedback and less auditory suppression than speaking in a native language. The difference between native and non-native speakers was further supported by finding that, within non-native speakers, there was less auditory feedback for those with better verbal fluency. Consequently, the networks of more fluent non-native speakers looked more like those of native speakers. Together, these findings provide a foundation on which to explore auditory-motor interactions during speech production in other human populations, particularly those with speech difficulties.
|
14
|
Abstract
Human hearing is constructive. For example, when a voice is partially replaced by an extraneous sound (e.g., on the telephone due to a transmission problem), the auditory system may restore the missing portion so that the voice can be perceived as continuous (Miller and Licklider, 1950; for review, see Bregman, 1990; Warren, 1999). The neural mechanisms underlying this continuity illusion have been studied mostly with schematic stimuli (e.g., simple tones) and are still a matter of debate (for review, see Petkov and Sutter, 2011). The goal of the present study was to elucidate how these mechanisms operate under more natural conditions. Using psychophysics and electroencephalography (EEG), we assessed simultaneously the perceived continuity of a human vowel sound through interrupting noise and the concurrent neural activity. We found that vowel continuity illusions were accompanied by a suppression of the 4 Hz EEG power in auditory cortex (AC) that was evoked by the vowel interruption. This suppression was stronger than the suppression accompanying continuity illusions of a simple tone. Finally, continuity perception and 4 Hz power depended on the intactness of the sound that preceded the vowel (i.e., the auditory context). These findings show that a natural sound may be restored during noise due to the suppression of 4 Hz AC activity evoked early during the noise. This mechanism may attenuate sudden pitch changes, adapt the resistance of the auditory system to extraneous sounds across auditory scenes, and provide a useful model for assisted hearing devices.
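The 4 Hz band-power measure central to this abstract can be illustrated with a small sketch. This is not the authors' EEG pipeline (they analysed evoked responses in auditory cortex across trials and conditions); it is a generic, hedged example of estimating power in a narrow band with Welch's method on a synthetic signal, with the sampling rate, band edges, and data all invented for illustration.

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, f_lo, f_hi):
    """Mean Welch PSD of signal x within the [f_lo, f_hi] Hz band."""
    freqs, psd = welch(x, fs=fs, nperseg=fs)  # 1 s windows -> 1 Hz bins
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

# Synthetic 10 s "trial": a 4 Hz component embedded in broadband noise.
fs = 250                                 # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
trial = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)

p_4hz = band_power(trial, fs, 3, 5)      # power around 4 Hz
p_ref = band_power(trial, fs, 20, 22)    # reference band containing noise only
```

In a study like the one above, such an estimate would be computed per trial and then compared between perceived-continuous and perceived-interrupted trials, rather than on a single synthetic signal.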
|
15
|
Hailstone JC, Ridgway GR, Bartlett JW, Goll JC, Crutch SJ, Warren JD. Accent processing in dementia. Neuropsychologia 2012;50:2233-44. [PMID: 22664324] [PMCID: PMC3484399] [DOI: 10.1016/j.neuropsychologia.2012.05.027]
Abstract
Accented speech conveys important nonverbal information about the speaker as well as presenting the brain with the problem of decoding a non-canonical auditory signal. The processing of non-native accents has seldom been studied in neurodegenerative disease and its brain basis remains poorly understood. Here we investigated the processing of non-native international and regional accents of English in cohorts of patients with Alzheimer's disease (AD; n=20) and progressive nonfluent aphasia (PNFA; n=6) in relation to healthy older control subjects (n=35). A novel battery was designed to assess accent comprehension and recognition and all subjects had a general neuropsychological assessment. Neuroanatomical associations of accent processing performance were assessed using voxel-based morphometry on MR brain images within the larger AD group. Compared with healthy controls, both the AD and PNFA groups showed deficits of non-native accent recognition and the PNFA group showed reduced comprehension of words spoken in international accents compared with a Southern English accent. At individual subject level deficits were observed more consistently in the PNFA group, and the disease groups showed different patterns of accent comprehension impairment (generally more marked for sentences in AD and for single words in PNFA). Within the AD group, grey matter associations of accent comprehension and recognition were identified in the anterior superior temporal lobe. The findings suggest that accent processing deficits may constitute signatures of neurodegenerative disease with potentially broader implications for understanding how these diseases affect vocal communication under challenging listening conditions.
Affiliation(s)
- Julia C. Hailstone
- Dementia Research Centre, UCL Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Gerard R. Ridgway
- Dementia Research Centre, UCL Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Wellcome Trust Centre for Neuroimaging, UCL Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Jonathan W. Bartlett
- Dementia Research Centre, UCL Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Department of Medical Statistics, London School of Hygiene & Tropical Medicine, London, UK
- Johanna C. Goll
- Dementia Research Centre, UCL Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Sebastian J. Crutch
- Dementia Research Centre, UCL Institute of Neurology, Queen Square, London WC1N 3BG, UK
- Jason D. Warren
- Dementia Research Centre, UCL Institute of Neurology, Queen Square, London WC1N 3BG, UK
|
16
|
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012;62:816-47. [PMID: 22584224] [PMCID: PMC3398395] [DOI: 10.1016/j.neuroimage.2012.04.062]
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again leading to some consistent and undisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.
|
17
|
Lidzba K, Schwilling E, Grodd W, Krägeloh-Mann I, Wilke M. Language comprehension vs. language production: age effects on fMRI activation. Brain Lang 2011;119:6-15. [PMID: 21450336] [DOI: 10.1016/j.bandl.2011.02.003]
Abstract
Normal language acquisition is a process that unfolds with amazing speed primarily in the first years of life. However, the refinement of linguistic proficiency is an ongoing process, extending well into childhood and adolescence. An increase in lateralization and a more focussed productive language network have been suggested to be the neural correlates of this process. However, the processes underlying the refinement of language comprehension are less clear. Using a language comprehension (Beep Stories) and a language production (Vowel Identification) task in fMRI, we studied language representation and lateralization in 36 children, adolescents, and young adults (age 6-24 years). For the language comprehension network, we found a more focal activation with age in the bilateral superior temporal gyri. No significant increase of lateralization with age could be observed, so the neural basis of language comprehension as assessed with the Beep Stories task seems to be established in a bilateral network by late childhood. For the productive network, however, we could confirm an increase with age both in focus and lateralization. Only in the language comprehension task did verbal IQ correlate with lateralization, with higher verbal IQ being associated with more right-hemispheric involvement. In some subjects (24%), language comprehension and language production were lateralized to opposite hemispheres.
Affiliation(s)
- Karen Lidzba
- Pediatric Neurology & Developmental Medicine, Tübingen, Germany.
|
19
|
Parker Jones O, Green DW, Grogan A, Pliatsikas C, Filippopolitis K, Ali N, Lee HL, Ramsden S, Gazarian K, Prejawa S, Seghier ML, Price CJ. Where, when and why brain activation differs for bilinguals and monolinguals during picture naming and reading aloud. Cereb Cortex 2012;22:892-902. [PMID: 21705392] [PMCID: PMC3306575] [DOI: 10.1093/cercor/bhr161]
Abstract
Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in 5 left hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding; articulation of each word is less rehearsed; and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, languages.
Affiliation(s)
- Oiwi Parker Jones
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, University College London, London WC1N 3BG, UK.
|
20
|
Woods DL, Herron TJ, Cate AD, Kang X, Yund EW. Phonological processing in human auditory cortical fields. Front Hum Neurosci 2011;5:42. [PMID: 21541252] [PMCID: PMC3082852] [DOI: 10.3389/fnhum.2011.00042]
Abstract
We used population-based cortical-surface analysis of functional magnetic resonance imaging data to characterize the processing of consonant–vowel–consonant syllables (CVCs) and spectrally matched amplitude-modulated noise bursts (AMNBs) in human auditory cortex as subjects attended to auditory or visual stimuli in an intermodal selective attention paradigm. Average auditory cortical field (ACF) locations were defined using tonotopic mapping in a previous study. Activations in auditory cortex were defined by two stimulus-preference gradients: (1) Medial belt ACFs preferred AMNBs and lateral belt and parabelt fields preferred CVCs. This preference extended into core ACFs with medial regions of primary auditory cortex (A1) and the rostral field preferring AMNBs and lateral regions preferring CVCs. (2) Anterior ACFs showed smaller activations but more clearly defined stimulus preferences than did posterior ACFs. Stimulus preference gradients were unaffected by auditory attention, suggesting that ACF preferences reflect the automatic processing of different spectrotemporal sound features.
Affiliation(s)
- David L Woods
- Human Cognitive Neurophysiology Laboratory, Department of Veterans Affairs Northern California Health Care System Martinez, CA, USA
|
21
|
Foley JA, Della Sala S. Do shorter Cortex papers have greater impact? Cortex 2011;47:635-42. [PMID: 21463860] [DOI: 10.1016/j.cortex.2011.03.008]
|
22
|
Abstract
Behavioral studies have demonstrated that learning to read and write affects the processing of spoken language. The present study investigates the neural mechanism underlying the emergence of such orthographic effects during speech processing. Transcranial magnetic stimulation (TMS) was used to tease apart two competing hypotheses that consider this orthographic influence to be either a consequence of a change in the nature of the phonological representations during literacy acquisition or a consequence of online coactivation of the orthographic and phonological representations during speech processing. Participants performed an auditory lexical decision task in which the orthographic consistency of spoken words was manipulated and repetitive TMS was used to interfere with either phonological or orthographic processing by stimulating left supramarginal gyrus (SMG) or left ventral occipitotemporal cortex (vOTC), respectively. The advantage for consistently spelled words was removed only when the stimulation was delivered to SMG and not to vOTC, providing strong evidence that this effect arises at a phonological, rather than an orthographic, level. We propose a possible mechanistic explanation for the role of SMG in phonological processing and how this is affected by learning to read.
|
23
|
Jääskeläinen IP. The role of speech production system in audiovisual speech perception. Open Neuroimag J 2010;4:30-6. [PMID: 20922046] [PMCID: PMC2948144] [DOI: 10.2174/1874440001004020030]
Abstract
Seeing the articulatory gestures of the speaker significantly enhances speech perception. Findings from recent neuroimaging studies suggest that activation of the speech motor system during lipreading enhances speech perception by tuning, in a top-down fashion, speech-sound processing in the superior aspects of the posterior temporal lobe. Anatomically, the superior-posterior temporal lobe areas receive connections from the auditory, visual, and speech motor cortical areas. Thus, it is possible that neuronal receptive fields are shaped during development to respond to speech-sound features that coincide with visual and motor speech cues, in contrast with the anterior/lateral temporal lobe areas that might process speech sounds predominantly based on acoustic cues. The superior-posterior temporal lobe areas have also been consistently associated with auditory spatial processing. Thus, the involvement of these areas in audiovisual speech perception might partly be explained by the spatial processing requirements when associating sounds, seen articulations, and one's own motor movements. Tentatively, it is possible that the anterior "what" and posterior "where/how" auditory cortical processing pathways are parts of an interacting network, the instantaneous state of which determines what one ultimately perceives, as potentially reflected in the dynamics of oscillatory activity.
Affiliation(s)
- Iiro P Jääskeläinen
- Department of Biomedical Engineering and Computational Science, Aalto University, Espoo, Finland
|
24
|
Cortical representation of natural complex sounds: effects of acoustic features and auditory object category. J Neurosci 2010;30:7604-12. [PMID: 20519535] [DOI: 10.1523/jneurosci.0296-10.2010]
Abstract
How the brain processes complex sounds, like voices or musical instrument sounds, is currently not well understood. The features comprising the acoustic profiles of such sounds are thought to be represented by neurons responding to increasing degrees of complexity throughout auditory cortex, with complete auditory "objects" encoded by neurons (or small networks of neurons) in anterior superior temporal regions. Although specialized voice and speech-sound regions have been proposed, it is unclear how other types of complex natural sounds are processed within this object-processing pathway. Using functional magnetic resonance imaging, we sought to demonstrate spatially distinct patterns of category-selective activity in human auditory cortex, independent of semantic content and low-level acoustic features. Category-selective responses were identified in anterior superior temporal regions, consisting of clusters selective for musical instrument sounds and for human speech. An additional subregion was identified that was particularly selective for the acoustic-phonetic content of speech. In contrast, regions along the superior temporal plane closer to primary auditory cortex were not selective for stimulus category, responding instead to specific acoustic features embedded in natural sounds, such as spectral structure and temporal modulation. Our results support a hierarchical organization of the anteroventral auditory-processing stream, with the most anterior regions representing the complete acoustic signature of auditory objects.
|
26
|
Abstract
In this review of 100 fMRI studies of speech comprehension and production, published in 2009, activation is reported for: prelexical speech perception in bilateral superior temporal gyri; meaningful speech in middle and inferior temporal cortex; semantic retrieval in the left angular gyrus and pars orbitalis; and sentence comprehension in bilateral superior temporal sulci. For incomprehensible sentences, activation increases in four inferior frontal regions, posterior planum temporale, and ventral supramarginal gyrus. These effects are associated with the use of prior knowledge of semantic associations, word sequences, and articulation that predict the content of the sentence. Speech production activates the same set of regions as speech comprehension but in addition, activation is reported for: word retrieval in left middle frontal cortex; articulatory planning in the left anterior insula; the initiation and execution of speech in left putamen, pre-SMA, SMA, and motor cortex; and for suppressing unintended responses in the anterior cingulate and bilateral head of caudate nuclei. Anatomical and functional connectivity studies are now required to identify the processing pathways that integrate these areas to support language.
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, Institute of Neurology, UCL, London, UK.
|
27
|
Blackmon K, Barr WB, Kuzniecky R, Dubois J, Carlson C, Quinn BT, Blumberg M, Halgren E, Hagler DJ, Mikhly M, Devinsky O, McDonald CR, Dale AM, Thesen T. Phonetically irregular word pronunciation and cortical thickness in the adult brain. Neuroimage 2010;51:1453-8. [PMID: 20302944] [DOI: 10.1016/j.neuroimage.2010.03.028]
Abstract
Accurate pronunciation of phonetically irregular words (exception words) requires prior exposure to unique relationships between orthographic and phonemic features. Whether such word knowledge is accompanied by structural variation in areas associated with orthographic-to-phonemic transformations has not been investigated. We used high-resolution MRI to determine whether performance on a visual word-reading test composed of phonetically irregular words, the Wechsler Test of Adult Reading (WTAR), is associated with regional variations in cortical structure. A sample of 60 right-handed, neurologically intact individuals was administered the WTAR and underwent 3T volumetric MRI. Using quantitative, surface-based image analysis, cortical thickness was estimated at each vertex on the cortical mantle and correlated with WTAR scores while controlling for age. Higher scores on the WTAR were associated with thicker cortex in bilateral anterior superior temporal gyrus, bilateral angular gyrus/posterior superior temporal gyrus, and left hemisphere intraparietal sulcus. Higher scores were also associated with thinner cortex in left hemisphere posterior fusiform gyrus and central sulcus, bilateral inferior frontal gyrus, and right hemisphere lingual gyrus and supramarginal gyrus. These results suggest that the ability to correctly pronounce phonetically irregular words is associated with structural variations in cortical areas that are commonly activated in functional neuroimaging studies of word reading, including areas associated with grapheme-to-phonemic conversion.
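The thickness-behaviour association controlled for age can be sketched as a partial correlation: regress age out of both variables and correlate the residuals. This is a minimal stand-in for the vertex-wise surface-based analysis in the study; the data below are simulated and every coefficient is invented for illustration.

```python
import numpy as np

def partial_corr(x, y, covar):
    """Pearson r between x and y after regressing covar out of each."""
    def residual(v, c):
        design = np.column_stack([np.ones_like(c), c])  # intercept + covariate
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx, ry = residual(x, covar), residual(y, covar)
    return float(np.corrcoef(rx, ry)[0, 1])

# Simulated sample of 60 subjects: thickness and reading score both depend
# on age, plus a shared component that survives the age adjustment.
rng = np.random.default_rng(1)
age = rng.uniform(20, 60, 60)
shared = rng.standard_normal(60)
thickness = 3.0 - 0.01 * age + 0.10 * shared + 0.05 * rng.standard_normal(60)
score = 100 + 0.2 * age + 5.0 * shared + 1.0 * rng.standard_normal(60)

r = partial_corr(thickness, score, age)  # age-adjusted association
```

In the actual analysis this adjustment is applied at each vertex of the cortical surface rather than to a single summary measure.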
Affiliation(s)
- Karen Blackmon
- Comprehensive Epilepsy Center, Department of Neurology, New York University, New York, NY 10016, USA
|
28
|
Foley JA, Della Sala S. Geographical distribution of Cortex publications. Cortex 2010;46:410-9. [DOI: 10.1016/j.cortex.2009.11.010]
|
29
|
Crinion JT, Green DW, Chung R, Ali N, Grogan A, Price GR, Mechelli A, Price CJ. Neuroanatomical markers of speaking Chinese. Hum Brain Mapp 2009;30:4108-15. [PMID: 19530216] [PMCID: PMC3261379] [DOI: 10.1002/hbm.20832]
Abstract
The aim of this study was to identify regional structural differences in the brains of native speakers of a tonal language (Chinese) compared to nontonal (European) language speakers. Our expectation was that there would be differences in regions implicated in pitch perception and production. We therefore compared structural brain images in three groups of participants: 31 who were native Chinese speakers; 7 who were native English speakers who had learnt Chinese in adulthood; and 21 European multilinguals who did not speak Chinese. The results identified two brain regions in the vicinity of the right anterior temporal lobe and the left insula where speakers of Chinese had significantly greater gray and white matter density compared with those who did not speak Chinese. Importantly, the effects were found in both native Chinese speakers and European subjects who learnt Chinese as a non-native language, illustrating that they were language related and not ethnicity effects. On the basis of prior studies, we suggest that the locations of these gray and white matter changes in speakers of a tonal language are consistent with a role in linking the pitch of words to their meaning.
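A voxelwise group comparison of the kind used in voxel-based morphometry can be sketched as a mass-univariate two-sample t-test with a multiple-comparison correction. This is not the authors' SPM analysis; the group sizes echo the study (31 vs. 21) but the data and the planted effect are simulated, and Bonferroni stands in for whatever correction was actually applied.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_vox = 1000                                       # "voxels" (illustrative)
group_a = rng.normal(0.5, 0.05, size=(31, n_vox))  # e.g. tonal-language group
group_b = rng.normal(0.5, 0.05, size=(21, n_vox))  # e.g. comparison group
group_a[:, :10] += 0.08                            # plant an effect in 10 voxels

# One independent-samples t-test per voxel, then Bonferroni correction.
t, p = stats.ttest_ind(group_a, group_b, axis=0)
p_corrected = np.minimum(p * n_vox, 1.0)
significant = np.flatnonzero(p_corrected < 0.05)
```

Real VBM additionally normalises and smooths the images and models covariates such as age and total intracranial volume before the voxelwise test.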
Affiliation(s)
- Jenny T Crinion
- Wellcome Trust Centre for Neuroimaging, University College London, United Kingdom.
|