1
Dole M, Vilain C, Haldin C, Baciu M, Cousin E, Lamalle L, Lœvenbruck H, Vilain A, Schwartz JL. Comparing the selectivity of vowel representations in cortical auditory vs. motor areas: A repetition-suppression study. Neuropsychologia 2022; 176:108392. DOI: 10.1016/j.neuropsychologia.2022.108392.
2
Preisig BC, Riecke L, Hervais-Adelman A. Speech sound categorization: The contribution of non-auditory and auditory cortical regions. Neuroimage 2022; 258:119375. PMID: 35700949; DOI: 10.1016/j.neuroimage.2022.119375.
Abstract
Which processes in the human brain lead to the categorical perception of speech sounds? Investigation of this question is hampered by the fact that categorical speech perception is normally confounded by acoustic differences in the stimulus. By using ambiguous sounds, however, it is possible to dissociate acoustic from perceptual stimulus representations. Twenty-seven normally hearing individuals took part in an fMRI study in which they were presented with an ambiguous syllable (intermediate between /da/ and /ga/) in one ear and with a disambiguating acoustic feature (the third formant, F3) in the other ear. Multi-voxel pattern searchlight analysis was used to identify brain areas that consistently differentiated between response patterns associated with different syllable reports. By comparing responses to different stimuli with identical syllable reports and responses to identical stimuli with different syllable reports, we disambiguated whether these regions primarily differentiated the acoustics of the stimuli or the syllable report. We found that BOLD activity patterns in left perisylvian regions (STG, SMG), left inferior frontal regions (vMC, IFG, AI), left supplementary motor cortex (SMA/pre-SMA), and right motor and somatosensory regions (M1/S1) represent listeners' syllable report irrespective of stimulus acoustics. Most of these regions are outside of what is traditionally regarded as auditory or phonological processing areas. Our results indicate that the process of speech sound categorization implicates decision-making mechanisms and auditory-motor transformations.
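The core logic of the searchlight analysis summarized above can be illustrated with a minimal, hypothetical sketch: within a single searchlight sphere, a classifier decodes the listener's syllable report from single-trial voxel patterns while the stimulus acoustics are held constant. The data, voxel counts, and variable names below are simulated placeholders, not the authors' pipeline.

```python
# Hypothetical sketch of report decoding within one searchlight sphere.
# All data are simulated; `n_voxels`, `n_trials` are illustrative only.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 50            # trials with physically identical stimuli
report = rng.integers(0, 2, n_trials)  # 0 = /da/ report, 1 = /ga/ report

# Simulated voxel patterns: noise plus a weak percept-related signal
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[report == 1, :10] += 0.5      # percept signal in a subset of voxels

# Cross-validated decoding of the syllable report; chance level = 0.5
acc = cross_val_score(SVC(kernel="linear"), patterns, report, cv=5).mean()
print(f"Report decoding accuracy (identical stimuli): {acc:.2f}")
```

In the full analysis this accuracy would be computed for every sphere centre in the brain and set against the complementary contrast (decoding the stimulus from trials with identical syllable reports).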
Affiliation(s)
- Basil C Preisig
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands; Department of Psychology, Neurolinguistics, University of Zurich, 8050 Zurich, Switzerland; Department of Comparative Language Science, Evolutionary Neuroscience of Language, University of Zurich, 8050 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, 8057 Zurich, Switzerland
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 ER Maastricht, The Netherlands
- Alexis Hervais-Adelman
- Department of Psychology, Neurolinguistics, University of Zurich, 8050 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, 8057 Zurich, Switzerland
3
Xie Y, He Y, Guan M, Zhou G, Wang Z, Ma Z, Wang H, Yin H. Impact of low-frequency rTMS on functional connectivity of the dentate nucleus subdomains in schizophrenia patients with auditory verbal hallucination. J Psychiatr Res 2022; 149:87-96. PMID: 35259665; DOI: 10.1016/j.jpsychires.2022.02.030.
Abstract
Although low-frequency repetitive transcranial magnetic stimulation (rTMS) is effective in treating schizophrenia patients with auditory verbal hallucinations (AVH), the underlying neural mechanisms of the effect still need to be clarified. Using the cerebellar dentate nucleus (DN) subdomains (dorsal and ventral DN) as seeds, the present study investigated resting-state functional connectivity (FC) alterations of the seeds with the whole brain and their associations with clinical responses in schizophrenia patients with AVH receiving 1 Hz rTMS treatment. The results showed that the rTMS treatment improved psychiatric symptoms (e.g., AVH and positive symptoms) and certain neurocognitive functions (e.g., visual learning and verbal learning) in the patients. In addition, at baseline the patients showed increased FC between the DN subdomains and temporal lobes (e.g., right superior temporal gyrus and right middle temporal gyrus) and decreased FC between the DN subdomains and the left superior frontal gyrus, right postcentral gyrus, left supramarginal gyrus, and regional cerebellum (e.g., lobule 4-5) compared to controls. Furthermore, these abnormal DN subdomain connectivity patterns did not persist after rTMS treatment, and the decreased FC between the DN subdomains and cerebellar lobule 4-5 was reversed. Linear regression analysis showed that the differences in FC of the DN subdomains with the temporal lobes, supramarginal gyrus, and cerebellar lobule 4-5 between baseline and posttreatment were associated with clinical improvements (e.g., AVH and verbal learning) after rTMS treatment. These results suggest that rTMS treatment may modulate the neural circuits of the DN subdomains and hint at the neural mechanisms underlying low-frequency rTMS treatment of schizophrenia with AVH.
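As a rough illustration of the seed-based analysis summarized above, the sketch below computes a seed-to-voxel correlation map from simulated time series and Fisher z-transforms it for group statistics. It is a generic sketch of seed-based functional connectivity, not the authors' preprocessing or statistical pipeline.

```python
# Illustrative seed-based functional connectivity on simulated resting-state data.
import numpy as np

rng = np.random.default_rng(1)
n_timepoints, n_voxels = 200, 1000
voxel_ts = rng.normal(size=(n_timepoints, n_voxels))  # whole-brain voxel time series
seed_ts = voxel_ts[:, :20].mean(axis=1)               # mean signal within the seed mask

# Pearson correlation of the seed with each voxel, then Fisher r-to-z transform
seed_z = (seed_ts - seed_ts.mean()) / seed_ts.std()
vox_z = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
r = vox_z.T @ seed_z / n_timepoints
fc_map = np.arctanh(np.clip(r, -0.999999, 0.999999))  # z map used for group statistics
print(fc_map.shape)
```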
Affiliation(s)
- Yuanjun Xie
- School of Education, Xinyang College, Xinyang, China; Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Ying He
- Department of Psychiatry, Second Affiliated Hospital, Army Medical University, Chongqing, China
- Muzhen Guan
- Department of Mental Health, Xi'an Medical University, Xi'an, China
- Zhongheng Wang
- Department of Psychiatry, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Zhujing Ma
- Department of Military Psychology, School of Psychology, Fourth Military Medical University, Xi'an, China
- Huaning Wang
- Department of Psychiatry, Xijing Hospital, Fourth Military Medical University, Xi'an, China
- Hong Yin
- Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
4
Abstract
Human speech perception results from neural computations that transform external acoustic speech signals into internal representations of words. The superior temporal gyrus (STG) contains the nonprimary auditory cortex and is a critical locus for phonological processing. Here, we describe how speech sound representation in the STG relies on fundamentally nonlinear and dynamical processes, such as categorization, normalization, contextual restoration, and the extraction of temporal structure. A spatial mosaic of local cortical sites on the STG exhibits complex auditory encoding for distinct acoustic-phonetic and prosodic features. We propose that as a population ensemble, these distributed patterns of neural activity give rise to abstract, higher-order phonemic and syllabic representations that support speech perception. This review presents a multi-scale, recurrent model of phonological processing in the STG, highlighting the critical interface between auditory and language systems.
Affiliation(s)
- Ilina Bhaya-Grossman
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA; Joint Graduate Program in Bioengineering, University of California, Berkeley and San Francisco, California 94720, USA
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, California 94143, USA
5
Renvall H, Seol J, Tuominen R, Sorger B, Riecke L, Salmelin R. Selective auditory attention within naturalistic scenes modulates reactivity to speech sounds. Eur J Neurosci 2021; 54:7626-7641. PMID: 34697833; PMCID: PMC9298413; DOI: 10.1111/ejn.15504.
Abstract
Rapid recognition and categorization of sounds are essential for humans and animals alike, both for understanding and reacting to our surroundings and for daily communication and social interaction. For humans, perception of speech sounds is of crucial importance. In real life, this task is complicated by the presence of a multitude of meaningful non-speech sounds. The present behavioural, magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) study set out to address how attention to speech versus attention to natural non-speech sounds within complex auditory scenes influences cortical processing. The stimuli were superimpositions of spoken words and environmental sounds, with parametric variation of the speech-to-environmental sound intensity ratio. The participants' task was to detect a repetition in either the speech or the environmental sound. We found that, specifically when participants attended to speech within the superimposed stimuli, higher speech-to-environmental sound ratios resulted in shorter sustained MEG responses, stronger BOLD fMRI signals (especially in the left supratemporal auditory cortex), and improved behavioural performance. No such effects of speech-to-environmental sound ratio were observed when participants attended to the environmental sound part within the exact same stimuli. These findings suggest stronger saliency of speech compared with other meaningful sounds during processing of natural auditory scenes, likely linked to speech-specific top-down and bottom-up mechanisms activated during speech perception that are needed for tracking speech in real-life-like auditory environments.
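The stimulus manipulation described above (parametric variation of the speech-to-environmental sound intensity ratio) can be sketched as follows. The signals are synthetic placeholders and the function name `mix_at_ratio` is purely illustrative; the authors' actual stimulus construction is not reproduced here.

```python
# Superimpose two sounds at a target speech-to-environment RMS ratio (in dB).
import numpy as np

def rms(x):
    return float(np.sqrt(np.mean(x ** 2)))

def mix_at_ratio(speech, env, ratio_db):
    """Scale `env` so that the speech-to-environment RMS ratio equals `ratio_db`."""
    target_env_rms = rms(speech) / (10 ** (ratio_db / 20))
    return speech + env * (target_env_rms / rms(env))

fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)            # placeholder "speech" signal
env = np.random.default_rng(2).normal(size=fs)  # placeholder environmental sound

for ratio_db in (-6, 0, 6, 12):                 # parametric variation of the ratio
    mixture = mix_at_ratio(speech, env, ratio_db)
    print(ratio_db, round(float(np.max(np.abs(mixture))), 2))
```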
Affiliation(s)
- Hanna Renvall
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland; BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, University of Helsinki and Aalto University School of Science, Helsinki, Finland
- Jaeho Seol
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
- Riku Tuominen
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
- Bettina Sorger
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Lars Riecke
- Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland; Aalto NeuroImaging, Aalto University, Espoo, Finland
6
De Letter M, Cocquyt EM, Cromheecke O, Criel Y, De Cock E, De Herdt V, Szmalec A, Duyck W. The Protective Influence of Bilingualism on the Recovery of Phonological Input Processing in Aphasia After Stroke. Front Psychol 2021; 11:553970. PMID: 33479564; PMCID: PMC7814870; DOI: 10.3389/fpsyg.2020.553970.
Abstract
Language-related potentials are increasingly used to objectify (mal)adaptive neuroplasticity in stroke-related aphasia recovery. Using preattentive (mismatch negativity, MMN) and attentive (P300) phonologically related paradigms, neuroplasticity in sensory memory and cognitive functioning underlying phonological processing can be investigated. In aphasic patients, MMN amplitudes are generally reduced for speech sounds, with a topographic source distribution in the right hemisphere. For P300 amplitudes and latencies, both normal and abnormal results have been reported. The current study investigates the preattentive and attentive phonological discrimination ability of 17 aphasic patients (6 monolinguals and 11 bilinguals, aged 41–71 years) at two timepoints during aphasia recovery. Between the two timepoints, a significant improvement of behavioral language performance in both languages is observed in all patients, with the MMN latency at timepoint 1 serving as a predictive factor for aphasia recovery. In contrast to monolinguals, bilingual aphasic patients have a higher probability of improving their processing speed during rehabilitation, resulting in a shortening of the MMN latency over time, which sometimes progresses toward the normative values.
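For readers unfamiliar with how an MMN is typically quantified, the following sketch simulates standard and deviant epochs, forms the deviant-minus-standard difference wave, and reads off its peak latency. The sampling rate, latency window, and amplitudes are illustrative assumptions, not the study's parameters.

```python
# Toy MMN difference-wave computation on simulated EEG epochs.
import numpy as np

fs = 500                                   # sampling rate (Hz), assumed
times = np.arange(-0.1, 0.4, 1 / fs)       # epoch from -100 to 400 ms
rng = np.random.default_rng(3)

def simulate_epochs(n, mmn_amp):
    erp = mmn_amp * np.exp(-((times - 0.15) ** 2) / (2 * 0.03 ** 2))  # peak near 150 ms
    return erp + rng.normal(scale=1.0, size=(n, times.size))

standard = simulate_epochs(400, mmn_amp=0.0).mean(axis=0)
deviant = simulate_epochs(100, mmn_amp=-2.0).mean(axis=0)

difference = deviant - standard            # MMN difference wave
window = (times >= 0.1) & (times <= 0.25)  # typical MMN latency window
peak_idx = np.argmin(difference[window])   # MMN is a negativity
print(f"MMN peak latency: {times[window][peak_idx] * 1000:.0f} ms")
```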
Affiliation(s)
- Miet De Letter
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Oona Cromheecke
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Yana Criel
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Elien De Cock
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Veerle De Herdt
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Arnaud Szmalec
- Psychological Sciences Research Institute, Université catholique de Louvain, Louvain-la-Neuve, Belgium; Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Wouter Duyck
- Department of Experimental Psychology, Ghent University, Ghent, Belgium
7
Whitten A, Key AP, Mefferd AS, Bodfish JW. Auditory event-related potentials index faster processing of natural speech but not synthetic speech over nonspeech analogs in children. Brain Lang 2020; 207:104825. PMID: 32563764; DOI: 10.1016/j.bandl.2020.104825.
Abstract
Given the crucial role of speech sounds in human language, it may be beneficial for speech to be supported by more efficient auditory and attentional neural processing mechanisms compared to nonspeech sounds. However, previous event-related potential (ERP) studies have found either no differences or slower auditory processing of speech than nonspeech, as well as inconsistent attentional processing. We hypothesized that this may be due to the use of synthetic stimuli in past experiments. The present study measured ERP responses during passive listening to both synthetic and natural speech and complexity-matched nonspeech analog sounds in 22 8-11-year-old children. We found that although children were more likely to show immature auditory ERP responses to the more complex natural stimuli, ERP latencies were significantly faster to natural speech compared to cow vocalizations, but were significantly slower to synthetic speech compared to tones. The attentional results indicated a P3a orienting response only to the cow sound, and we discuss potential methodological reasons for this. We conclude that our results support more efficient auditory processing of natural speech sounds in children, though more research with a wider array of stimuli will be necessary to confirm these results. Our results also highlight the importance of using natural stimuli in research investigating the neurobiology of language.
Affiliation(s)
- Allison Whitten
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA
- Alexandra P Key
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- Antje S Mefferd
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA
- James W Bodfish
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S., Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, 1601 23rd Ave. S, Nashville, TN, USA; Vanderbilt Kennedy Center, 110 Magnolia Cir, Nashville, TN, USA; Vanderbilt Brain Institute, 6133 Medical Research Building III, 465 21st Avenue S., Nashville, TN, USA
8
Translating preclinical findings in clinically relevant new antipsychotic targets: focus on the glutamatergic postsynaptic density. Implications for treatment resistant schizophrenia. Neurosci Biobehav Rev 2019; 107:795-827. DOI: 10.1016/j.neubiorev.2019.08.019.
9
Nucifora FC, Woznica E, Lee BJ, Cascella N, Sawa A. Treatment resistant schizophrenia: Clinical, biological, and therapeutic perspectives. Neurobiol Dis 2019; 131:104257. PMID: 30170114; PMCID: PMC6395548; DOI: 10.1016/j.nbd.2018.08.016.
Abstract
Treatment resistant schizophrenia (TRS) refers to the significant proportion of schizophrenia patients who continue to have symptoms and poor outcomes despite treatment. While many definitions of TRS include failure of two different antipsychotics as a minimum criterion, the wide variability in inclusion criteria has challenged the consistency and reproducibility of results from studies of TRS. We begin by reviewing the clinical, neuroimaging, and neurobiological characteristics of TRS. We further review the current treatment strategies available, addressing clozapine, the first-line pharmacological agent for TRS, as well as pharmacological and non-pharmacological augmentation of clozapine including medication combinations, electroconvulsive therapy, repetitive transcranial magnetic stimulation, deep brain stimulation, and psychotherapies. We conclude by highlighting the most recent consensus for defining TRS proposed by the Treatment Response and Resistance in Psychosis Working Group, and provide our overview of future perspectives and directions that could help advance the field of TRS research, including the concept of TRS as a potential subtype of schizophrenia.
Affiliation(s)
- Frederick C Nucifora
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins Hospital, 600 N. Wolfe St., Baltimore, MD 21287, USA
- Edgar Woznica
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins Hospital, 600 N. Wolfe St., Baltimore, MD 21287, USA
- Brian J Lee
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins Hospital, 600 N. Wolfe St., Baltimore, MD 21287, USA
- Nicola Cascella
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins Hospital, 600 N. Wolfe St., Baltimore, MD 21287, USA
- Akira Sawa
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins Hospital, 600 N. Wolfe St., Baltimore, MD 21287, USA
10
Robson H, Griffiths TD, Grube M, Woollams AM. Auditory, Phonological, and Semantic Factors in the Recovery From Wernicke's Aphasia Poststroke: Predictive Value and Implications for Rehabilitation. Neurorehabil Neural Repair 2019; 33:800-812. PMID: 31416400; DOI: 10.1177/1545968319868709.
Abstract
Background. Understanding the factors that influence language recovery in aphasia is important for improving prognosis and treatment. Chronic comprehension impairments in Wernicke's aphasia (WA) are associated with impairments in auditory and phonological processing, compounded by semantic and executive difficulties. This study investigated whether the recovery of auditory, phonological, semantic, or executive factors underpins the recovery from WA comprehension impairments by charting changes in the neuropsychological profile from the subacute to the chronic phase. Method. This study used a prospective, longitudinal observational design. Twelve WA participants with superior temporal lobe lesions were recruited 2 months post-stroke onset (2 MPO). Language comprehension was measured alongside a neuropsychological profile of auditory, phonological, and semantic processing and phonological short-term memory and nonverbal reasoning at 3 poststroke time points: 2.5, 5, and 9 MPO. Results. Language comprehension displayed a strong and consistent recovery between 2.5 and 9 MPO. Improvements were also seen for slow auditory temporal processing, phonological short-term memory, and semantic processing but not for rapid auditory temporal, spectrotemporal, and phonological processing. Despite their lack of improvement, rapid auditory temporal processing at 2.5 MPO and phonological processing at 5 MPO predicted comprehension outcomes at 9 MPO. Conclusions. These results indicate that recovery of language comprehension in WA can be predicted from fixed auditory processing in the subacute stage. This suggests that speech comprehension recovery in WA results from reorganization of the remaining language comprehension network to enable the residual speech signal to be processed more efficiently, rather than partial recovery of underlying auditory, phonological, or semantic processing abilities.
Affiliation(s)
- Manon Grube
- Newcastle University, Newcastle-upon-Tyne, UK; Aarhus University, Denmark; Technische Universität Berlin, Germany
11
Saltuklaroglu T, Bowers A, Harkrider AW, Casenhiser D, Reilly KJ, Jenson DE, Thornton D. EEG mu rhythms: Rich sources of sensorimotor information in speech processing. Brain Lang 2018; 187:41-61. PMID: 30509381; DOI: 10.1016/j.bandl.2018.09.005.
Affiliation(s)
- Tim Saltuklaroglu
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA.
- Andrew Bowers
- University of Arkansas, Epley Center for Health Professions, 606 N. Razorback Road, Fayetteville, AR 72701, USA
- Ashley W Harkrider
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Devin Casenhiser
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- Kevin J Reilly
- Department of Audiology and Speech-Language Pathology, University of Tennessee Health Sciences, Knoxville, TN 37996, USA
- David E Jenson
- Department of Speech and Hearing Sciences, Elson S. Floyd College of Medicine, Spokane, WA 99210-1495, USA
- David Thornton
- Department of Hearing, Speech, and Language Sciences, Gallaudet University, 800 Florida Avenue NE, Washington, DC 20002, USA
12
Manes JL, Tjaden K, Parrish T, Simuni T, Roberts A, Greenlee JD, Corcos DM, Kurani AS. Altered resting-state functional connectivity of the putamen and internal globus pallidus is related to speech impairment in Parkinson's disease. Brain Behav 2018; 8:e01073. PMID: 30047249; PMCID: PMC6160640; DOI: 10.1002/brb3.1073.
Abstract
INTRODUCTION Speech impairment in Parkinson's disease (PD) is pervasive, with life-impacting consequences. Yet, little is known about how functional connections between the basal ganglia and cortex relate to PD speech impairment (PDSI). Whole-brain resting-state connectivity analyses of basal ganglia nuclei can expand the understanding of PDSI pathophysiology. METHODS Resting-state data from 89 right-handed subjects were downloaded from the Parkinson's Progression Markers Initiative database. Subjects included 12 older healthy controls ("OHC"), 42 PD patients without speech impairment ("PDN"), and 35 PD subjects with speech impairment ("PDSI"). Subjects were assigned to PDN and PDSI groups based on the Movement Disorders Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) Part III speech item scores ("0" vs. "1-4"). Whole-brain functional connectivity was calculated for four basal ganglia seeds in each hemisphere: putamen, caudate, external globus pallidus (GPe), and internal globus pallidus (GPi). For each seed region, group-averaged connectivity maps were compared among OHC, PDN, and PDSI groups using a multivariate ANCOVA controlling for the effects of age and sex. Subsequent planned pairwise t-tests were performed to determine differences between the three groups using a voxel-wise threshold of p < 0.001 and cluster-extent threshold of 272 mm3 (FWE<0.05). RESULTS In comparison with OHCs, both PDN and PDSI groups demonstrated significant differences in cortical connectivity with bilateral putamen, bilateral GPe, and right caudate. Compared to the PDN group, the PDSI subjects demonstrated significant differences in cortical connectivity with left putamen and left GPi. PDSI subjects had lower connectivity between the left putamen and left superior temporal gyrus compared to PDN. In addition, PDSI subjects had greater connectivity between left GPi and three cortical regions: left dorsal premotor/laryngeal motor cortex, left angular gyrus, and right angular gyrus. CONCLUSIONS The present findings suggest that speech impairment in PD is associated with altered cortical connectivity with left putamen and left GPi.
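The group comparison described above (voxel-wise contrasts of seed-connectivity maps with age and sex as covariates) can be sketched generically as an ordinary-least-squares model with a t-contrast on the group term. The data below are simulated and the threshold is a placeholder; this is not the authors' PPMI processing pipeline.

```python
# Voxel-wise group comparison of connectivity maps with nuisance covariates (toy data).
import numpy as np

rng = np.random.default_rng(4)
n_sub, n_vox = 77, 500                      # e.g., PDN + PDSI subjects, brain voxels
group = np.r_[np.zeros(42), np.ones(35)]    # 0 = PDN, 1 = PDSI
age = rng.normal(65, 8, n_sub)
sex = rng.integers(0, 2, n_sub)
fc = rng.normal(size=(n_sub, n_vox))
fc[group == 1, :25] -= 0.4                  # simulated lower connectivity in PDSI

X = np.column_stack([np.ones(n_sub), group, age - age.mean(), sex])
beta, *_ = np.linalg.lstsq(X, fc, rcond=None)
resid = fc - X @ beta
dof = n_sub - X.shape[1]
sigma2 = (resid ** 2).sum(axis=0) / dof
c = np.array([0, 1, 0, 0])                  # contrast on the group effect
se = np.sqrt(sigma2 * (c @ np.linalg.inv(X.T @ X) @ c))
t_map = beta[1] / se
print("voxels exceeding |t| > 3.2:", int((np.abs(t_map) > 3.2).sum()))
```

In a real analysis, the surviving voxels would additionally be subjected to a cluster-extent correction, as in the study's FWE-corrected cluster threshold.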
Affiliation(s)
- Jordan L. Manes
- Department of Physical Therapy and Human Movement Sciences, Northwestern University, Chicago, Illinois
- Kris Tjaden
- Department of Communication Disorders and Sciences, University at Buffalo, Buffalo, New York
- Todd Parrish
- Department of Radiology, Northwestern University, Chicago, Illinois
- Tanya Simuni
- Ken and Ruth Davee Department of Neurology, Northwestern University, Chicago, Illinois; The Parkinson's Disease and Movement Disorders Clinic, Northwestern University, Chicago, Illinois
- Angela Roberts
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois
- Daniel M. Corcos
- Department of Physical Therapy and Human Movement Sciences, Northwestern University, Chicago, Illinois
- Ajay S. Kurani
- Department of Radiology, Northwestern University, Chicago, Illinois
13
Donohew L, DiBartolo M, Zhu X, Benca C, Lorch E, Noar SM, Kelly TH, Joseph JE. Communicating with Sensation Seekers: An fMRI Study of Neural Responses to Antidrug Public Service Announcements. Health Commun 2018; 33:1004-1012. PMID: 28622027; PMCID: PMC6190582; DOI: 10.1080/10410236.2017.1331185.
Abstract
This study examined the neural basis of processing high- and low-message sensation value (MSV) antidrug public service announcements (PSAs) in high (HSS) and low sensation seekers (LSS) using fMRI. HSS more strongly engaged the salience network when processing PSAs (versus LSS), suggesting that high-MSV PSAs attracted their attention. HSS and LSS participants who engaged higher level cognitive processing regions reported that the PSAs were more convincing and believable and recalled the PSAs better immediately after testing. In contrast, HSS and LSS participants who strongly engaged visual attention regions for viewing PSAs reported lower personal relevance. These findings provide neurobiological evidence that high-MSV content is salient to HSS, a primary target group for antidrug messages, and additional cognitive processing is associated with higher perceived message effectiveness.
Affiliation(s)
- Xun Zhu
- Department of Neurosciences, Medical University of South Carolina
- Chelsie Benca
- Department of Neurosciences, Medical University of South Carolina
- Seth M. Noar
- Department of Communication, University of Kentucky
- Jane E. Joseph
- Department of Neurosciences, Medical University of South Carolina
14
Abstract
Although the parietal lobe was considered by many of the earliest investigators of disordered language to be a major component of the neural systems instantiating language, most views of the anatomic substrate of language emphasize the role of temporal and frontal lobes in language processing. We review evidence from lesion studies as well as functional neuroimaging, demonstrating that the left parietal lobe is also crucial for several aspects of language. First, we argue that the parietal lobe plays a major role in semantic processing, particularly for "thematic" relationships in which information from multiple sensory and motor domains is integrated. Additionally, we review a number of accounts that emphasize the role of the left parietal lobe in phonologic processing. Although the accounts differ somewhat with respect to the nature of the linguistic computations subserved by the parietal lobe, they share the view that the parietal lobe is essential for the processes by which sound-based representations are transcoded into a format that can drive action systems. We suggest that investigations of the linguistic capacities of the parietal lobe constrained by the understanding of the parietal lobe in action and multimodal sensory integration may serve to enhance not only our understanding of language, but also the relationship between language and more basic brain functions.
Affiliation(s)
- H Branch Coslett
- Department of Neurology, Perelman School of Medicine at the University of Pennsylvania, Philadelphia, PA, United States
- Myrna F Schwartz
- Moss Rehabilitation Research Institute, Elkins Park, PA, United States
15
Tremblay P, Sato M, Deschamps I. Age differences in the motor control of speech: An fMRI study of healthy aging. Hum Brain Mapp 2017; 38:2751-2771. PMID: 28263012; PMCID: PMC6866863; DOI: 10.1002/hbm.23558.
Abstract
Healthy aging is associated with a decline in cognitive, executive, and motor processes that are concomitant with changes in brain activation patterns, particularly at high complexity levels. While speech production relies on all these processes, and is known to decline with age, the mechanisms that underlie these changes remain poorly understood, despite the importance of communication in everyday life. In this cross-sectional group study, we investigated age differences in the neuromotor control of speech production by combining behavioral and functional magnetic resonance imaging (fMRI) data. Twenty-seven healthy adults underwent fMRI while performing a speech production task consisting of the articulation of nonwords of different sequential and motor complexity. Results demonstrate strong age differences in movement time (MT), with longer and more variable MT in older adults. The fMRI results revealed extensive age differences in the relationship between BOLD signal and MT, within and outside the sensorimotor system. Moreover, age differences were also found in relation to sequential complexity within the motor and attentional systems, reflecting both compensatory and de-differentiation mechanisms. At very high complexity levels (high motor complexity and high sequence complexity), age differences were found in both MT data and BOLD response, which increased in several sensorimotor and executive control areas. Together, these results suggest that aging of motor and executive control mechanisms may contribute to age differences in speech production. These findings highlight the importance of studying functionally relevant behavior such as speech to understand the mechanisms of human brain aging.
Affiliation(s)
- Pascale Tremblay
- Departement de Readaptation, Faculté de Medecine, Université Laval, Quebec City, Quebec, Canada; Centre de Recherche de l'Institut Universitaire en Sante Mentale de Québec, Quebec City, Quebec, Canada
- Marc Sato
- Laboratoire Parole & Langage, Université Aix-Marseille, CNRS, Aix-en-Provence, France
- Isabelle Deschamps
- Departement de Readaptation, Faculté de Medecine, Université Laval, Quebec City, Quebec, Canada; Centre de Recherche de l'Institut Universitaire en Sante Mentale de Québec, Quebec City, Quebec, Canada
16
Mody M, Shui AM, Nowinski LA, Golas SB, Ferrone C, O'Rourke JA, McDougle CJ. Communication Deficits and the Motor System: Exploring Patterns of Associations in Autism Spectrum Disorder (ASD). J Autism Dev Disord 2016; 47:155-162. DOI: 10.1007/s10803-016-2934-y.
17
Manca AD, Grimaldi M. Vowels and Consonants in the Brain: Evidence from Magnetoencephalographic Studies on the N1m in Normal-Hearing Listeners. Front Psychol 2016; 7:1413. PMID: 27713712; PMCID: PMC5031792; DOI: 10.3389/fpsyg.2016.01413.
Abstract
Speech sound perception is one of the most fascinating tasks performed by the human brain. It involves a mapping from continuous acoustic waveforms onto the discrete phonological units computed to store words in the mental lexicon. In this article, we review the magnetoencephalographic studies that have explored the timing and morphology of the N1m component to investigate how vowels and consonants are computed and represented within the auditory cortex. The neurons involved in the N1m act to construct a sensory memory of the stimulus through spatially and temporally distributed activation patterns within the auditory cortex. Indeed, localization of auditory field maps in animals and humans has suggested two levels of sound coding: a tonotopy dimension for spectral properties and a tonochrony dimension for temporal properties of sounds. When the stimulus is a complex speech sound, tonotopy and tonochrony data may give important information to assess whether speech sound parsing and decoding are generated by pure bottom-up reflection of acoustic differences or whether they are additionally affected by top-down processes related to phonological categories. Hints supporting pure bottom-up processing coexist with hints supporting top-down abstract phoneme representation. At present, N1m data (amplitude, latency, source generators, and hemispheric distribution) are limited and do not help to disentangle the issue; the nature of these limitations is discussed. Moreover, neurophysiological studies on animals and neuroimaging studies on humans have been taken into consideration. We also compare the N1m findings with investigations of the magnetic mismatch negativity (MMNm) component and with the analogous electrical components, the N1 and the MMN. We conclude that the N1 seems more sensitive than the N1m for capturing lateralization and hierarchical processes, although the data are very preliminary. Finally, we suggest that MEG data should be integrated with EEG data in the light of the neural oscillations framework, and we propose some concerns that should be addressed by future investigations if we want to align language research closely with issues at the core of functional brain mechanisms.
Affiliation(s)
- Anna Dora Manca
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
- Mirko Grimaldi
- Dipartimento di Studi Umanistici, Centro di Ricerca Interdisciplinare sul Linguaggio, University of Salento, Lecce, Italy; Laboratorio Diffuso di Ricerca Interdisciplinare Applicata alla Medicina, Lecce, Italy
18
Schomers MR, Pulvermüller F. Is the Sensorimotor Cortex Relevant for Speech Perception and Understanding? An Integrative Review. Front Hum Neurosci 2016; 10:435. PMID: 27708566; PMCID: PMC5030253; DOI: 10.3389/fnhum.2016.00435.
Abstract
In the neuroscience of language, phonemes are frequently described as multimodal units whose neuronal representations are distributed across perisylvian cortical regions, including auditory and sensorimotor areas. A different position views phonemes primarily as acoustic entities with posterior temporal localization, which are functionally independent from frontoparietal articulatory programs. To address this current controversy, we here discuss experimental results from functional magnetic resonance imaging (fMRI) as well as transcranial magnetic stimulation (TMS) studies. At first glance, a mixed picture emerges, with earlier research documenting neurofunctional distinctions between phonemes in both temporal and frontoparietal sensorimotor systems, but some recent work seemingly failing to replicate the latter. Detailed analysis of methodological differences between studies reveals that the way experiments are set up explains whether sensorimotor cortex maps phonological information during speech perception or not. In particular, acoustic noise during the experiment and ‘motor noise’ caused by button press tasks work against the frontoparietal manifestation of phonemes. We highlight recent studies using sparse imaging and passive speech perception tasks along with multivariate pattern analysis (MVPA) and especially representational similarity analysis (RSA), which succeeded in separating acoustic-phonological from general-acoustic processes and in mapping specific phonological information on temporal and frontoparietal regions. The question about a causal role of sensorimotor cortex on speech perception and understanding is addressed by reviewing recent TMS studies. We conclude that frontoparietal cortices, including ventral motor and somatosensory areas, reflect phonological information during speech perception and exert a causal influence on language understanding.
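Since representational similarity analysis (RSA) is central to the studies highlighted above, a small illustrative sketch may help: representational dissimilarity matrices (RDMs) are built from condition-wise voxel patterns in two regions and compared with a rank correlation. The data, region labels, and dimensions below are simulated assumptions, not results from any of the reviewed studies.

```python
# Toy RSA: compare RDMs from two simulated regions of interest.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(7)
n_phonemes, n_voxels = 8, 120
temporal_patterns = rng.normal(size=(n_phonemes, n_voxels))   # e.g., temporal-lobe patterns
motor_patterns = temporal_patterns + rng.normal(scale=2.0, size=(n_phonemes, n_voxels))

rdm_temporal = pdist(temporal_patterns, metric="correlation")  # condition-by-condition dissimilarity
rdm_motor = pdist(motor_patterns, metric="correlation")

rho, p = spearmanr(rdm_temporal, rdm_motor)
print(f"RDM similarity between regions: rho = {rho:.2f}, p = {p:.3f}")
```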
Affiliation(s)
- Malte R Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin, Germany; Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin, Germany
19
Fuertinger S, Simonyan K. Stability of Network Communities as a Function of Task Complexity. J Cogn Neurosci 2016; 28:2030-2043. PMID: 27575646; DOI: 10.1162/jocn_a_01026.
Abstract
The analysis of the community architecture in functional brain networks has revealed important relations between specific behavioral patterns and characteristic features of the associated functional organization. Numerous studies have assessed changes in functional communities during different states of awareness, learning, information processing, and various behavioral patterns. The robustness of detected communities within a network has been an often-discussed topic in complex systems research. However, our knowledge regarding the intersubject stability of functional communities in the human brain while performing different tasks is still lacking. In this study, we examined the variability of functional communities in weighted undirected graphs based on fMRI recordings of healthy participants across three conditions: the resting state, syllable production as a simple vocal motor task, and meaningful speech production representing a complex behavioral pattern with cognitive involvement. On the basis of the constructed empirical networks, we simulated a large cohort of artificial graphs and performed a leave-one-out stability analysis to assess the sensitivity of communities in the group-averaged networks with respect to perturbations in the averaging cohort. We found that the stability of partitions derived from group-averaged networks depended on task complexity. The determined community architecture in mean networks reflected within-behavior network stability and between-behavior flexibility of the human functional connectome. The sensitivity of functional communities increased from rest to syllable production to speaking, which suggests that the approximation quality of the community structure in the average network to reflect individual per-participant partitions depends on task complexity.
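A toy version of the leave-one-out stability analysis described above might look like the following: communities are detected in the group-averaged weighted graph, re-detected after leaving out each subject, and the resulting partitions are compared. The community algorithm (greedy modularity) and the similarity index (adjusted Rand index) are illustrative choices for the sketch, not necessarily those used by the authors, and the connectivity matrices are simulated.

```python
# Leave-one-out stability of community partitions in group-averaged weighted graphs (toy).
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(5)
n_sub, n_nodes = 20, 30
subject_mats = np.abs(rng.normal(0.3, 0.1, size=(n_sub, n_nodes, n_nodes)))
subject_mats = (subject_mats + subject_mats.transpose(0, 2, 1)) / 2  # symmetric weights

def partition_labels(mat):
    m = mat.copy()
    np.fill_diagonal(m, 0)                       # drop self-connections
    G = nx.from_numpy_array(m)
    communities = greedy_modularity_communities(G, weight="weight")
    labels = np.empty(m.shape[0], dtype=int)
    for k, com in enumerate(communities):
        labels[list(com)] = k
    return labels

full_labels = partition_labels(subject_mats.mean(axis=0))
stability = [
    adjusted_rand_score(full_labels,
                        partition_labels(np.delete(subject_mats, i, axis=0).mean(axis=0)))
    for i in range(n_sub)
]
print(f"mean leave-one-out partition similarity (ARI): {np.mean(stability):.2f}")
```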
20
Almeida D, Poeppel D, Corina D. The Processing of Biologically Plausible and Implausible Forms in American Sign Language: Evidence for Perceptual Tuning. Language, Cognition and Neuroscience 2015; 31:361-374. PMID: 27135041; PMCID: PMC4849140; DOI: 10.1080/23273798.2015.1100315.
Abstract
The human auditory system distinguishes speech-like information from general auditory signals in a remarkably fast and efficient way. Combining psychophysics and neurophysiology (MEG), we demonstrate a similar result for the processing of visual information used for language communication in users of sign languages. We demonstrate that the earliest visual cortical responses in deaf signers viewing American Sign Language (ASL) signs show specific modulations to violations of anatomic constraints that would make the sign either possible or impossible to articulate. These neural data are accompanied by a significantly increased perceptual sensitivity to the anatomical incongruity. The differential effects in the early visual evoked potentials arguably reflect an expectation-driven assessment of somatic representational integrity, suggesting that language experience and/or auditory deprivation may shape the neuronal mechanisms underlying the analysis of complex human form. The data demonstrate that the perceptual tuning that underlies the discrimination of language and non-language information is not limited to spoken languages but extends to languages expressed in the visual modality.
Affiliation(s)
- Diogo Almeida
- Division of Sciences, Psychology program, New York University – Abu Dhabi, Abu Dhabi, UAE
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA; Department of Neuroscience, Max-Planck-Institute (MPIEA), Frankfurt, Germany
- David Corina
- Department of Linguistics and the Center for Mind and Brain, University of California, Davis, CA, USA
21
Abstract
In the past few years, several studies have been directed to understanding the complexity of functional interactions between different brain regions during various human behaviors. Among these, neuroimaging research installed the notion that speech and language require an orchestration of brain regions for comprehension, planning, and integration of a heard sound with a spoken word. However, these studies have been largely limited to mapping the neural correlates of separate speech elements and examining distinct cortical or subcortical circuits involved in different aspects of speech control. As a result, the complexity of the brain network machinery controlling speech and language remained largely unknown. Using graph theoretical analysis of functional MRI (fMRI) data in healthy subjects, we quantified the large-scale speech network topology by constructing functional brain networks of increasing hierarchy from the resting state to motor output of meaningless syllables to complex production of real-life speech as well as compared to non-speech-related sequential finger tapping and pure tone discrimination networks. We identified a segregated network of highly connected local neural communities (hubs) in the primary sensorimotor and parietal regions, which formed a commonly shared core hub network across the examined conditions, with the left area 4p playing an important role in speech network organization. These sensorimotor core hubs exhibited features of flexible hubs based on their participation in several functional domains across different networks and ability to adaptively switch long-range functional connectivity depending on task content, resulting in a distinct community structure of each examined network. Specifically, compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively forged the formation of the functional speech connectome. In addition, the observed capacity of the primary sensorimotor cortex to exhibit operational heterogeneity challenged the established concept of unimodality of this region.
This study uses graph theory to analyze functional MRI data recorded from speakers as they produce single syllables or whole sentences, revealing the complexity of the brain network machinery that controls speech and language. Speech production is a complex process that requires the orchestration of multiple brain regions. However, our current understanding of the large-scale neural architecture during speaking remains scant, as research has mostly focused on examining distinct brain circuits involved in distinct aspects of speech control. Here, we performed graph theoretical analyses of functional MRI data acquired from healthy subjects in order to reveal how brain regions relate to one another while speaking. We constructed functional brain networks of increasing hierarchy from rest to simple vocal motor output to the production of real-life speech, and compared these to nonspeech control tasks such as finger tapping and pure tone discrimination. We discovered a specialized network of densely connected sensorimotor regions, which formed a common processing core across all conditions. Specifically, the primary sensorimotor cortex participated in multiple functional domains across different networks and modulated long-range connections depending on task content, which challenges the established concept of low-order unimodal function of this region. Compared to other tasks, speech production was characterized by the formation of six distinct neural communities with specialized recruitment of the prefrontal cortex, insula, putamen, and thalamus, which collectively formed the functional speech connectome.
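One common way to quantify the hub behaviour described above is the participation coefficient, P_i = 1 - Σ_s (k_is / k_i)^2, which indexes how evenly a node's connection strength is spread across communities. The sketch below computes it on a simulated weighted connectivity matrix with an arbitrary partition; it illustrates the metric only and is not the authors' analysis.

```python
# Node strength and participation coefficient on a toy weighted network.
import numpy as np

rng = np.random.default_rng(6)
n_nodes = 40
W = np.abs(rng.normal(0.3, 0.1, size=(n_nodes, n_nodes)))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)
partition = rng.integers(0, 4, n_nodes)        # community label of each node (toy)

strength = W.sum(axis=1)                       # weighted degree of each node
participation = np.ones(n_nodes)
for s in np.unique(partition):
    k_is = W[:, partition == s].sum(axis=1)    # strength of links into community s
    participation -= (k_is / strength) ** 2

hubs = np.argsort(strength)[-5:]               # strongest nodes as candidate hubs
print("candidate hub nodes:", hubs, "participation:", participation[hubs].round(2))
```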
Affiliation(s)
- Stefan Fuertinger
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
- Barry Horwitz
- Brain Imaging and Modeling Section, National Institute on Deafness and Other Communication Disorders, National Institutes of Health, Bethesda, Maryland, United States of America
- Kristina Simonyan
- Department of Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America; Department of Otolaryngology, Icahn School of Medicine at Mount Sinai, New York, New York, United States of America
22
Skipper JI. Echoes of the spoken past: how auditory cortex hears context during speech perception. Philos Trans R Soc Lond B Biol Sci 2015; 369:20130297. PMID: 25092665; PMCID: PMC4123676; DOI: 10.1098/rstb.2013.0297.
Abstract
What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listened to meaningful speech compared to less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism where speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. By this model, more AC activity for less meaningful sounds occurs because predictions are less successful from context, requiring further hypotheses be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we ‘hear’ during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.
Affiliation(s)
- Jeremy I Skipper
- Department of Cognitive, Perceptual and Brain Sciences, Institute for Multimodal Communication, University College London, London, WC1H 0AP, UK
23
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. PMID: 28928931; PMCID: PMC5600004; DOI: 10.12688/f1000research.6175.1.
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobule (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and audio-visual integration. I propose that the primary role of the ADS in monkeys/apes is the perception and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Perception of contact calls occurs by the ADS detecting a voice, localizing it, and verifying that the corresponding face is out of sight. The auditory cortex then projects to parieto-frontal visuospatial regions (visual dorsal stream) for searching the caller, and via a series of frontal lobe-brainstem connections, a contact call is produced in return. Because the human ADS processes also speech production and repetition, I further describe a course for the development of speech in humans. I propose that, due to duplication of a parietal region and its frontal projections, and strengthening of direct frontal-brainstem connections, the ADS converted auditory input directly to vocal regions in the frontal lobe, which endowed early Hominans with partial vocal control. This enabled offspring to modify their contact calls with intonations for signaling different distress levels to their mother. Vocal control could then enable question-answer conversations, by offspring emitting a low-level distress call for inquiring about the safety of objects, and mothers responding with high- or low-level distress calls. Gradually, the ADS and the direct frontal-brainstem connections became more robust and vocal control became more volitional. Eventually, individuals were capable of inventing new words and offspring were capable of inquiring about objects in their environment and learning their names via mimicry.
24
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. PMID: 28928931; PMCID: PMC5600004; DOI: 10.12688/f1000research.6175.3.
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
Collapse
|
25
|
Poliva O. From where to what: a neuroanatomically based evolutionary model of the emergence of speech in humans. F1000Res 2015; 4:67. [PMID: 28928931 PMCID: PMC5600004.2 DOI: 10.12688/f1000research.6175.2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 01/12/2016] [Indexed: 03/28/2024] Open
Abstract
In the brain of primates, the auditory cortex connects with the frontal lobe via the temporal pole (auditory ventral stream; AVS) and via the inferior parietal lobe (auditory dorsal stream; ADS). The AVS is responsible for sound recognition, and the ADS for sound-localization, voice detection and integration of calls with faces. I propose that the primary role of the ADS in non-human primates is the detection and response to contact calls. These calls are exchanged between tribe members (e.g., mother-offspring) and are used for monitoring location. Detection of contact calls occurs by the ADS identifying a voice, localizing it, and verifying that the corresponding face is out of sight. Once a contact call is detected, the primate produces a contact call in return via descending connections from the frontal lobe to a network of limbic and brainstem regions. Because the ADS of present day humans also performs speech production, I further propose an evolutionary course for the transition from contact call exchange to an early form of speech. In accordance with this model, structural changes to the ADS endowed early members of the genus Homo with partial vocal control. This development was beneficial as it enabled offspring to modify their contact calls with intonations for signaling high or low levels of distress to their mother. Eventually, individuals were capable of participating in yes-no question-answer conversations. In these conversations the offspring emitted a low-level distress call for inquiring about the safety of objects (e.g., food), and his/her mother responded with a high- or low-level distress call to signal approval or disapproval of the interaction. Gradually, the ADS and its connections with brainstem motor regions became more robust and vocal control became more volitional. Speech emerged once vocal control was sufficient for inventing novel calls.
Collapse
|
26
|
Bornkessel-Schlesewsky I, Schlesewsky M, Small SL, Rauschecker JP. Neurobiological roots of language in primate audition: common computational properties. Trends Cogn Sci 2015; 19:142-50. [PMID: 25600585 PMCID: PMC4348204 DOI: 10.1016/j.tics.2014.12.008] [Citation(s) in RCA: 125] [Impact Index Per Article: 13.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2014] [Revised: 12/06/2014] [Accepted: 12/12/2014] [Indexed: 11/26/2022]
Abstract
Here, we present a new perspective on an old question: how does the neurobiology of human language relate to brain systems in nonhuman primates? We argue that higher-order language combinatorics, including sentence and discourse processing, can be situated in a unified, cross-species dorsal-ventral streams architecture for higher auditory processing, and that the functions of the dorsal and ventral streams in higher-order language processing can be grounded in their respective computational properties in primate audition. This view challenges an assumption, common in the cognitive sciences, that a nonhuman primate model forms an inherently inadequate basis for modeling higher-level language functions.
Collapse
Affiliation(s)
- Ina Bornkessel-Schlesewsky
- Cognitive Neuroscience Laboratory, School of Psychology, Social Work and Social Policy, University of South Australia, Adelaide, SA, Australia; Department of Germanic Linguistics, University of Marburg, Marburg, Germany.
| | - Matthias Schlesewsky
- Department of English and Linguistics, Johannes Gutenberg-University, Mainz, Germany
| | - Steven L Small
- Department of Neurology, University of California, Irvine, CA, USA
| | - Josef P Rauschecker
- Laboratory of Integrative Neuroscience and Cognition, Department of Neuroscience, Georgetown University Medical Center, Washington DC, USA; Institute for Advanced Study, Technische Universität München, Garching, Germany
| |
Collapse
|
27
|
Archila-Suerte P, Zevin J, Hernandez AE. The effect of age of acquisition, socioeducational status, and proficiency on the neural processing of second language speech sounds. BRAIN AND LANGUAGE 2015; 141:35-49. [PMID: 25528287 PMCID: PMC5956909 DOI: 10.1016/j.bandl.2014.11.005] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/18/2013] [Revised: 11/06/2014] [Accepted: 11/09/2014] [Indexed: 06/02/2023]
Abstract
This study investigates the roles of age of acquisition (AoA), socioeducational status (SES), and second language (L2) proficiency in the neural processing of L2 speech sounds. In a task of pre-attentive listening and passive viewing, Spanish-English bilinguals and a control group of English monolinguals listened to English syllables while watching a film of natural scenery. Eight regions of interest were selected from brain areas involved in speech perception and executive processes. The regions of interest were examined in two separate two-way ANOVAs (AoA × SES; AoA × L2 proficiency). The results showed that AoA was the main variable affecting the neural response in L2 speech processing. Direct comparisons between AoA groups of equivalent SES and proficiency level enhanced the intensity and magnitude of the results. These results suggest that AoA, more than SES and proficiency level, determines which brain regions are recruited for the processing of second language speech sounds.
Collapse
Affiliation(s)
| | - Jason Zevin
- Sackler Institute for Developmental Psychobiology, Weill Medical College of Cornell University, 1300 York Ave., Box 140, NY, NY 10065, United States.
| | | |
Collapse
|
28
|
Otani VHO, Shiozawa P, Cordeiro Q, Uchida RR. A systematic review and meta-analysis of the use of repetitive transcranial magnetic stimulation for auditory hallucinations treatment in refractory schizophrenic patients. Int J Psychiatry Clin Pract 2015; 19:228-32. [PMID: 25356661 DOI: 10.3109/13651501.2014.980830] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
BACKGROUND The use of repetitive transcranial magnetic stimulation (rTMS) remains a promising therapeutic tool in the treatment of schizophrenia, although studies targeting symptoms such as auditory hallucinations (AH) have yielded contradictory results. Here we present an up-to-date systematic review and meta-analysis of rTMS in the treatment of AH in schizophrenia. METHODS We searched Pubmed-MEDLINE from 1999 to 2013 for double-blinded randomized sham-controlled trials that applied slow rTMS to the left temporoparietal cortex and assessed outcomes using the Hallucination Change Scale, the Auditory Hallucination Rating Scale, or the Scale for Auditory Hallucinations (SAH). We identified 10 studies suitable for the meta-analysis. RESULTS We found a positive effect size in favor of rTMS [random-effects model Hedges' g = 0.011, I-squared = 58.1%]. There was some variability between study effect sizes, but the sensitivity analysis concluded that none of them had sufficient weight to singularly alter the results of our meta-analysis. DISCUSSION rTMS appears to be an effective treatment for AH. The left temporoparietal cortex seems to be the area in which rTMS is effective. Although meta-analysis is a powerful analytical tool, more studies must be conducted in order to obtain a larger sample size and permit a more accurate analytical approach.
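The random-effects pooling named in this abstract can be illustrated with a short numerical sketch. The Python snippet below computes a DerSimonian-Laird pooled Hedges' g, its 95% CI and the I-squared heterogeneity index from invented per-study values; it is a generic illustration of the statistics mentioned, not the authors' analysis code.

```python
import numpy as np

def random_effects_pool(g, se):
    """Pool study-level Hedges' g values with DerSimonian-Laird tau^2."""
    g, se = np.asarray(g, float), np.asarray(se, float)
    w_fixed = 1.0 / se**2                        # inverse-variance (fixed-effect) weights
    g_fixed = np.sum(w_fixed * g) / np.sum(w_fixed)
    q = np.sum(w_fixed * (g - g_fixed) ** 2)     # Cochran's Q heterogeneity statistic
    df = len(g) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                # between-study variance
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0
    w = 1.0 / (se**2 + tau2)                     # random-effects weights
    g_pooled = np.sum(w * g) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (g_pooled - 1.96 * se_pooled, g_pooled + 1.96 * se_pooled)
    return g_pooled, ci, i2

# Invented per-study effect sizes and standard errors, for illustration only.
g_pooled, ci, i2 = random_effects_pool([0.6, 0.2, 0.9, 0.4], [0.25, 0.30, 0.35, 0.20])
print(f"pooled g = {g_pooled:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f}), I^2 = {i2:.0f}%")
```

The same machinery underlies the other rTMS meta-analyses listed further down; only the study-level inputs differ.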
Collapse
Affiliation(s)
- Victor Henrique Oyamada Otani
- a Centro de Atenção Integrada em Saúde Mental, Faculdade de Ciências Médicas da Santa Casa de Misericórdia de São Paulo, Psychiatry , São Paulo , Brazil
| | - Pedro Shiozawa
- a Centro de Atenção Integrada em Saúde Mental, Faculdade de Ciências Médicas da Santa Casa de Misericórdia de São Paulo, Psychiatry , São Paulo , Brazil
| | - Quirino Cordeiro
- a Centro de Atenção Integrada em Saúde Mental, Faculdade de Ciências Médicas da Santa Casa de Misericórdia de São Paulo, Psychiatry , São Paulo , Brazil
| | - Ricardo Ryoiti Uchida
- a Centro de Atenção Integrada em Saúde Mental, Faculdade de Ciências Médicas da Santa Casa de Misericórdia de São Paulo, Psychiatry , São Paulo , Brazil
| |
Collapse
|
29
|
Oh A, Duerden EG, Pang EW. The role of the insula in speech and language processing. BRAIN AND LANGUAGE 2014; 135:96-103. [PMID: 25016092 PMCID: PMC4885738 DOI: 10.1016/j.bandl.2014.06.003] [Citation(s) in RCA: 130] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/29/2013] [Revised: 01/24/2014] [Accepted: 06/15/2014] [Indexed: 05/13/2023]
Abstract
Lesion and neuroimaging studies indicate that the insula mediates motor aspects of speech production, specifically, articulatory control. Although it has direct connections to Broca's area, the canonical speech production region, the insula is also broadly connected with other speech and language centres, and may play a role in coordinating higher-order cognitive aspects of speech and language production. The extent of the insula's involvement in speech and language processing was assessed using the Activation Likelihood Estimation (ALE) method. Meta-analyses of 42 fMRI studies with healthy adults were performed, comparing insula activation during performance of language (expressive and receptive) and speech (production and perception) tasks. Both tasks activated bilateral anterior insulae. However, speech perception tasks preferentially activated the left dorsal mid-insula, whereas expressive language tasks activated left ventral mid-insula. Results suggest distinct regions of the mid-insula play different roles in speech and language processing.
Collapse
Affiliation(s)
- Anna Oh
- Neurosciences and Mental Health, SickKids Research Institute, Toronto, Canada
| | - Emma G Duerden
- Neurosciences and Mental Health, SickKids Research Institute, Toronto, Canada; Diagnostic Imaging, Hospital for Sick Children, Toronto, Canada; Department of Paediatrics, University of Toronto, Toronto, Canada
| | - Elizabeth W Pang
- Neurosciences and Mental Health, SickKids Research Institute, Toronto, Canada; Neurology, Hospital for Sick Children, Toronto, Canada; Department of Paediatrics, University of Toronto, Toronto, Canada.
| |
Collapse
|
30
|
Review of the efficacy of transcranial magnetic stimulation for auditory verbal hallucinations. Biol Psychiatry 2014; 76:101-10. [PMID: 24315551 DOI: 10.1016/j.biopsych.2013.09.038] [Citation(s) in RCA: 99] [Impact Index Per Article: 9.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/04/2013] [Revised: 09/05/2013] [Accepted: 09/26/2013] [Indexed: 12/19/2022]
Abstract
With an increase of the number of studies exploring repetitive transcranial magnetic stimulation (rTMS) for the treatment of auditory verbal hallucinations (AVH), an update is provided on the efficacy of different paradigms. A literature search was performed from 1966 through April 2013. Twenty-five randomized controlled trials using the severity of AVH or psychosis as outcome measures were included. Standardized mean weighted effect sizes were computed; a qualitative review of the literature was performed to assess the effects of various rTMS paradigms. rTMS versus sham treatment for AVH yielded a mean weighted effect size of .44. No significant mean weighted effect size was found for the severity of psychosis (i.e., .21). For patients with medication-resistant AVH, the mean weighted effect size was .45. rTMS applied at the left temporoparietal area with a frequency of 1 Hz yielded a moderate mean weighted effect size of .63, indicating superiority of this paradigm. Various other paradigms failed to show superior effects. rTMS applied at the right temporoparietal area was not superior to sham treatment. rTMS, especially when applied at the left temporoparietal area with a frequency of 1 Hz, is effective for the treatment of AVH, including in patients with medication-resistant AVH. The results for other rTMS paradigms are disappointing thus far. A next step should be to explore the effects of rTMS in medication-free individuals, for example, during the initial phases of psychosis, and in patients with diagnoses other than schizophrenia who do not have comorbid psychotic symptoms.
Collapse
|
31
|
Abstract
Historic theories of speech perception (Motor Theory and Analysis by Synthesis) invoked listeners' knowledge of speech production to explain speech perception. Neuroimaging data show that adult listeners activate motor brain areas during speech perception. In two experiments using magnetoencephalography (MEG), we investigated motor brain activation, as well as auditory brain activation, during discrimination of native and nonnative syllables in infants at two ages that straddle the developmental transition from language-universal to language-specific speech perception. Adults are also tested in Exp. 1. MEG data revealed that 7-mo-old infants activate auditory (superior temporal) as well as motor brain areas (Broca's area, cerebellum) in response to speech, and equivalently for native and nonnative syllables. However, in 11- and 12-mo-old infants, native speech activates auditory brain areas to a greater degree than nonnative, whereas nonnative speech activates motor brain areas to a greater degree than native speech. This double dissociation in 11- to 12-mo-old infants matches the pattern of results obtained in adult listeners. Our infant data are consistent with Analysis by Synthesis: auditory analysis of speech is coupled with synthesis of the motor plans necessary to produce the speech signal. The findings have implications for: (i) perception-action theories of speech perception, (ii) the impact of "motherese" on early language learning, and (iii) the "social-gating" hypothesis and humans' development of social understanding.
Collapse
|
32
|
Deschamps I, Tremblay P. Sequencing at the syllabic and supra-syllabic levels during speech perception: an fMRI study. Front Hum Neurosci 2014; 8:492. [PMID: 25071521 PMCID: PMC4086203 DOI: 10.3389/fnhum.2014.00492] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2014] [Accepted: 06/17/2014] [Indexed: 11/13/2022] Open
Abstract
The processing of fluent speech involves complex computational steps that begin with the segmentation of the continuous flow of speech sounds into syllables and words. One question that naturally arises pertains to the type of syllabic information that speech processes act upon. Here, we used functional magnetic resonance imaging to identify regions sensitive to syllabic information during speech perception, using a combination of whole-brain and exploratory anatomical region-of-interest (ROI) approaches, by parametrically manipulating syllabic complexity along two dimensions: (1) individual syllable complexity, and (2) sequence complexity (supra-syllabic). We manipulated the complexity of the syllable by using the simplest syllable template, a consonant and vowel (CV), and inserting an additional consonant to create a complex onset (CCV). The supra-syllabic complexity was manipulated by creating sequences composed of the same syllable repeated six times (e.g., /pa-pa-pa-pa-pa-pa/) and sequences of three different syllables each repeated twice (e.g., /pa-ta-ka-pa-ta-ka/). This parametric design allowed us to identify brain regions sensitive to (1) syllabic complexity independent of supra-syllabic complexity, (2) supra-syllabic complexity independent of syllabic complexity, and (3) both syllabic and supra-syllabic complexity. High-resolution scans were acquired for 15 healthy adults. An exploratory anatomical ROI analysis of the supratemporal plane (STP) identified bilateral regions within the anterior two-thirds of the planum temporale, the primary auditory cortices as well as the anterior two-thirds of the superior temporal gyrus that showed different patterns of sensitivity to syllabic and supra-syllabic information. These findings demonstrate that during passive listening to syllable sequences, sublexical information is processed automatically, and sensitivity to syllabic and supra-syllabic information is localized almost exclusively within the STP.
Collapse
Affiliation(s)
- Isabelle Deschamps
- Département de Réadaptation, Université Laval Québec City, QC, Canada ; Centre de recherche de l'Institut universitaire en santé mentale de Québec Québec City, QC, Canada
| | - Pascale Tremblay
- Département de Réadaptation, Université Laval Québec City, QC, Canada ; Centre de recherche de l'Institut universitaire en santé mentale de Québec Québec City, QC, Canada
| |
Collapse
|
33
|
Avivi-Reich M, Daneman M, Schneider BA. How age and linguistic competence alter the interplay of perceptual and cognitive factors when listening to conversations in a noisy environment. Front Syst Neurosci 2014; 8:21. [PMID: 24578684 PMCID: PMC3933794 DOI: 10.3389/fnsys.2014.00021] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2013] [Accepted: 01/27/2014] [Indexed: 11/23/2022] Open
Abstract
Multi-talker conversations challenge the perceptual and cognitive capabilities of older adults and those listening in their second language (L2). In older adults these difficulties could reflect declines in the auditory, cognitive, or linguistic processes supporting speech comprehension. The tendency of L2 listeners to invoke some of the semantic and syntactic processes from their first language (L1) may interfere with speech comprehension in L2. These challenges might also force them to reorganize the ways in which they perceive and process speech, thereby altering the balance between the contributions of bottom-up vs. top-down processes to speech comprehension. Younger and older L1 listeners, as well as young L2 listeners, heard conversations played against a babble background, with or without spatial separation between the talkers and masker, and with the spatial positions of the stimuli specified either by loudspeaker placements (real location) or through use of the precedence effect (virtual location). After listening to a conversation, the participants were asked to answer questions regarding its content. Individual hearing differences were compensated for by creating the same degree of difficulty in identifying individual words in babble. Once compensation was applied, the number of questions correctly answered increased when a real or virtual spatial separation was introduced between babble and talkers. There was no evidence that performance differed between real and virtual locations. The contribution of vocabulary knowledge to dialog comprehension was found to be larger in the virtual conditions than in the real ones, whereas the contribution of reading comprehension skill did not depend on the listening environment but rather differed as a function of age and language proficiency. The results indicate that the acoustic scene and the cognitive and linguistic competencies of listeners modulate how and when top-down resources are engaged in aid of speech comprehension.
Collapse
Affiliation(s)
- Meital Avivi-Reich
- Human Communication Laboratory, Department of Psychology, University of Toronto Mississauga Mississauga, ON, Canada
| | - Meredyth Daneman
- Human Communication Laboratory, Department of Psychology, University of Toronto Mississauga Mississauga, ON, Canada
| | - Bruce A Schneider
- Human Communication Laboratory, Department of Psychology, University of Toronto Mississauga Mississauga, ON, Canada
| |
Collapse
|
34
|
Baart M, Stekelenburg JJ, Vroomen J. Electrophysiological evidence for speech-specific audiovisual integration. Neuropsychologia 2014; 53:115-21. [DOI: 10.1016/j.neuropsychologia.2013.11.011] [Citation(s) in RCA: 69] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2013] [Revised: 11/07/2013] [Accepted: 11/19/2013] [Indexed: 11/26/2022]
|
35
|
Transcranial magnetic stimulation for the treatment of pharmacoresistant nondelusional auditory verbal hallucinations in dementia. Case Rep Psychiatry 2013; 2013:930304. [PMID: 24198993 PMCID: PMC3808098 DOI: 10.1155/2013/930304] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2013] [Accepted: 09/15/2013] [Indexed: 11/17/2022] Open
Abstract
Auditory verbal hallucinations (AVHs) are known as a core symptom of schizophrenia, but also occur in a number of other conditions, not least in neurodegenerative disorders such as dementia. In recent decades, transcranial magnetic stimulation (TMS) has emerged as a valuable therapeutic approach to several neurological and psychiatric diseases, including AVHs. Herein we report the case of a seventy-six-year-old woman with vascular-degenerative brain disease, complaining of threatening AVHs. The patient was treated with a high-frequency temporoparietal (T3P3) rTMS protocol for fifteen days. A considerable reduction in the frequency of AVHs, and a change in their content (no longer threatening), were observed. Although further research is needed, this seems an encouraging result.
Collapse
|
36
|
Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. [PMID: 23938208 DOI: 10.1016/j.heares.2013.08.001] [Citation(s) in RCA: 99] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/17/2013] [Revised: 07/22/2013] [Accepted: 08/01/2013] [Indexed: 11/28/2022]
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to the active and passive speech and voice processing. We aimed at revealing any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region. Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
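As a rough illustration of the kind of coordinate-based comparison of activation loci described above, the sketch below computes per-condition median coordinates and a permutation test on their anterior-posterior difference. The coordinates are fabricated and the code is not the VAMCA toolbox (a Matlab package); it only mimics the general logic of comparing median loci between conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fabricated MNI-like peak coordinates (x, y, z) for two stimulus conditions.
pitch_xyz = rng.normal([62.0, -18.0, 5.0], 6.0, size=(40, 3))
spatial_xyz = rng.normal([60.0, -32.0, 8.0], 6.0, size=(35, 3))

def median_locus(xyz):
    """Coordinate-wise median of a set of activation peaks."""
    return np.median(xyz, axis=0)

def permutation_test_y(a, b, n_perm=5000):
    """Permutation test on the anterior-posterior (y) difference of median loci."""
    observed = np.median(a[:, 1]) - np.median(b[:, 1])
    pooled = np.concatenate([a[:, 1], b[:, 1]])
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = np.median(perm[:len(a)]) - np.median(perm[len(a):])
        exceed += abs(diff) >= abs(observed)
    return observed, exceed / n_perm

print("pitch median locus:  ", median_locus(pitch_xyz).round(1))
print("spatial median locus:", median_locus(spatial_xyz).round(1))
diff, p = permutation_test_y(pitch_xyz, spatial_xyz)
print(f"anterior-posterior difference = {diff:.1f} mm, permutation p = {p:.3f}")
```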
Collapse
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI 00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI 00014 Helsinki, Finland.
| | | | | | | |
Collapse
|
37
|
Thompson HE, Jefferies E. Semantic control and modality: An input processing deficit in aphasia leading to deregulated semantic cognition in a single modality. Neuropsychologia 2013; 51:1998-2015. [DOI: 10.1016/j.neuropsychologia.2013.06.030] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2013] [Revised: 06/27/2013] [Accepted: 06/29/2013] [Indexed: 10/26/2022]
|
38
|
Mou X, Bai F, Xie C, Shi J, Yao Z, Hao G, Chen N, Zhang Z. Voice recognition and altered connectivity in schizophrenic patients with auditory hallucinations. Prog Neuropsychopharmacol Biol Psychiatry 2013; 44:265-70. [PMID: 23545112 DOI: 10.1016/j.pnpbp.2013.03.006] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/12/2012] [Revised: 03/06/2013] [Accepted: 03/19/2013] [Indexed: 11/17/2022]
Abstract
Auditory verbal hallucinations (AVHs) are a pathological hallmark of schizophrenia; however, their neural basis is unclear. Voice identity is an important phenomenological feature of AVHs. Certain voice identity recognition deficits are specific to schizophrenic patients with AVHs. We tested the hypothesis that, among schizophrenia patients with hallucinations, dysfunctional voice identity recognition is associated with poor functional integration in the neural networks involved in the evaluation of voice identity. Using functional magnetic resonance imaging (fMRI) during a voice recognition task, we examined the modulation of neural network connectivity in 26 schizophrenic patients with or without AVHs, and 13 healthy controls. Our results showed that the schizophrenic patients with AVHs had altered frontotemporal connectivity compared to the schizophrenic patients without AVHs and healthy controls. The latter two groups did not show any differences in functional connectivity. In addition, the strength of frontotemporal connectivity was correlated with the accuracy of voice recognition. These findings provide preliminary evidence that impaired functional integration may contribute to the faulty appraisal of voice identity in schizophrenic patients with AVHs.
Collapse
Affiliation(s)
- Xiaodong Mou
- Medical School of Southeast University, Nanjing 210009, PR China
| | | | | | | | | | | | | | | |
Collapse
|
39
|
Hoffman RE, Wu K, Pittman B, Cahill JD, Hawkins KA, Fernandez T, Hannestad J. Transcranial magnetic stimulation of Wernicke's and Right homologous sites to curtail "voices": a randomized trial. Biol Psychiatry 2013; 73:1008-14. [PMID: 23485015 PMCID: PMC3641174 DOI: 10.1016/j.biopsych.2013.01.016] [Citation(s) in RCA: 57] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/26/2012] [Revised: 12/04/2012] [Accepted: 01/11/2013] [Indexed: 11/28/2022]
Abstract
BACKGROUND Auditory/verbal hallucinations (AVHs) are accompanied by activation in Wernicke's and right homologous regions. Efficacy in curtailing AVHs via 1-Hz repetitive magnetic stimulation (rTMS) targeting a site in each region ("W" and "rW") was therefore studied. METHODS Patients with schizophrenia and AVHs (N = 83) were randomly allocated to double-masked rTMS versus sham stimulation, with blocks of five sessions given to W and rW in random order, followed by five sessions to the site yielding greater improvement. The primary outcome measure was the Hallucination Change Score (HCS). Hallucination frequency, total auditory hallucination rating scale score, and clinical global improvement were secondary outcome measures. Attentional salience of AVHs and neuropsychological measures of laterality were studied as predictors of site-specific response. RESULTS After 15 sessions, rTMS produced significant improvements relative to sham stimulation for hallucination frequency and clinical global improvement but not for HCS. After limiting analyses to patients whose motor threshold was detected consistently: 1) endpoint HCS demonstrated significantly greater improvement for rTMS compared with sham stimulation; 2) for high-salience AVHs, rTMS to rW after the first five sessions yielded significantly improved HCS scores relative to sham stimulation, whereas for low salience AVHs, rTMS to W produced this finding. Nondominant motor impairment correlated positively with hallucination improvement following rW rTMS. CONCLUSIONS One-hertz rTMS per our site-optimization protocol produced some clinical benefit in patients with persistent AVHs as a group, especially when motor threshold was consistently detected. Level of hallucination salience may usefully guide selection of W versus rW as intervention sites.
Collapse
Affiliation(s)
- Ralph E Hoffman
- Department of Psychiatry, Yale University School of Medicine, New Haven, CT, USA.
| | | | | | | | | | | | | |
Collapse
|
40
|
Harinen K, Rinne T. Activations of human auditory cortex to phonemic and nonphonemic vowels during discrimination and memory tasks. Neuroimage 2013; 77:279-87. [PMID: 23567885 DOI: 10.1016/j.neuroimage.2013.03.064] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2012] [Revised: 03/20/2013] [Accepted: 03/23/2013] [Indexed: 12/01/2022] Open
Abstract
We used fMRI to investigate activations within human auditory cortex (AC) to vowels during vowel discrimination, vowel (categorical n-back) memory, and visual tasks. Based on our previous studies, we hypothesized that the vowel discrimination task would be associated with increased activations in the anterior superior temporal gyrus (STG), while the vowel memory task would enhance activations in the posterior STG and inferior parietal lobule (IPL). In particular, we tested the hypothesis that activations in the IPL during vowel memory tasks are associated with categorical processing. Namely, activations due to categorical processing should be higher during tasks performed on nonphonemic (hard to categorize) than on phonemic (easy to categorize) vowels. As expected, we found distinct activation patterns during vowel discrimination and vowel memory tasks. Further, these task-dependent activations were different during tasks performed on phonemic or nonphonemic vowels. However, activations in the IPL associated with the vowel memory task were not stronger during nonphonemic than phonemic vowel blocks. Together these results demonstrate that activations in human AC to vowels depend on both the requirements of the behavioral task and the phonemic status of the vowels.
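The categorical n-back memory task used here can be pictured as a sequence generator plus a simple scoring rule. The snippet below is a hedged sketch that builds a 1-back category sequence with a chosen target rate and scores hits; the categories, sequence length and target rate are placeholders rather than the study's parameters.

```python
import numpy as np

rng = np.random.default_rng(5)
categories = ["a", "e", "i", "o", "u"]      # placeholder vowel categories

def make_nback_sequence(length=30, n=1, target_rate=0.3):
    """Build a category sequence in which ~target_rate of items repeat the item n back."""
    seq = [str(c) for c in rng.choice(categories, size=n)]
    while len(seq) < length:
        if rng.random() < target_rate:
            seq.append(seq[-n])                               # target: same category as n back
        else:
            others = [c for c in categories if c != seq[-n]]
            seq.append(str(rng.choice(others)))               # non-target trial
    return seq

def hit_rate(seq, responses, n=1):
    """responses[i] is True when a 'match' was reported at position i."""
    targets = [i for i in range(n, len(seq)) if seq[i] == seq[i - n]]
    hits = sum(responses[i] for i in targets)
    return hits / len(targets) if targets else float("nan")

sequence = make_nback_sequence()
ideal = [i >= 1 and sequence[i] == sequence[i - 1] for i in range(len(sequence))]
print(" ".join(sequence))
print("hit rate of an ideal responder:", hit_rate(sequence, ideal))
```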
Collapse
Affiliation(s)
- Kirsi Harinen
- Institute of Behavioural Sciences, University of Helsinki, Finland
| | | |
Collapse
|
41
|
Moreno-Torres I, Berthier ML, Mar Cid MD, Green C, Gutiérrez A, García-Casares N, Froudist Walsh S, Nabrozidis A, Sidorova J, Dávila G, Carnero-Pardo C. Foreign accent syndrome: A multimodal evaluation in the search of neuroscience-driven treatments. Neuropsychologia 2013. [DOI: 10.1016/j.neuropsychologia.2012.11.010] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
|
42
|
Robson H, Grube M, Lambon Ralph MA, Griffiths TD, Sage K. Fundamental deficits of auditory perception in Wernicke's aphasia. Cortex 2012; 49:1808-22. [PMID: 23351849 DOI: 10.1016/j.cortex.2012.11.012] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/12/2012] [Revised: 07/30/2012] [Accepted: 11/27/2012] [Indexed: 10/27/2022]
Abstract
OBJECTIVE This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. METHODS We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. RESULTS Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. CONCLUSION These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
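The "criterion-free, adaptive measures of threshold" mentioned in this abstract are typically implemented as transformed up-down staircases. The sketch below runs a generic 2-down/1-up staircase against a simulated listener; the psychometric parameters and FM depths are invented for illustration and are not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_listener(depth_hz, true_threshold=4.0, slope=1.5):
    """Detection probability for a given FM depth (2AFC-style, 50% guessing floor)."""
    p = 1.0 / (1.0 + np.exp(-slope * (depth_hz - true_threshold)))
    return rng.random() < 0.5 + 0.5 * p

def two_down_one_up(start=16.0, factor=2.0, n_reversals=12):
    """Multiplicative 2-down/1-up staircase; returns the geometric mean of late reversals."""
    depth, correct_in_row, direction = start, 0, -1
    reversals = []
    while len(reversals) < n_reversals:
        if simulated_listener(depth):
            correct_in_row += 1
            if correct_in_row == 2:            # two correct in a row -> make the task harder
                correct_in_row = 0
                if direction == +1:
                    reversals.append(depth)    # track was going up: record a reversal
                direction = -1
                depth = max(depth / factor, 0.1)
        else:                                  # one error -> make the task easier
            correct_in_row = 0
            if direction == -1:
                reversals.append(depth)
            direction = +1
            depth *= factor
    return float(np.exp(np.mean(np.log(reversals[-8:]))))

print(f"estimated FM-depth threshold ~ {two_down_one_up():.2f} Hz")
```

A 2-down/1-up rule converges on the roughly 70.7% correct point of the psychometric function, which is one common choice for such criterion-free threshold tracking.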
Collapse
Affiliation(s)
- Holly Robson
- Neuroscience and Aphasia Research Unit, University of Manchester, UK; Psychology and Clinical Language Sciences, University of Reading, UK.
| | | | | | | | | |
Collapse
|
43
|
Slotema CW, Aleman A, Daskalakis ZJ, Sommer IE. Meta-analysis of repetitive transcranial magnetic stimulation in the treatment of auditory verbal hallucinations: update and effects after one month. Schizophr Res 2012; 142:40-5. [PMID: 23031191 DOI: 10.1016/j.schres.2012.08.025] [Citation(s) in RCA: 83] [Impact Index Per Article: 6.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/10/2012] [Revised: 08/20/2012] [Accepted: 08/28/2012] [Indexed: 12/24/2022]
Abstract
OBJECTIVE Several meta-analyses considering repetitive transcranial magnetic stimulation (rTMS) for auditory verbal hallucinations (AVH) have been performed with moderate to high mean weighted effect sizes. Since then, several negative findings have been reported in relatively large samples. The aim of this study was to provide an update of the literature on the efficacy of rTMS for AVH and to investigate the effect of rTMS one month after the end of treatment. DATA SOURCES A literature search was performed from 1966 through August 2012 using the Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, Database of Abstracts of Reviews of Effects, Embase Psychiatry, Ovid Medline, PsycINFO and PubMed. Randomized, double-blind, sham-controlled studies with severity of AVH or severity of psychosis as an outcome measure were included. STUDY SELECTION Data were obtained from 17 randomized studies of rTMS for AVH. Five studies fulfilled the criteria for the meta-analysis on the effect of rTMS one month after the end of treatment. DATA EXTRACTION Standardized mean weighted effect sizes of rTMS versus sham were computed on pre- and posttreatment comparisons. DATA SYNTHESIS The mean weighted effect size of rTMS directed at the left temporoparietal area was 0.44 (95% CI 0.19-0.68). A separate meta-analysis including studies directing rTMS at other brain regions revealed a mean weighted effect size of 0.33 (95% CI 0.17-0.50) in favor of real TMS. The effect of rTMS was no longer significant at one month of follow-up (mean weighted effect size = 0.40, 95% CI -0.23 to 1.02). Side effects were mild and the number of dropouts in the real TMS group was not significantly higher than in the sham group. CONCLUSIONS With the inclusion of studies with larger patient samples, the mean weighted effect size of rTMS directed at the left temporoparietal area for AVH has decreased, although the effect is still significant. The duration of the effect of rTMS may be less than one month. More research is needed in order to optimize parameters and further evaluate the clinical relevance of this intervention.
Collapse
Affiliation(s)
- C W Slotema
- Parnassia Bavo Psychiatric Institute, Lijnbaan 4, 2512 VA The Hague, The Netherlands.
| | | | | | | |
Collapse
|
44
|
Tremblay P, Baroni M, Hasson U. Processing of speech and non-speech sounds in the supratemporal plane: auditory input preference does not predict sensitivity to statistical structure. Neuroimage 2012; 66:318-32. [PMID: 23116815 DOI: 10.1016/j.neuroimage.2012.10.055] [Citation(s) in RCA: 34] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2012] [Revised: 08/27/2012] [Accepted: 10/15/2012] [Indexed: 11/17/2022] Open
Abstract
The supratemporal plane contains several functionally heterogeneous subregions that respond strongly to speech. Much of the prior work on the issue of speech processing in the supratemporal plane has focused on neural responses to single speech vs. non-speech sounds rather than focusing on higher-level computations that are required to process more complex auditory sequences. Here we examined how information is integrated over time for speech and non-speech sounds by quantifying the BOLD fMRI response to stochastic (non-deterministic) sequences of speech and non-speech naturalistic sounds that varied in their statistical structure (from random to highly structured sequences) during passive listening. Behaviorally, the participants were accurate in segmenting speech and non-speech sequences, though they were more accurate for speech. Several supratemporal regions showed increased activation magnitude for speech sequences (preference), but, importantly, this did not predict sensitivity to statistical structure: (i) several areas showing a speech preference were sensitive to statistical structure in both speech and non-speech sequences, and (ii) several regions that responded to both speech and non-speech sounds showed distinct responses to statistical structure in speech and non-speech sequences. While the behavioral findings highlight the tight relation between statistical structure and segmentation processes, the neuroimaging results suggest that the supratemporal plane mediates complex statistical processing for both speech and non-speech sequences and emphasize the importance of studying the neurocomputations associated with auditory sequence processing. These findings identify new partitions of functionally distinct areas in the supratemporal plane that cannot be evoked by single stimuli. The findings demonstrate the importance of going beyond input preference to examine the neural computations implemented in the superior temporal plane.
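One common way to vary the statistical structure of a sound sequence from random to highly ordered, as in the design described above, is to sample tokens from first-order Markov transition matrices of differing entropy. The snippet below is a hedged sketch with invented tokens and transition probabilities, not the authors' stimulus-generation code.

```python
import numpy as np

rng = np.random.default_rng(2)
tokens = ["pa", "ta", "ka", "bi", "du", "go"]   # placeholder sound labels
n = len(tokens)

def markov_sequence(transition, length=60):
    """Sample a token sequence from a first-order Markov transition matrix."""
    seq = [int(rng.integers(n))]
    for _ in range(length - 1):
        seq.append(int(rng.choice(n, p=transition[seq[-1]])))
    return [tokens[i] for i in seq]

random_T = np.full((n, n), 1.0 / n)                         # maximally random: uniform transitions
ordered_T = 0.9 * np.roll(np.eye(n), 1, axis=1) + 0.1 / n   # mostly deterministic cycle

print("random sequence :", " ".join(markov_sequence(random_T)[:12]))
print("ordered sequence:", " ".join(markov_sequence(ordered_T)[:12]))
```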
Collapse
Affiliation(s)
- P Tremblay
- Université Laval, Rehabilitation Department, Québec City, Qc., Canada; Centre de Recherche de l'Institut Universitaire en santé mentale de Québec (CRIUSMQ), Québec City, Qc., Canada.
| | - M Baroni
- Center for Mind/Brain Sciences (CIMeC), University of Trento, via delle Regole, 1010, 38060, Mattarello (TN), Italy; Department of Information Science, University of Trento, via delle Regole, 1010, 38060, Mattarello (TN), Italy
| | - U Hasson
- Center for Mind/Brain Sciences (CIMeC), University of Trento, via delle Regole, 1010, 38060, Mattarello (TN), Italy; Department of Psychology and Cognitive Sciences, University of Trento, via delle Regole, 1010, 38060, Mattarello (TN), Italy
| |
Collapse
|
45
|
Myers EB, Swan K. Effects of category learning on neural sensitivity to non-native phonetic categories. J Cogn Neurosci 2012; 24:1695-708. [PMID: 22621261 DOI: 10.1162/jocn_a_00243] [Citation(s) in RCA: 37] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Categorical perception, an increased sensitivity to between- compared with within-category contrasts, is a stable property of native speech perception that emerges as language matures. Although recent research suggests that categorical responses to speech sounds can be found in left prefrontal as well as temporo-parietal areas, it is unclear how the neural system develops heightened sensitivity to between-category contrasts. In the current study, two groups of adult participants were trained to categorize speech sounds taken from a dental/retroflex/velar continuum according to two different boundary locations. Behavioral results suggest that for successful learners, categorization training led to increased discrimination accuracy for between-category contrasts with no concomitant increase for within-category contrasts. Neural responses to the learned category schemes were measured using a short-interval habituation design during fMRI scanning. Whereas both inferior frontal and temporal regions showed sensitivity to phonetic contrasts sampled from the continuum, only the bilateral middle frontal gyri exhibited a pattern consistent with encoding of the learned category scheme. Taken together, these results support a view in which top-down information about category membership may reshape perceptual sensitivities via attention or executive mechanisms in the frontal lobes.
Collapse
Affiliation(s)
- Emily B Myers
- Department of Communication Sciences, University of Connecticut, 850 Bolton Rd., Storrs, CT 06269, USA.
| | | |
Collapse
|
46
|
Price CJ. A review and synthesis of the first 20 years of PET and fMRI studies of heard speech, spoken language and reading. Neuroimage 2012; 62:816-47. [PMID: 22584224 PMCID: PMC3398395 DOI: 10.1016/j.neuroimage.2012.04.062] [Citation(s) in RCA: 1284] [Impact Index Per Article: 107.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/02/2011] [Revised: 04/25/2012] [Accepted: 04/30/2012] [Indexed: 01/17/2023] Open
Abstract
The anatomy of language has been investigated with PET or fMRI for more than 20 years. Here I attempt to provide an overview of the brain areas associated with heard speech, speech production and reading. The conclusions of many hundreds of studies were considered, grouped according to the type of processing, and reported in the order that they were published. Many findings have been replicated time and time again leading to some consistent and undisputable conclusions. These are summarised in an anatomical model that indicates the location of the language areas and the most consistent functions that have been assigned to them. The implications for cognitive models of language processing are also considered. In particular, a distinction can be made between processes that are localized to specific structures (e.g. sensory and motor processing) and processes where specialisation arises in the distributed pattern of activation over many different areas that each participate in multiple functions. For example, phonological processing of heard speech is supported by the functional integration of auditory processing and articulation; and orthographic processing is supported by the functional integration of visual processing, articulation and semantics. Future studies will undoubtedly be able to improve the spatial precision with which functional regions can be dissociated but the greatest challenge will be to understand how different brain regions interact with one another in their attempts to comprehend and produce language.
Collapse
Affiliation(s)
- Cathy J Price
- Wellcome Trust Centre for Neuroimaging, UCL, London WC1N 3BG, UK.
| |
Collapse
|
47
|
Swink S, Stuart A. Auditory long latency responses to tonal and speech stimuli. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2012; 55:447-459. [PMID: 22199192 DOI: 10.1044/1092-4388(2011/10-0364)] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
PURPOSE The effects of type of stimuli (i.e., nonspeech vs. speech), speech (i.e., natural vs. synthetic), gender of speaker and listener, speaker (i.e., self vs. other), and frequency alteration in self-produced speech on the late auditory cortical evoked potential were examined. METHOD Young adult men (n = 15) and women (n = 15), all with normal hearing, participated. P1-N1-P2 components were evoked with the following stimuli: 723-Hz tone bursts; naturally produced male and female /a/ tokens; synthetic male and female /a/ tokens; an /a/ token self-produced by each participant; and the same /a/ token produced by the participant but with a shift in frequency. RESULTS In general, P1-N1-P2 component latencies were significantly shorter when evoked with the tonal stimulus versus speech stimuli and natural versus synthetic speech (p < .05). Women had significantly shorter latencies for only the P2 component (p < .05). For the tonal versus speech stimuli, P1 amplitudes were significantly smaller, and N1 and P2 amplitudes were significantly larger (p < .05). There was no significant effect of gender on the P1, N1, or P2 amplitude (p > .05). CONCLUSION These findings are consistent with the notion that spectrotemporal characteristics of nonspeech and speech stimuli affect P1-N1-P2 latency and amplitude components.
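For readers unfamiliar with the P1-N1-P2 complex, the sketch below illustrates how peak latencies and amplitudes are typically read off an averaged waveform within fixed latency windows; the simulated waveform and the windows are textbook-style placeholders, not data or parameters from this study.

```python
import numpy as np

fs = 1000                                    # sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1.0 / fs)           # epoch from -100 to 500 ms

def gauss(center_ms, width_ms, amp):
    return amp * np.exp(-0.5 * ((t - center_ms / 1000.0) / (width_ms / 1000.0)) ** 2)

# Simulated grand-average waveform (microvolts): P1 near 50 ms, N1 near 100 ms, P2 near 180 ms.
erp = gauss(50, 15, 1.0) - gauss(100, 20, 2.5) + gauss(180, 30, 2.0)
erp += np.random.default_rng(3).normal(0.0, 0.05, t.size)

def peak(window_ms, polarity):
    """Largest positive or negative deflection within a latency window."""
    lo = np.abs(t - window_ms[0] / 1000.0).argmin()
    hi = np.abs(t - window_ms[1] / 1000.0).argmin()
    seg = erp[lo:hi]
    idx = seg.argmax() if polarity > 0 else seg.argmin()
    return t[lo + idx] * 1000.0, seg[idx]

for name, window, polarity in [("P1", (30, 80), +1), ("N1", (80, 150), -1), ("P2", (150, 250), +1)]:
    latency, amplitude = peak(window, polarity)
    print(f"{name}: latency {latency:.0f} ms, amplitude {amplitude:.2f} uV")
```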
Collapse
|
48
|
Steinberg J, Truckenbrodt H, Jacobsen T. The role of stimulus cross-splicing in an event-related potentials study. Misleading formant transitions hinder automatic phonological processing. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 131:3120-3140. [PMID: 22501085 DOI: 10.1121/1.3688515] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/31/2023]
Abstract
The mental organization of linguistic knowledge and its involvement in speech processing can be investigated using the mismatch negativity (MMN) component of the auditory event-related potential. A contradiction arises, however, between the technical need for strict control of acoustic stimulus properties and the quest for naturalness and acoustic variability of the stimuli. Here, two methods of preparing speech stimulus material were compared. Focussing on the automatic processing of a phonotactic restriction in German, two corresponding sets of various vowel-fricative syllables were used as stimuli. The former syllables were naturally spoken while the latter ones were created by means of cross-splicing. Phonetically, natural and spliced syllables differed with respect to the appropriateness of coarticulatory information about the forthcoming fricative within the vowels. Spliced syllables containing clearly misleading phonetic information were found to elicit larger N2 responses compared to their natural counterparts. Furthermore, MMN results found for the natural syllables could not be replicated with these spliced stimuli. These findings indicate that the automatic processing of the stimuli was considerably affected by the stimulus preparation method. Thus, in spite of its unquestioned benefits for MMN experiments, the splicing technique may lead to interference effects on the linguistic factors under investigation.
Collapse
Affiliation(s)
- Johanna Steinberg
- Institute of Psychology, University of Leipzig, Seeburgstrasse 14-20, D-04103 Leipzig, Germany.
| | | | | |
Collapse
|
49
|
Wilson LB, Tregellas JR, Slason E, Pasko BE, Hepburn S, Rojas DC. Phonological processing in first-degree relatives of individuals with autism: an fMRI study. Hum Brain Mapp 2012; 34:1447-63. [PMID: 22419478 DOI: 10.1002/hbm.22001] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2011] [Revised: 10/17/2011] [Accepted: 11/01/2011] [Indexed: 11/06/2022] Open
Abstract
Autism spectrum disorders (ASD) are complex neurodevelopmental disorders. Twin studies have provided heritability estimates as high as 90% for idiopathic ASD. Further evidence for the spectrum's heritability is provided by the presence of the broad autism phenotype (BAP) in unaffected first-degree relatives. Language ability, specifically phonological processing, is proposed to be a core BAP trait. To date, however, no functional neuroimaging investigations of phonological processing in relatives of individuals with ASD have been undertaken. We conducted a functional magnetic resonance imaging (fMRI) study in parents of children with ASD utilizing a priming task probing implicit phonological processing. In our condition that placed heavier demands on phonological recoding, parents exhibited greater hemodynamic responses than controls in a network of cortical regions involved in phonological processing. Across conditions, parents exhibited enhanced priming-induced response suppression suggesting compensatory neural processing. A nonword repetition test used in previous studies of relatives was also administered. Correlations between this measure and our functional measures also suggested compensatory processing in parents. Regions exhibiting atypical responses in parents included regions previously implicated in the spectrum's language impairments and found to exhibit structural abnormalities in a parent study. These results suggest a possible neurobiological substrate of the phonological deficits proposed to be a core BAP trait. However, these results should be considered preliminary. No previous fMRI study has investigated phonological processing in ASD, so replication is required. Furthermore, interpretation of our fMRI results is limited by the fact that the parent group failed to exhibit behavioral evidence of phonological impairments.
Collapse
Affiliation(s)
- Lisa B Wilson
- Department of Psychiatry, University of Colorado Denver, Aurora, CO 80045, USA
| | | | | | | | | | | |
Collapse
|
50
|
Abstract
Background Auditory sustained responses have been recently suggested to reflect neural processing of speech sounds in the auditory cortex. As periodic fluctuations below the pitch range are important for speech perception, it is necessary to investigate how low frequency periodic sounds are processed in the human auditory cortex. Auditory sustained responses have been shown to be sensitive to temporal regularity but the relationship between the amplitudes of auditory evoked sustained responses and the repetitive rates of auditory inputs remains elusive. As the temporal and spectral features of sounds enhance different components of sustained responses, previous studies with click trains and vowel stimuli presented diverging results. In order to investigate the effect of repetition rate on cortical responses, we analyzed the auditory sustained fields evoked by periodic and aperiodic noises using magnetoencephalography. Results Sustained fields were elicited by white noise and repeating frozen noise stimuli with repetition rates of 5-, 10-, 50-, 200- and 500 Hz. The sustained field amplitudes were significantly larger for all the periodic stimuli than for white noise. Although the sustained field amplitudes showed a rising and falling pattern within the repetition rate range, the response amplitudes to 5 Hz repetition rate were significantly larger than to 500 Hz. Conclusions The enhanced sustained field responses to periodic noises show that cortical sensitivity to periodic sounds is maintained for a wide range of repetition rates. Persistence of periodicity sensitivity below the pitch range suggests that in addition to processing the fundamental frequency of voice, sustained field generators can also resolve low frequency temporal modulations in speech envelope.
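The "repeating frozen noise" stimuli contrasted with white noise in this abstract can be pictured as one fixed noise segment tiled at the desired repetition rate. The following sketch constructs such stimuli for the repetition rates listed above; the sampling rate and duration are arbitrary choices made for the illustration.

```python
import numpy as np

fs = 44100            # audio sampling rate (Hz), arbitrary for the illustration
duration = 2.0        # stimulus duration (s)
rng = np.random.default_rng(4)

def repeating_frozen_noise(rate_hz):
    """Tile a single frozen noise segment of length 1/rate_hz to fill the stimulus."""
    segment = rng.normal(size=int(round(fs / rate_hz)))
    n_reps = int(np.ceil(duration * fs / segment.size))
    return np.tile(segment, n_reps)[: int(duration * fs)]

white_noise = rng.normal(size=int(duration * fs))                 # aperiodic control stimulus
stimuli = {rate: repeating_frozen_noise(rate) for rate in (5, 10, 50, 200, 500)}
print({rate: stim.size for rate, stim in stimuli.items()})        # all stimuli share one length
```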
Collapse
|