1
Casilio M, Kasdan AV, Schneck SM, Entrup JL, Levy DF, Crouch K, Wilson SM. Situating word deafness within aphasia recovery: A case report. Cortex 2024; 173:96-119. PMID: 38387377; PMCID: PMC11073474; DOI: 10.1016/j.cortex.2023.12.012.
Abstract
Word deafness is a rare neurological disorder often observed following bilateral damage to superior temporal cortex and canonically defined as an auditory modality-specific deficit in word comprehension. The extent to which word deafness is dissociable from aphasia remains unclear given its heterogeneous presentation, and some have consequently posited that word deafness instead represents a stage in recovery from aphasia, where auditory and linguistic processing are affected to varying degrees and improve at differing rates. Here, we report a case of an individual (Mr. C) with bilateral temporal lobe lesions whose presentation evolved from a severe aphasia to an atypical form of word deafness, where auditory linguistic processing was impaired at the sentence level and beyond. We first reconstructed in detail Mr. C's stroke recovery through medical record review and supplemental interviewing. Then, using behavioral testing and multimodal neuroimaging, we documented a predominant auditory linguistic deficit in sentence and narrative comprehension-with markedly reduced behavioral performance and absent brain activation in the language network in the spoken modality exclusively. In contrast, Mr. C displayed near-unimpaired behavioral performance and robust brain activations in the language network for the linguistic processing of words, irrespective of modality. We argue that these findings not only support the view of word deafness as a stage in aphasia recovery but also further instantiate the important role of left superior temporal cortex in auditory linguistic processing.
Affiliation(s)
- Anna V Kasdan
- Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, TN, USA
- Deborah F Levy
- Vanderbilt University Medical Center, Nashville, TN, USA
- Kelly Crouch
- Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M Wilson
- Vanderbilt University Medical Center, Nashville, TN, USA; School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, QLD, Australia
2
Gonzalez JE, Nieto N, Brusco P, Gravano A, Kamienkowski JE. Speech-induced suppression during natural dialogues. Commun Biol 2024; 7:291. PMID: 38459110; PMCID: PMC10923813; DOI: 10.1038/s42003-024-05945-9.
Abstract
When engaged in a conversation, one receives auditory information not only from the other speaker's speech but also from one's own speech; these two sources are, however, processed differently, owing to an effect called Speech-Induced Suppression (SIS). Here, we studied the brain's representation of the acoustic properties of speech in natural, unscripted dialogues, using electroencephalography (EEG) and high-quality speech recordings from both participants. Using encoding techniques, we were able to reproduce a broad range of previous findings on listening to another's speech, achieving even better performance when predicting the EEG signal in this complex scenario. Furthermore, we found no response when participants listened to their own speech, across different acoustic features (spectrogram, envelope, etc.) and frequency bands, evidencing a strong SIS effect. The present work shows that this mechanism is present, and even stronger, during natural dialogues. Moreover, the methodology presented here opens the possibility of a deeper understanding of the related mechanisms in a wider range of contexts.
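The encoding approach mentioned in this abstract (linear models that predict the EEG signal from acoustic features such as the spectrogram or envelope) can be illustrated with a small sketch. This is a generic, hypothetical example of a lagged ridge-regression encoding model fit on synthetic data, not the authors' pipeline; the sampling rate, lag range, and variable names are assumptions.

import numpy as np

def lagged_design(feature, lags):
    """Build a design matrix of time-lagged copies of a 1-D stimulus feature."""
    n = len(feature)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = feature[:n - lag] if lag > 0 else feature
    return X

def fit_encoding_model(feature, eeg, lags, alpha=1.0):
    """Ridge-regression encoding model: predict one EEG channel from the lagged envelope."""
    X = lagged_design(feature, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    r = np.corrcoef(X @ w, eeg)[0, 1]   # prediction accuracy (Pearson r)
    return w, r

# Illustrative synthetic example: EEG loosely driven by a delayed envelope.
rng = np.random.default_rng(0)
fs = 128                                    # assumed sampling rate (Hz)
envelope = rng.standard_normal(fs * 60)
eeg = 0.5 * np.roll(envelope, 13) + rng.standard_normal(len(envelope))
weights, accuracy = fit_encoding_model(envelope, eeg, list(range(32)))  # 0-250 ms of lags
print(f"encoding-model prediction accuracy r = {accuracy:.2f}")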
Affiliation(s)
- Joaquin E Gonzalez
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación (Universidad de Buenos Aires - Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina.
- Nicolás Nieto
- Instituto de Investigación en Señales, Sistemas e Inteligencia Computacional, sinc(i) (Universidad Nacional del Litoral - Consejo Nacional de Investigaciones Científicas y Técnicas), Santa Fe, Argentina
- Instituto de Matemática Aplicada del Litoral, IMAL-UNL/CONICET, Santa Fe, Argentina
- Pablo Brusco
- Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina
- Agustín Gravano
- Laboratorio de Inteligencia Artificial, Universidad Torcuato Di Tella, Buenos Aires, Argentina
- Escuela de Negocios, Universidad Torcuato Di Tella, Buenos Aires, Argentina
- Consejo Nacional de Investigaciones Científicas y Técnicas, Buenos Aires, Argentina
- Juan E Kamienkowski
- Laboratorio de Inteligencia Artificial Aplicada, Instituto de Ciencias de la Computación (Universidad de Buenos Aires - Consejo Nacional de Investigaciones Científicas y Técnicas), Buenos Aires, Argentina
- Departamento de Computación, Facultad de Ciencias Exactas y Naturales, Universidad de Buenos Aires, Buenos Aires, Argentina
- Maestría de Explotación de Datos y Descubrimiento del Conocimiento, Facultad de Ciencias Exactas y Naturales - Facultad de Ingeniería, Universidad de Buenos Aires, Buenos Aires, Argentina
3
de Lima Xavier L, Hanekamp S, Simonyan K. Sexual Dimorphism Within Brain Regions Controlling Speech Production. Front Neurosci 2019; 13:795. PMID: 31417351; PMCID: PMC6682624; DOI: 10.3389/fnins.2019.00795.
Abstract
Neural processing of speech production has been traditionally attributed to the left hemisphere. However, it remains unclear if there are structural bases for speech functional lateralization and if these may be partially explained by sexual dimorphism of cortical morphology. We used a combination of high-resolution MRI and speech-production functional MRI to examine cortical thickness of brain regions involved in speech control in healthy males and females. We identified greater cortical thickness of the left Heschl's gyrus in females compared to males. Additionally, rightward asymmetry of the supramarginal gyrus and leftward asymmetry of the precentral gyrus were found within both male and female groups. Sexual dimorphism of the Heschl's gyrus may underlie known differences in auditory processing for speech production between males and females, whereas findings of asymmetries within cortical areas involved in speech motor execution and planning may contribute to the hemispheric localization of functional activity and connectivity of these regions within the speech production network. Our findings highlight the importance of consideration of sex as a biological variable in studies on neural correlates of speech control.
Affiliation(s)
- Laura de Lima Xavier
- Department of Otolaryngology Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, United States
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- Sandra Hanekamp
- Department of Otolaryngology Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, United States
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
- Kristina Simonyan
- Department of Otolaryngology Head and Neck Surgery, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, United States
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, United States
4
Fetal auditory evoked responses to onset of amplitude modulated sounds. A fetal magnetoencephalography (fMEG) study. Hear Res 2018. DOI: 10.1016/j.heares.2018.03.005.
5
Beetz MJ, Hechavarría JC, Kössl M. Temporal tuning in the bat auditory cortex is sharper when studied with natural echolocation sequences. Sci Rep 2016; 6:29102. PMID: 27357230; PMCID: PMC4928181; DOI: 10.1038/srep29102.
Abstract
Precise temporal coding is necessary for proper acoustic analysis. However, at cortical level, forward suppression appears to limit the ability of neurons to extract temporal information from natural sound sequences. Here we studied how temporal processing can be maintained in the bats' cortex in the presence of suppression evoked by natural echolocation streams that are relevant to the bats' behavior. We show that cortical neurons tuned to target-distance actually profit from forward suppression induced by natural echolocation sequences. These neurons can more precisely extract target distance information when they are stimulated with natural echolocation sequences than during stimulation with isolated call-echo pairs. We conclude that forward suppression does for time domain tuning what lateral inhibition does for selectivity forms such as auditory frequency tuning and visual orientation tuning. When talking about cortical processing, suppression should be seen as a mechanistic tool rather than a limiting element.
Affiliation(s)
- M Jerome Beetz
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438, Frankfurt/M., Germany
- Julio C Hechavarría
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438, Frankfurt/M., Germany
- Manfred Kössl
- Institut für Zellbiologie und Neurowissenschaft, Goethe-Universität, 60438, Frankfurt/M., Germany
6
Tang H, Brock J, Johnson BW. Sound envelope processing in the developing human brain: A MEG study. Clin Neurophysiol 2016; 127:1206-1215. DOI: 10.1016/j.clinph.2015.07.038.
7
A prolonged maturational time course in brain development for cortical processing of temporal modulations. Clin Neurophysiol 2015; 127:994-998. PMID: 26480832; DOI: 10.1016/j.clinph.2015.09.001.
8
Abstract
Auditory agnosia refers to impairments in sound perception and identification despite intact hearing, cognitive functioning, and language abilities (reading, writing, and speaking). Auditory agnosia can be general, affecting all types of sound perception, or can be (relatively) specific to a particular domain. Verbal auditory agnosia (also known as (pure) word deafness) refers to deficits specific to speech processing, environmental sound agnosia refers to difficulties confined to non-speech environmental sounds, and amusia refers to deficits confined to music. These deficits can be apperceptive, affecting basic perceptual processes, or associative, affecting the relation of a perceived auditory object to its meaning. This chapter discusses what is known about the behavioral symptoms and lesion correlates of these different types of auditory agnosia (focusing especially on verbal auditory agnosia), evidence for the role of a rapid temporal processing deficit in some aspects of auditory agnosia, and the few attempts to treat the perceptual deficits associated with auditory agnosia. A clear picture of auditory agnosia has been slow to emerge, hampered by the considerable heterogeneity in behavioral deficits, associated brain damage, and variable assessments across cases. Despite this lack of clarity, these striking deficits in complex sound processing continue to inform our understanding of auditory perception and cognition.
Affiliation(s)
- L Robert Slevc
- Department of Psychology, University of Maryland, College Park, MD, USA.
- Alison R Shell
- Department of Psychology, University of Maryland, College Park, MD, USA
9
Rocha-Muniz CN, Zachi EC, Teixeira RAA, Ventura DF, Befi DM, Schochat E. Association between language development and auditory processing disorders. Braz J Otorhinolaryngol 2014; 80:231-6. PMID: 25153108; PMCID: PMC9535489; DOI: 10.1016/j.bjorl.2014.01.002.
Abstract
Introduction It is crucial to understand the complex processing of acoustic stimuli along the auditory pathway; comprehension of this complex processing can facilitate our understanding of the processes that underlie normal and altered human communication. Aim To investigate performance and lateralization effects in auditory processing assessment in children with specific language impairment (SLI), relating these findings to those obtained in children with auditory processing disorder (APD) and typical development (TD). Material and methods Prospective study. Seventy-five children, aged 6-12 years, were separated into three groups: 25 children with SLI, 25 children with APD, and 25 children with TD. All underwent the following tests: speech-in-noise test, Dichotic Digit test, and Pitch Pattern Sequencing test. Results Effects of lateralization were observed only in the SLI group, with the left ear showing much lower scores than the right ear. The inter-group analysis showed that, in all tests, children in the APD and SLI groups had significantly poorer performance than the TD group. Moreover, the SLI group presented worse results than the APD group. Conclusion This study demonstrated, in children with SLI, inefficient processing of essential sound components and an effect of lateralization. These findings may indicate that the neural processes required for auditory processing differ between auditory processing and speech disorders.
10
Nourski KV, Brugge JF, Reale RA, Kovach CK, Oya H, Kawasaki H, Jenison RL, Howard MA. Coding of repetitive transients by auditory cortex on posterolateral superior temporal gyrus in humans: an intracranial electrophysiology study. J Neurophysiol 2012; 109:1283-95. PMID: 23236002; DOI: 10.1152/jn.00718.2012.
Abstract
Evidence regarding the functional subdivisions of human auditory cortex has been slow to converge on a definite model. In part, this reflects inadequacies of current understanding of how the cortex represents temporal information in acoustic signals. To address this, we investigated spatiotemporal properties of auditory responses in human posterolateral superior temporal (PLST) gyrus to acoustic click-train stimuli using intracranial recordings from neurosurgical patients. Subjects were patients undergoing chronic invasive monitoring for refractory epilepsy. The subjects listened passively to acoustic click-train stimuli of varying durations (160 or 1,000 ms) and rates (4-200 Hz), delivered diotically via insert earphones. Multicontact subdural grids placed over the perisylvian cortex recorded intracranial electrocorticographic responses from PLST and surrounding areas. Analyses focused on averaged evoked potentials (AEPs) and high gamma (70-150 Hz) event-related band power (ERBP). Responses to click trains featured prominent AEP waveforms and increases in ERBP. The magnitude of AEPs and ERBP typically increased with click rate. Superimposed on the AEPs were frequency-following responses (FFRs), most prominent at 50-Hz click rates but still detectable at stimulus rates up to 200 Hz. Loci with the largest high gamma responses on PLST were often different from those sites that exhibited the strongest FFRs. The data indicate that responses of non-core auditory cortex of PLST represent temporal stimulus features in multiple ways. These include an isomorphic representation of periodicity (as measured by the FFR), a representation based on increases in non-phase-locked activity (as measured by high gamma ERBP), and spatially distributed patterns of activity.
Affiliation(s)
- Kirill V Nourski
- Dept. of Neurosurgery, The Univ. of Iowa, Iowa City, IA 52242, USA.
11
Robson H, Grube M, Lambon Ralph MA, Griffiths TD, Sage K. Fundamental deficits of auditory perception in Wernicke's aphasia. Cortex 2012; 49:1808-22. PMID: 23351849; DOI: 10.1016/j.cortex.2012.11.012.
Abstract
OBJECTIVE This work investigates the nature of the comprehension impairment in Wernicke's aphasia (WA), by examining the relationship between deficits in auditory processing of fundamental, non-verbal acoustic stimuli and auditory comprehension. WA, a condition resulting in severely disrupted auditory comprehension, primarily occurs following a cerebrovascular accident (CVA) to the left temporo-parietal cortex. Whilst damage to posterior superior temporal areas is associated with auditory linguistic comprehension impairments, functional-imaging indicates that these areas may not be specific to speech processing but part of a network for generic auditory analysis. METHODS We examined analysis of basic acoustic stimuli in WA participants (n = 10) using auditory stimuli reflective of theories of cortical auditory processing and of speech cues. Auditory spectral, temporal and spectro-temporal analysis was assessed using pure-tone frequency discrimination, frequency modulation (FM) detection and the detection of dynamic modulation (DM) in "moving ripple" stimuli. All tasks used criterion-free, adaptive measures of threshold to ensure reliable results at the individual level. RESULTS Participants with WA showed normal frequency discrimination but significant impairments in FM and DM detection, relative to age- and hearing-matched controls at the group level (n = 10). At the individual level, there was considerable variation in performance, and thresholds for both FM and DM detection correlated significantly with auditory comprehension abilities in the WA participants. CONCLUSION These results demonstrate the co-occurrence of a deficit in fundamental auditory processing of temporal and spectro-temporal non-verbal stimuli in WA, which may have a causal contribution to the auditory language comprehension impairment. Results are discussed in the context of traditional neuropsychology and current models of cortical auditory processing.
Affiliation(s)
- Holly Robson
- Neuroscience and Aphasia Research Unit, University of Manchester, UK; Psychology and Clinical Language Sciences, University of Reading, UK.
12
Johnson JS, Yin P, O'Connor KN, Sutter ML. Ability of primary auditory cortical neurons to detect amplitude modulation with rate and temporal codes: neurometric analysis. J Neurophysiol 2012; 107:3325-41. PMID: 22422997; DOI: 10.1152/jn.00812.2011.
Abstract
Amplitude modulation (AM) is a common feature of natural sounds, and its detection is biologically important. Even though most sounds are not fully modulated, the majority of physiological studies have focused on fully modulated (100% modulation depth) sounds. We presented AM noise at a range of modulation depths to awake macaque monkeys while recording from neurons in primary auditory cortex (A1). The ability of neurons to detect partial AM with rate and temporal codes was assessed with signal detection methods. On average, single-cell synchrony was as or more sensitive than spike count in modulation detection. Cells are less sensitive to modulation depth if tested away from their best modulation frequency, particularly for temporal measures. Mean neural modulation detection thresholds in A1 are not as sensitive as behavioral thresholds, but with phase locking the most sensitive neurons are more sensitive, suggesting that for temporal measures the lower-envelope principle cannot account for thresholds. Three methods of preanalysis pooling of spike trains (multiunit, similar to convergence from a cortical column; within cell, similar to convergence of cells with matched response properties; across cell, similar to indiscriminate convergence of cells) all result in an increase in neural sensitivity to modulation depth for both temporal and rate codes. For the across-cell method, pooling of a few dozen cells can result in detection thresholds that approximate those of the behaving animal. With synchrony measures, indiscriminate pooling results in sensitive detection of modulation frequencies between 20 and 60 Hz, suggesting that differences in AM response phase are minor in A1.
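The neurometric analysis described in this abstract rests on signal detection theory: a neuron's ability to detect partial AM is summarized by how well a response measure separates modulated from unmodulated trials. Below is a minimal, hypothetical sketch using ROC area on simulated Poisson spike counts (the rate-code case); the firing-rate model and all numbers are assumptions, not values from the study.

import numpy as np

def roc_auc(counts_unmod, counts_mod):
    """Area under the ROC curve for discriminating modulated from unmodulated
    trials on the basis of a scalar response measure (e.g., spike count).
    AUC equals the probability that a random modulated trial exceeds a random
    unmodulated trial (ties count as one half)."""
    a = np.asarray(counts_mod)[:, None]
    b = np.asarray(counts_unmod)[None, :]
    return float(np.mean((a > b) + 0.5 * (a == b)))

def neurometric_curve(rate_unmod, rate_gain, depths, n_trials=200, seed=0):
    """Neurometric function: detection performance (AUC) vs. modulation depth,
    for a hypothetical neuron whose mean count grows linearly with depth."""
    rng = np.random.default_rng(seed)
    ref = rng.poisson(rate_unmod, n_trials)
    return {d: roc_auc(ref, rng.poisson(rate_unmod + rate_gain * d, n_trials))
            for d in depths}

curve = neurometric_curve(rate_unmod=10.0, rate_gain=8.0,
                          depths=[0.06, 0.12, 0.25, 0.5, 1.0])
# A threshold can then be read off as the depth where AUC crosses a criterion
# such as 0.76 (roughly d' = 1).
for depth, auc in curve.items():
    print(f"depth {depth:.2f}: AUC = {auc:.2f}")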
Affiliation(s)
- Jeffrey S Johnson
- Center for Neuroscience, Univ. of California at Davis, Davis, CA 95618, USA
13
Nourski KV, Brugge JF. Representation of temporal sound features in the human auditory cortex. Rev Neurosci 2011; 22:187-203. PMID: 21476940; DOI: 10.1515/rns.2011.016.
Abstract
Temporal information in acoustic signals is important for the perception of environmental sounds, including speech. This review focuses on several aspects of temporal processing within human auditory cortex and its relevance for the processing of speech sounds. Periodic non-speech sounds, such as trains of acoustic clicks and bursts of amplitude-modulated noise or tones, can elicit different percepts depending on the pulse repetition rate or modulation frequency. Such sounds provide convenient methodological tools to study representation of timing information in the auditory system. At low repetition rates of up to 8-10 Hz, each individual stimulus (a single click or a sinusoidal amplitude modulation cycle) within the sequence is perceived as a separate event. As repetition rates increase up to and above approximately 40 Hz, these events blend together, giving rise first to the percept of flutter and then to pitch. The extent to which neural responses of human auditory cortex encode temporal features of acoustic stimuli is discussed within the context of these perceptual classes of periodic stimuli and their relationship to speech sounds. Evidence for neural coding of temporal information at the level of the core auditory cortex in humans suggests possible physiological counterparts to perceptual categorical boundaries for periodic acoustic stimuli. Temporal coding is less evident in auditory cortical fields beyond the core. Finally, data suggest hemispheric asymmetry in temporal cortical processing.
Affiliation(s)
- Kirill V Nourski
- Human Brain Research Laboratory, Department of Neurosurgery, The University of Iowa, 200 Hawkins Dr., Iowa City, IA 52242, USA.
14
Billiet CR, Bellis TJ. The relationship between brainstem temporal processing and performance on tests of central auditory function in children with reading disorders. J Speech Lang Hear Res 2011; 54:228-242. PMID: 20689038; DOI: 10.1044/1092-4388(2010/09-0239).
Abstract
PURPOSE Studies using speech stimuli to elicit electrophysiologic responses have found approximately 30% of children with language-based learning problems demonstrate abnormal brainstem timing. Research is needed regarding how these responses relate to performance on behavioral tests of central auditory function. The purpose of the study was to investigate performance of children with dyslexia with and without abnormal brainstem timing and children with no history of learning or related disorders on behavioral tests of central auditory function. METHOD Performance of 30 school-age children on behavioral central auditory tests in common clinical use was examined: Group 1 (n = 10): dyslexia, abnormal brainstem timing; Group 2 (n = 10): dyslexia, normal brainstem timing; Group 3 (n = 10): typical controls. RESULTS Results indicated that all participants in Group 2 met diagnostic criteria for (central) auditory processing disorder [(C)APD], whereas only 4 participants in Group 1 met criteria. The Biological Marker of Auditory Processing (BioMARK) identified 6 children in Group 1 who did not meet diagnostic criteria for (C)APD but displayed abnormal brainstem timing. CONCLUSIONS Results underscore the importance of central auditory assessment for children with dyslexia. Furthermore, the BioMARK may be useful in identifying children with central auditory dysfunction who would not have been identified using behavioral methods of (C)APD assessment.
15
Metherate R. Functional connectivity and cholinergic modulation in auditory cortex. Neurosci Biobehav Rev 2010; 35:2058-63. PMID: 21144860; DOI: 10.1016/j.neubiorev.2010.11.010.
Abstract
Although it is known that primary auditory cortex (A1) contributes to the processing and perception of sound, its precise functions and the underlying mechanisms are not well understood. Recent studies point to a remarkably broad spectral range of largely subthreshold inputs to individual neurons in A1--seemingly encompassing, in some cases, the entire audible spectrum--as evidence for potential, and potentially unique, cortical functions. We have proposed a general mechanism for spectral integration by which information converges on neurons in A1 via a combination of thalamocortical pathways and intracortical long-distance, "horizontal", pathways. Here, this proposal is briefly reviewed and updated with results from multiple laboratories. Since spectral integration in A1 is dynamically regulated, we also show how one regulatory mechanism--modulation by the neurotransmitter acetylcholine (ACh)--could act within the hypothesized framework to alter integration in single neurons. The results of these studies promote a cellular understanding of information processing in A1.
Affiliation(s)
- Raju Metherate
- Department of Neurobiology and Behavior, Center for Hearing Research, University of California-Irvine, CA 92697-4550, United States.
16
Speech perception, rapid temporal processing, and the left hemisphere: a case study of unilateral pure word deafness. Neuropsychologia 2010; 49:216-30. PMID: 21093464; DOI: 10.1016/j.neuropsychologia.2010.11.009.
Abstract
The mechanisms and functional anatomy underlying the early stages of speech perception are still not well understood. One way to investigate the cognitive and neural underpinnings of speech perception is by investigating patients with speech perception deficits but with preserved ability in other domains of language. One such case is reported here: patient NL shows highly impaired speech perception despite normal hearing ability and preserved semantic knowledge, speaking, and reading ability, and is thus classified as a case of pure word deafness (PWD). NL has a left temporoparietal lesion without right hemisphere damage and DTI imaging suggests that he has preserved cross-hemispheric connectivity, arguing against an account of PWD as a disconnection of left lateralized language areas from auditory input. Two experiments investigated whether NL's speech perception deficit could instead result from an underlying problem with rapid temporal processing. Experiment 1 showed that NL has particular difficulty discriminating sounds that differ in terms of rapid temporal changes, be they speech or non-speech sounds. Experiment 2 employed an intensive training program designed to improve rapid temporal processing in language impaired children (Fast ForWord; Scientific Learning Corporation, Oakland, CA) and found that NL was able to improve his ability to discriminate rapid temporal differences in non-speech sounds, but not in speech sounds. Overall, these data suggest that patients with unilateral PWD may, in fact, have a deficit in (left lateralized) temporal processing ability, however they also show that a rapid temporal processing deficit is, by itself, unable to account for this patient's speech perception deficit.
17
Rybalko N, Šuta D, Popelář J, Syka J. Inactivation of the left auditory cortex impairs temporal discrimination in the rat. Behav Brain Res 2010; 209:123-30. DOI: 10.1016/j.bbr.2010.01.028.
18
19
Abstract
Speech comprehension relies on temporal cues contained in the speech envelope, and the auditory cortex has been implicated as playing a critical role in encoding this temporal information. We investigated auditory cortical responses to speech stimuli in subjects undergoing invasive electrophysiological monitoring for pharmacologically refractory epilepsy. Recordings were made from multicontact electrodes implanted in Heschl's gyrus (HG). Speech sentences, time compressed from 0.75 to 0.20 of natural speaking rate, elicited average evoked potentials (AEPs) and increases in event-related band power (ERBP) of cortical high-frequency (70-250 Hz) activity. Cortex of posteromedial HG, the presumed core of human auditory cortex, represented the envelope of speech stimuli in the AEP and ERBP. Envelope following in ERBP, but not in AEP, was evident in both language-dominant and -nondominant hemispheres for relatively high degrees of compression where speech was not comprehensible. Compared to posteromedial HG, responses from anterolateral HG-an auditory belt field-exhibited longer latencies, lower amplitudes, and little or no time locking to the speech envelope. The ability of the core auditory cortex to follow the temporal speech envelope over a wide range of speaking rates leads us to conclude that such capacity in itself is not a limiting factor for speech comprehension.
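Envelope following in high-gamma event-related band power, as described in this abstract, is often quantified by extracting band-limited power and correlating it with the stimulus envelope. The sketch below is one generic way to do that on synthetic data and is not the authors' analysis; the filter design, band limits, lag range, and signal parameters are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_power_envelope(signal, fs, band=(70.0, 150.0)):
    """Instantaneous power envelope in a frequency band (band-pass + Hilbert)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    return np.abs(hilbert(filtfilt(b, a, signal))) ** 2

def envelope_following(neural, speech_env, fs, max_lag_s=0.25):
    """Peak cross-correlation between high-gamma power and the speech envelope,
    over a small range of physiologically plausible lags."""
    power = band_power_envelope(neural, fs)
    power = (power - power.mean()) / power.std()
    env = (speech_env - speech_env.mean()) / speech_env.std()
    lags = range(int(max_lag_s * fs))
    return max(np.corrcoef(env[:len(env) - lag], power[lag:])[0, 1] for lag in lags)

# Synthetic illustration: high-gamma activity amplitude-modulated by the envelope.
rng = np.random.default_rng(1)
fs = 1000
t = np.arange(0, 10, 1 / fs)
speech_env = 0.5 * (1 + np.sin(2 * np.pi * 4 * t))          # 4 Hz "syllable" rate
neural = speech_env * np.sin(2 * np.pi * 100 * t) + 0.3 * rng.standard_normal(len(t))
print(f"peak envelope-following correlation: {envelope_following(neural, speech_env, fs):.2f}")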
20
Abstract
Fluctuations in the temporal durations of sensory signals constitute a major source of variability within natural stimulus ensembles. The neuronal mechanisms through which sensory systems can stabilize perception against such fluctuations are largely unknown. An intriguing instantiation of such robustness occurs in human speech perception, which relies critically on temporal acoustic cues that are embedded in signals with highly variable duration. Across different instances of natural speech, auditory cues can undergo temporal warping that ranges from 2-fold compression to 2-fold dilation without significant perceptual impairment. Here, we report that time-warp-invariant neuronal processing can be subserved by the shunting action of synaptic conductances that automatically rescales the effective integration time of postsynaptic neurons. We propose a novel spike-based learning rule for synaptic conductances that adjusts the degree of synaptic shunting to the temporal processing requirements of a given task. Applying this general biophysical mechanism to the example of speech processing, we propose a neuronal network model for time-warp-invariant word discrimination and demonstrate its excellent performance on a standard benchmark speech-recognition task. Our results demonstrate the important functional role of synaptic conductances in spike-based neuronal information processing and learning. The biophysics of temporal integration at neuronal membranes can endow sensory pathways with powerful time-warp-invariant computational capabilities.
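The core biophysical intuition behind the shunting mechanism described here is that added synaptic conductance shortens the effective membrane time constant of a conductance-based point neuron, tau_eff = C_m / (g_leak + g_syn), thereby rescaling its integration time. A tiny illustrative calculation follows; the parameter values are arbitrary assumptions, not taken from the paper, and this is not the authors' full learning model.

def effective_time_constant(c_m, g_leak, g_syn):
    """Effective membrane time constant of a conductance-based point neuron:
    tau_eff = C_m / (g_leak + g_syn). Extra shunting conductance shortens it."""
    return c_m / (g_leak + g_syn)

# Illustrative values (assumed): capacitance in nF, conductances in uS, so tau is in ms.
c_m, g_leak = 0.2, 0.01
for g_syn in [0.0, 0.01, 0.03, 0.07]:
    tau = effective_time_constant(c_m, g_leak, g_syn)
    print(f"g_syn = {g_syn:.2f} uS -> tau_eff = {tau:.1f} ms")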
Affiliation(s)
- Robert Gütig
- Racah Institute of Physics, Hebrew University, Jerusalem, Israel.
21
Abrams DA, Nicol T, Zecker S, Kraus N. Abnormal cortical processing of the syllable rate of speech in poor readers. J Neurosci 2009; 29:7686-93. PMID: 19535580; PMCID: PMC2763585; DOI: 10.1523/jneurosci.5242-08.2009.
Abstract
Children with reading impairments have long been associated with impaired perception for rapidly presented acoustic stimuli and recently have shown deficits for slower features. It is not known whether impairments for low-frequency acoustic features negatively impact processing of speech in reading-impaired individuals. Here we provide neurophysiological evidence that poor readers have impaired representation of the speech envelope, the acoustical cue that provides syllable pattern information in speech. We measured cortical-evoked potentials in response to sentence stimuli and found that good readers indicated consistent right-hemisphere dominance in auditory cortex for all measures of speech envelope representation, including the precision, timing, and magnitude of cortical responses. Poor readers showed abnormal patterns of cerebral asymmetry for all measures of speech envelope representation. Moreover, cortical measures of speech envelope representation predicted up to 41% of the variability in standardized reading scores and 50% in measures of phonological processing across a wide range of abilities. Our findings strongly support a relationship between acoustic-level processing and higher-level language abilities, and are the first to link reading ability with cortical processing of low-frequency acoustic features in the speech signal. Our results also support the hypothesis that asymmetric routing between cerebral hemispheres represents an important mechanism for temporal encoding in the human auditory system, and the need for an expansion of the temporal processing hypothesis for reading disabilities to encompass impairments for a wider range of speech features than previously acknowledged.
Affiliation(s)
- Daniel A Abrams
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois 60208, USA.
22
Poulsen C, Picton TW, Paus T. Age-related changes in transient and oscillatory brain responses to auditory stimulation during early adolescence. Dev Sci 2009; 12:220-35. DOI: 10.1111/j.1467-7687.2008.00760.x.
23
Sakai M, Chimoto S, Qin L, Sato Y. Neural mechanisms of interstimulus interval-dependent responses in the primary auditory cortex of awake cats. BMC Neurosci 2009; 10:10. PMID: 19208233; PMCID: PMC2679037; DOI: 10.1186/1471-2202-10-10.
Abstract
Background Primary auditory cortex (AI) neurons show qualitatively distinct response features to successive acoustic signals depending on the inter-stimulus intervals (ISI). Such ISI-dependent AI responses are believed to underlie, at least partially, categorical perception of click trains (elemental vs. fused quality) and stop consonant-vowel syllables (e.g., /da/-/ta/ continuum). Methods Single unit recordings were conducted on 116 AI neurons in awake cats. Rectangular clicks were presented either alone (single click paradigm) or in a train fashion with variable ISI (2–480 ms) (click-train paradigm). Response features of AI neurons were quantified as a function of ISI: one measure was related to the degree of stimulus locking (temporal modulation transfer function [tMTF]) and another measure was based on firing rate (rate modulation transfer function [rMTF]). An additional modeling study was performed to gain insight into neurophysiological bases of the observed responses. Results In the click-train paradigm, the majority of the AI neurons ("synchronization type"; n = 72) showed stimulus-locking responses at long ISIs. The shorter cutoff ISI for stimulus-locking responses was on average ~30 ms and was level tolerant, in accordance with the perceptual boundary of click trains and of consonant-vowel syllables. The shape of the tMTF of those neurons was either band-pass or low-pass. The single click paradigm revealed, at maximum, four response periods in the following order: 1st excitation, 1st suppression, 2nd excitation, then 2nd suppression. The 1st excitation and 1st suppression were found exclusively in the synchronization type, implying that the temporal interplay between excitation and suppression underlies stimulus-locking responses. Among these neurons, those showing the 2nd suppression had band-pass tMTFs, whereas those with low-pass tMTFs never showed the 2nd suppression, implying that tMTF shape is mediated through the 2nd suppression. The recovery time course of excitability suggested the involvement of short-term plasticity. The observed phenomena were well captured by a single cell model which incorporated AMPA, GABAA, NMDA and GABAB receptors as well as short-term plasticity of thalamocortical synaptic connections. Conclusion Overall, it was suggested that ISI-dependent responses of the majority of AI neurons are configured through the temporal interplay of excitation and suppression (inhibition) along with short-term plasticity.
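The two response measures used in this abstract are commonly computed as vector strength (degree of stimulus locking, for the tMTF) and firing rate (for the rMTF) as functions of ISI. A hypothetical sketch with made-up spike times follows; the window length, ISI values, and spike times are assumptions, not data from the study.

import numpy as np

def vector_strength(spike_times, period):
    """Degree of stimulus locking: length of the mean resultant vector of spike
    phases relative to the click period (1 = perfect locking, 0 = none)."""
    phases = 2 * np.pi * (np.asarray(spike_times) % period) / period
    return float(np.abs(np.mean(np.exp(1j * phases))))

def modulation_transfer_functions(spike_times_by_isi, window_s):
    """tMTF (vector strength) and rMTF (firing rate) as functions of the
    inter-stimulus interval, from spike times recorded per ISI condition."""
    tmtf, rmtf = {}, {}
    for isi, spikes in spike_times_by_isi.items():
        tmtf[isi] = vector_strength(spikes, period=isi)
        rmtf[isi] = len(spikes) / window_s
    return tmtf, rmtf

# Hypothetical spike times (ms) for two ISI conditions over a 480-ms (0.48-s) window.
spikes = {30.0: [31, 62, 90, 121, 152, 181, 240, 272],   # loosely locked to a 30-ms ISI
          8.0:  [15, 42, 77, 103, 160, 230, 300, 410]}   # unlocked at an 8-ms ISI
tmtf, rmtf = modulation_transfer_functions(spikes, window_s=0.48)
for isi in spikes:
    print(f"ISI {isi:.0f} ms: vector strength {tmtf[isi]:.2f}, rate {rmtf[isi]:.0f} sp/s")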
Affiliation(s)
- Masashi Sakai
- Department of Physiology, Interdisciplinary Graduate School of Medicine and Engineering, University of Yamanashi, Yamanashi, Japan.
24
Bellis TJ, Anzalone AM. Intervention Approaches for Individuals With (Central) Auditory Processing Disorder. ACTA ACUST UNITED AC 2008. DOI: 10.1044/cicsd_35_f_143.
25
Zatorre RJ, Gandour JT. Neural specializations for speech and pitch: moving beyond the dichotomies. Philos Trans R Soc Lond B Biol Sci 2008; 363:1087-104. PMID: 17890188; PMCID: PMC2606798; DOI: 10.1098/rstb.2007.2161.
Abstract
The idea that speech processing relies on unique, encapsulated, domain-specific mechanisms has been around for some time. Another well-known idea, often espoused as being in opposition to the first proposal, is that processing of speech sounds entails general-purpose neural mechanisms sensitive to the acoustic features that are present in speech. Here, we suggest that these dichotomous views need not be mutually exclusive. Specifically, there is now extensive evidence that spectral and temporal acoustical properties predict the relative specialization of right and left auditory cortices, and that this is a parsimonious way to account not only for the processing of speech sounds, but also for non-speech sounds such as musical tones. We also point out that there is equally compelling evidence that neural responses elicited by speech sounds can differ depending on more abstract, linguistically relevant properties of a stimulus (such as whether it forms part of one's language or not). Tonal languages provide a particularly valuable window to understand the interplay between these processes. The key to reconciling these phenomena probably lies in understanding the interactions between afferent pathways that carry stimulus information, with top-down processing mechanisms that modulate these processes. Although we are still far from the point of having a complete picture, we argue that moving forward will require us to abandon the dichotomy argument in favour of a more integrated approach.
Affiliation(s)
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, H3A 3B4 Quebec, Canada.
26
Abrams DA, Nicol T, Zecker S, Kraus N. Right-hemisphere auditory cortex is dominant for coding syllable patterns in speech. J Neurosci 2008; 28:3958-65. PMID: 18400895; PMCID: PMC2713056; DOI: 10.1523/jneurosci.0187-08.2008.
Abstract
Cortical analysis of speech has long been considered the domain of left-hemisphere auditory areas. A recent hypothesis poses that cortical processing of acoustic signals, including speech, is mediated bilaterally based on the component rates inherent to the speech signal. In support of this hypothesis, previous studies have shown that slow temporal features (3-5 Hz) in nonspeech acoustic signals lateralize to right-hemisphere auditory areas, whereas rapid temporal features (20-50 Hz) lateralize to the left hemisphere. These results were obtained using nonspeech stimuli, and it is not known whether right-hemisphere auditory cortex is dominant for coding the slow temporal features in speech known as the speech envelope. Here we show strong right-hemisphere dominance for coding the speech envelope, which represents syllable patterns and is critical for normal speech perception. Right-hemisphere auditory cortex was 100% more accurate in following contours of the speech envelope and had a 33% larger response magnitude while following the envelope compared with the left hemisphere. Asymmetries were evident regardless of the ear of stimulation despite dominance of contralateral connections in ascending auditory pathways. Results provide evidence that the right hemisphere plays a specific and important role in speech processing and support the hypothesis that acoustic processing of speech involves the decomposition of the signal into constituent temporal features by rate-specialized neurons in right- and left-hemisphere auditory cortex.
Affiliation(s)
- Daniel A Abrams
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois 60208, USA.
27
Hyde KL, Peretz I, Zatorre RJ. Evidence for the role of the right auditory cortex in fine pitch resolution. Neuropsychologia 2008; 46:632-9. DOI: 10.1016/j.neuropsychologia.2007.09.004.
28
Meyer M. Functions of the left and right posterior temporal lobes during segmental and suprasegmental speech perception. Zeitschrift für Neuropsychologie 2008. DOI: 10.1024/1016-264x.19.2.101.
Abstract
This manuscript reviews evidence from neuroimaging studies on elementary processes of speech perception and their implications for our understanding of the brain-speech relationship. Essentially, the left and right auditory-related cortices show differential preferences for rapidly and slowly changing acoustic cues that constitute (sub)segmental and suprasegmental parameters, e.g., phonemes, prosody, and rhythm. The adopted parameter-based research approach takes the early stages of speech perception as being of fundamental relevance for simple as well as complex language functions. The current state of knowledge necessitates an extensive revision of the classical, neurologically oriented model of language processing, which aimed more at identifying the neural correlates of linguistic components (e.g., phonology, syntax, and semantics) than at substantiating the importance of (supra)segmental information during speech perception.
Affiliation(s)
- Martin Meyer
- Institute of Neuropsychology, University of Zurich
29
Jörgens S, Biermann-Ruben K, Kurz MW, Flügel C, Daehli Kurz K, Antke C, Hartung HP, Seitz RJ, Schnitzler A. Word deafness as a cortical auditory processing deficit: a case report with MEG. Neurocase 2008; 14:307-16. PMID: 18766983; DOI: 10.1080/13554790802363738.
Abstract
Pure word deafness is a rare disorder dramatically impairing comprehension of spoken language, while auditory functions remain relatively intact. We present a 71-year-old woman with a slowly progressive disturbance of speech perception due to pure word deafness. MRI revealed degeneration of the temporal lobes. A magnetoencephalographic investigation using alternating single tone stimulation showed that N100 was followed by a second transient response and was abnormally prolonged up to 600-700 ms. We conclude that auditory processing is disturbed at long latency ranges following the N100, which may result in the clinical presentation of pure word deafness.
Affiliation(s)
- Silke Jörgens
- Department of Neurology, University Hospital, Düsseldorf, Germany.
30
The contribution of rapid visual and auditory processing to the reading of irregular words and pseudowords presented singly and in contiguity. ACTA ACUST UNITED AC 2007; 69:1344-59. PMID: 18078226; DOI: 10.3758/bf03192951.
31
Xu H, Kotak VC, Sanes DH. Conductive hearing loss disrupts synaptic and spike adaptation in developing auditory cortex. J Neurosci 2007; 27:9417-26. PMID: 17728455; PMCID: PMC6673134; DOI: 10.1523/jneurosci.1992-07.2007.
Abstract
Although sensorineural hearing loss (SNHL) is known to compromise central auditory structure and function, the impact of milder forms of hearing loss on cellular neurophysiology remains mostly undefined. We induced conductive hearing loss (CHL) in developing gerbils, reared the animals for 8-13 d, and subsequently assessed the temporal features of auditory cortex layer 2/3 pyramidal neurons in a thalamocortical brain slice preparation with whole-cell recordings. Repetitive stimulation of the ventral medial geniculate nucleus (MGv) evoked robust short-term depression of the postsynaptic potentials in control neurons, and this depression increased monotonically at higher stimulation frequencies. In contrast, CHL neurons displayed a faster rate of synaptic depression and a smaller asymptotic amplitude. Moreover, the latency of MGv evoked potentials was consistently longer in CHL neurons for all stimulus rates. A separate assessment of spike frequency adaptation in response to trains of injected current pulses revealed that CHL neurons displayed less adaptation compared with controls, although there was an increase in temporal jitter. For each of these properties, nearly identical findings were observed for SNHL neurons. Together, these data show that CHL significantly alters the temporal properties of auditory cortex synapses and spikes, and this may contribute to processing deficits that attend mild to moderate hearing loss.
Affiliation(s)
- Han Xu
- Center for Neural Science and
- Dan H. Sanes
- Center for Neural Science and
- Department of Biology, New York University, New York, New York 10003
32
Ter-Mikaelian M, Sanes DH, Semple MN. Transformation of temporal properties between auditory midbrain and cortex in the awake Mongolian gerbil. J Neurosci 2007; 27:6091-102. PMID: 17553982; PMCID: PMC6672143; DOI: 10.1523/jneurosci.4848-06.2007.
Abstract
The neural representation of meaningful stimulus features is thought to rely on precise discharge characteristics of the auditory cortex. Precisely timed onset spikes putatively carry the majority of stimulus-related information in auditory cortical neurons but make a small contribution to stimulus representation in the auditory midbrain. Because these conclusions derive primarily from anesthetized preparations, we reexamined temporal coding properties of single neurons in the awake gerbil inferior colliculus (IC) and compared them with primary auditory cortex (AI). Surprisingly, AI neurons displayed a reduction of temporal precision compared with those in the IC. Furthermore, this hierarchical transition from high to low temporal fidelity was observed for both static and dynamic stimuli. Because most of the data that support temporal precision were obtained under anesthesia, we also reexamined response properties of IC and AI neurons under these conditions. Our results show that anesthesia has profound effects on the trial-to-trial variability and reliability of discharge and significantly improves the temporal precision of AI neurons to both tones and amplitude-modulated stimuli. In contrast, IC temporal properties are only mildly affected by anesthesia. These results underscore the pitfalls of using anesthetized preparations to study temporal coding. Our findings in awake animals reveal that AI neurons combine faster adaptation kinetics and a longer temporal window than evident in IC to represent ongoing acoustic stimuli.
Affiliation(s)
- Dan H. Sanes
- Center for Neural Science and
- Departments of Biology and
- Malcolm N. Semple
- Center for Neural Science and
- Psychology, New York University, New York, New York 10003
33
Cooke JE, Zhang H, Kelly JB. Detection of sinusoidal amplitude modulated sounds: deficits after bilateral lesions of auditory cortex in the rat. Hear Res 2007; 231:90-9. PMID: 17629425; DOI: 10.1016/j.heares.2007.06.002.
Abstract
The ability of rats to detect the presence of sinusoidal amplitude modulation (AM) of a broadband noise carrier was determined before and after bilateral ablation of auditory cortex. The rats were trained to withdraw from a drinking spout to avoid a shock when they detected a modulation of the sound. Sensitivity was evaluated by testing the rats at progressively smaller depths of modulation. Psychophysical curves were produced to describe the limits of detection at modulation rates of 10, 100 and 1000Hz. Performance scores were based on the probability of withdrawal from the spout during AM (warning periods) relative to withdrawal during the un-modulated noise (safe periods). A threshold was defined as the depth of modulation that produced a score halfway between perfect avoidance and no avoidance (performance score=0.5). Bilateral auditory cortical lesions resulted in significant elevations in threshold for detection of AM at rates of 100 and 1000Hz. No significant shift was found at a modulation rate of 10Hz. The magnitude of the deficit for AM rates of 100 and 1000Hz was positively correlated with the size of the cortical lesion. Substantial deficits were found only in animals with lesions that included secondary as well as primary auditory cortical areas. The results show that the rat's auditory cortex is important for processing sinusoidal AM and that its contribution is most apparent at high modulation rates. The data suggest that the auditory cortex is a crucial structure for maintaining normal sensitivity to temporal modulation of an auditory stimulus.
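The threshold definition used in this abstract (the modulation depth producing a performance score of 0.5, halfway between perfect avoidance and no avoidance) can be read off a psychometric curve by interpolation. A minimal sketch with hypothetical data follows; the depths and scores are illustrative assumptions, not values from the study.

import numpy as np

def detection_threshold(depths, scores, criterion=0.5):
    """Modulation depth at which the psychometric curve crosses the criterion
    performance score, by linear interpolation between tested depths."""
    depths = np.asarray(depths, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = np.argsort(depths)
    # np.interp needs the x-axis (scores) to be increasing, which holds for a
    # typical monotonic psychometric curve once sorted by depth.
    return float(np.interp(criterion, scores[order], depths[order]))

# Hypothetical psychometric data: performance score vs. modulation depth (0-1).
depths = [0.05, 0.1, 0.2, 0.4, 0.8]
scores = [0.05, 0.15, 0.40, 0.75, 0.95]
print(f"AM detection threshold (depth at score 0.5): {detection_threshold(depths, scores):.2f}")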
Affiliation(s)
- James E Cooke
- Laboratory of Sensory Neuroscience, Department of Psychology, Carleton University, Ottawa, Ontario, Canada
34
Poulsen C, Picton TW, Paus T. Age-related changes in transient and oscillatory brain responses to auditory stimulation in healthy adults 19-45 years old. Cereb Cortex 2006; 17:1454-67. PMID: 16916887; DOI: 10.1093/cercor/bhl056.
Abstract
The capacity of the human cerebral cortex to track fast temporal changes in auditory stimuli is related to the development of language in children and to deficits in speech perception in the elderly. Although maturation of temporal processing in children and its deterioration in the elderly has been investigated previously, little is known about naturally occurring changes in auditory temporal processing between these limits. The present study examined age-related (19-45 years) changes in 3 electrophysiological measures of auditory processing: 1) the late transient auditory evoked potentials to tone onset, 2) the auditory steady-state response (ASSR) to a 40-Hz frequency-modulated tone, and 3) the envelope following response (EFR) to sweeps of amplitude-modulated white noise from 10 to 100 Hz. With increasing age, the latency of the auditory P1-N1 complex decreased, the oscillatory (ASSR) response became larger and more stable, and the resonant peak of the EFR increased from 38 Hz at 19 years to 46 Hz at 45 years. Source analysis localized these changes to the auditory regions of the temporal lobe. These results indicate persistent adaptation of cortical auditory processes into middle adulthood. We speculate that experience-driven myelination and/or refinement of inhibitory circuits may underlie these changes.
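The 40-Hz auditory steady-state response described in this abstract is typically quantified as the spectral amplitude of the averaged response at the modulation frequency. A generic sketch on synthetic data follows; the epoch length, sampling rate, and noise level are assumptions, not the study's parameters.

import numpy as np

def assr_amplitude(eeg, fs, mod_freq=40.0):
    """Amplitude of the steady-state response at the modulation frequency,
    read off the discrete Fourier spectrum of the averaged EEG epoch."""
    spectrum = np.fft.rfft(eeg) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return 2 * np.abs(spectrum[np.argmin(np.abs(freqs - mod_freq))])

# Synthetic epoch: a 40-Hz following response buried in noise.
rng = np.random.default_rng(2)
fs = 1000
t = np.arange(0, 2, 1 / fs)
eeg = 0.5 * np.sin(2 * np.pi * 40 * t) + rng.standard_normal(len(t))
print(f"estimated 40-Hz ASSR amplitude: {assr_amplitude(eeg, fs):.2f} (true value 0.5)")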
Affiliation(s)
- Catherine Poulsen
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, 3801 University Street, Montreal, Canada.
35
Firszt JB, Ulmer JL, Gaggl W. Differential representation of speech sounds in the human cerebral hemispheres. Anat Rec A Discov Mol Cell Evol Biol 2006; 288:345-57. [PMID: 16550560 PMCID: PMC3780356 DOI: 10.1002/ar.a.20295] [Citation(s) in RCA: 32] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Various methods in auditory neuroscience have been used to gain knowledge about the structure and function of the human auditory cortical system. Regardless of method, hemispheric differences are evident in the normal processing of speech sounds. This review article, augmented by the authors' own work, provides evidence that asymmetries exist in both cortical and subcortical structures of the human auditory system. Asymmetries are affected by stimulus type; for example, hemispheric activation patterns have been shown to change from right to left cortex as stimuli change from speech to nonspeech. In addition, the presence of noise has differential effects on the contribution of the two hemispheres. Modifications of typical asymmetric cortical patterns occur when pathology is present, as in hearing loss or tinnitus. We show that in response to speech sounds, individuals with unilateral hearing loss lose the normal asymmetric pattern due to both a decrease in contralateral hemispheric activity and an increase in ipsilateral hemispheric activity. These studies demonstrate the utility of modern neuroimaging techniques in functional investigations of the human auditory system. Neuroimaging techniques may provide additional insight as to how the cortical auditory pathways change with experience, including sound deprivation (e.g., hearing loss) and sound experience (e.g., training). Such investigations may explain why some populations, such as children with learning problems and the elderly, appear to be more vulnerable to changes in hemispheric symmetry.
Affiliation(s)
- Jill B Firszt
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, Missouri 63110, USA.
36
Metherate R, Kaur S, Kawai H, Lazar R, Liang K, Rose HJ. Spectral integration in auditory cortex: mechanisms and modulation. Hear Res 2005; 206:146-58. [PMID: 16081005 DOI: 10.1016/j.heares.2005.01.014] [Citation(s) in RCA: 77] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/22/2004] [Accepted: 01/06/2005] [Indexed: 11/19/2022]
Abstract
Auditory cortex contributes to the processing and perception of spectrotemporally complex stimuli. However, the mechanisms by which this is accomplished are not well understood. In this review, we examine evidence that single cortical neurons receive input covering much of the audible spectrum. We then propose an anatomical framework by which spectral information converges on single neurons in primary auditory cortex, via a combination of thalamocortical and intracortical "horizontal" pathways. By its nature, the framework confers sensitivity to specific, spectrotemporally complex stimuli. Finally, to address how spectral integration can be regulated, we show how one neuromodulator, acetylcholine, could act within the hypothesized framework to alter integration in single neurons. The results of these studies promote a cellular understanding of information processing in auditory cortex.
Affiliation(s)
- Raju Metherate
- Department of Neurobiology and Behavior, University of California, Irvine, 2205 McGaugh Hall, Irvine, CA 92697-4550, United States.
37
Rose HJ, Metherate R. Auditory Thalamocortical Transmission Is Reliable and Temporally Precise. J Neurophysiol 2005; 94:2019-30. [PMID: 15928054 DOI: 10.1152/jn.00860.2004] [Citation(s) in RCA: 93] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
We have used the auditory thalamocortical slice to characterize thalamocortical transmission in primary auditory cortex (ACx) of the juvenile mouse. “Minimal” stimulation was used to activate medial geniculate neurons during whole cell recordings from regular-spiking (RS cells; mostly pyramidal) and fast-spiking (FS, putative inhibitory) neurons in ACx layers 3 and 4. Excitatory postsynaptic potentials (EPSPs) were considered monosynaptic (thalamocortical) if they met three criteria: low onset latency variability (jitter), little change in latency with increased stimulus intensity, and little change in latency during a high-frequency tetanus. Thalamocortical EPSPs were reliable (probability of postsynaptic responses to stimulation was ∼1.0) as well as temporally precise (low jitter). Both RS and FS neurons received thalamocortical input, but EPSPs in FS cells had faster rise times, shorter latencies to peak amplitude, and shorter durations than EPSPs in RS cells. Thalamocortical EPSPs depressed during repetitive stimulation at rates (2–300 Hz) consistent with thalamic spike rates in vivo, but at stimulation rates ≥40 Hz, EPSPs also summed to activate N-methyl-d-aspartate receptors and trigger long-lasting polysynaptic activity. We conclude that thalamic inputs to excitatory and inhibitory neurons in ACx activate reliable and temporally precise monosynaptic EPSPs that in vivo may contribute to the precise timing of acoustic-evoked responses.
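The three screening criteria for monosynaptic (thalamocortical) EPSPs named above can be expressed compactly. The cutoffs below are illustrative placeholders, not the values used in the study.

import numpy as np

def is_monosynaptic(onset_latencies_ms, latency_low_intensity_ms, latency_high_intensity_ms,
                    latencies_during_tetanus_ms, jitter_max=0.5, shift_max=1.0):
    # Criterion 1: low trial-to-trial onset-latency jitter.
    jitter = np.std(onset_latencies_ms)
    # Criterion 2: little latency change when stimulus intensity is increased.
    intensity_shift = abs(latency_high_intensity_ms - latency_low_intensity_ms)
    # Criterion 3: little latency change during a high-frequency tetanus.
    tetanus_shift = abs(np.mean(latencies_during_tetanus_ms) - np.mean(onset_latencies_ms))
    return jitter <= jitter_max and intensity_shift <= shift_max and tetanus_shift <= shift_max

# Example: a candidate EPSP with stable, low-jitter onset latencies.
print(is_monosynaptic([3.1, 3.2, 3.0, 3.15], 3.1, 3.2, [3.3, 3.4, 3.2]))   # True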
Affiliation(s)
- Heather J Rose
- Department of Neurobiology and Behavior, University of California, Irvine, 2205 McGaugh Hall, Irvine, California 92697-4550, USA
38
Kraus N, Nicol T. Brainstem origins for cortical ‘what’ and ‘where’ pathways in the auditory system. Trends Neurosci 2005; 28:176-81. [PMID: 15808351 DOI: 10.1016/j.tins.2005.02.003] [Citation(s) in RCA: 152] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
We have developed a data-driven conceptual framework that links two areas of science: the source-filter model of acoustics and cortical sensory processing streams. The source-filter model describes the mechanics behind speech production: the identity of the speaker is carried largely in the vocal cord source and the message is shaped by the ever-changing filters of the vocal tract. Sensory processing streams, popularly called 'what' and 'where' pathways, are well established in the visual system as a neural scheme for separately carrying different facets of visual objects, namely their identity and their position/motion, to the cortex. A similar functional organization has been postulated in the auditory system. Both speaker identity and the spoken message, which are simultaneously conveyed in the acoustic structure of speech, can be disentangled into discrete brainstem response components. We argue that these two response classes are early manifestations of auditory 'what' and 'where' streams in the cortex. This brainstem link forges a new understanding of the relationship between the acoustics of speech and cortical processing streams, unites two hitherto separate areas in science, and provides a model for future investigations of auditory function.
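The source-filter mechanics invoked here can be illustrated with a toy synthesizer: a glottal impulse train (the source, set by the speaker's F0) is passed through a cascade of vocal-tract resonances (the filter, carrying the message). Formant frequencies and bandwidths below approximate an /a/-like vowel and are assumptions for illustration only.

import numpy as np
from scipy.signal import lfilter

fs = 16000
f0 = 120.0                                        # source: fundamental frequency (speaker identity)
n = int(fs * 0.5)                                 # half a second of signal
source = np.zeros(n)
source[::int(fs / f0)] = 1.0                      # glottal impulse train at the pitch period

signal_out = source
for formant, bw in [(700, 80), (1200, 90), (2600, 120)]:   # filter: vowel-like formants (Hz)
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * formant / fs
    signal_out = lfilter([1.0], [1.0, -2 * r * np.cos(theta), r ** 2], signal_out)
signal_out /= np.max(np.abs(signal_out))          # normalized /a/-like vowel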
Affiliation(s)
- Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Frances Searle Building, 2240 Campus Drive, Evanston, IL 60208, USA.
39
Ross B, Herdman AT, Pantev C. Right Hemispheric Laterality of Human 40 Hz Auditory Steady-state Responses. Cereb Cortex 2005; 15:2029-39. [PMID: 15772375 DOI: 10.1093/cercor/bhi078] [Citation(s) in RCA: 137] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
Hemispheric asymmetries during auditory sensory processing were examined using whole-head magnetoencephalographic recordings of auditory evoked responses to monaurally and binaurally presented amplitude-modulated sounds. Laterality indices were calculated for the transient onset responses (P1m and N1m), the transient gamma-band response, the sustained field (SF) and the 40 Hz auditory steady-state response (ASSR). All response components showed laterality toward the hemisphere contralateral to the stimulated ear. In addition, the SF and ASSR showed right hemispheric (RH) dominance. Thus, laterality of sustained response components (SF and ASSR) was distinct from that of transient responses. ASSR and SF are sensitive to stimulus periodicity. Consequently, ASSR and SF likely reflect periodic stimulus attributes and might be relevant for pitch processing based on temporal stimulus regularities. In summary, the results of the present studies demonstrate that asymmetric organization in the cerebral auditory cortex is already established on the level of sensory processing.
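A laterality index of the kind reported above normalizes the right-left amplitude difference; the exact formula is not reproduced in the abstract, so the normalized-difference form below is an assumption, shown only to make the convention concrete.

def laterality_index(right_amp, left_amp):
    # +1 = fully right-lateralized, -1 = fully left-lateralized, 0 = symmetric.
    return (right_amp - left_amp) / (right_amp + left_amp)

# e.g., an ASSR 30% larger over the right hemisphere:
print(laterality_index(1.3, 1.0))   # ~0.13, i.e., right-hemispheric dominance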
Affiliation(s)
- B Ross
- The Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Canada, and Institute for Biomagnetism and Biosignalanalysis, Münster University Hospital, Germany.
40
Takegata R, Mariotto Roggia S, Näätänen R. A paradigm to measure mismatch negativity responses to phonetic and acoustic changes in parallel. Audiol Neurootol 2003; 8:234-41. [PMID: 12811004 DOI: 10.1159/000071063] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/27/2002] [Accepted: 02/20/2003] [Indexed: 11/19/2022] Open
Abstract
The mismatch negativity (MMN) of the event-related potential has been purported to be an objective index of central auditory processing. The present study tested a new paradigm to measure the MMN responses to phonological changes in parallel with those to simple acoustic changes. Stimulus sequences consisted of repetitive consonant-vowel syllables interspersed with infrequent phonetic changes (in place of articulation or voicing) and repetitive sinusoidal tones with occasional acoustic changes (in frequency or duration). The speech and tone stimuli were delivered to the opposite ears (right and left, respectively) at a stimulus onset asynchrony (SOA) of 300 ms. The MMNs elicited in this new paradigm were compared with those measured in a conventional paradigm, in which the speech and tone stimuli were presented in separate sequences at an identical speech-speech or tone-tone SOA (600 ms) of the new paradigm. The MMNs elicited in the two paradigms had a similar morphology and topography, although the MMNs measured with the new paradigm were slightly smaller for 3 out of 4 types of deviants. The results suggest that the new paradigm enables the measurement of reliable MMNs to phonetic changes in parallel with those to acoustic changes.
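MMN amplitudes of the sort compared here are conventionally computed from the deviant-minus-standard difference wave; the sketch below shows that convention on single-channel epoched data. The latency window and scoring choice are generic assumptions, not the paper's analysis settings.

import numpy as np

def mmn_amplitude(epochs, is_deviant, times_ms, window=(100, 250)):
    # epochs: (n_trials, n_samples) single-channel EEG; is_deviant: boolean per trial.
    deviant_avg = epochs[is_deviant].mean(axis=0)
    standard_avg = epochs[~is_deviant].mean(axis=0)
    difference_wave = deviant_avg - standard_avg
    mask = (times_ms >= window[0]) & (times_ms <= window[1])
    return float(difference_wave[mask].mean())     # mean amplitude in the MMN window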
Affiliation(s)
- Rika Takegata
- Cognitive Brain Research Unit, Department of Psychology, University of Helsinki and Helsinki Brain Research Centre, Finland.
41
Abstract
Gap detection threshold (GDT) was measured in adult female pigmented rats (strain Long-Evans) by an operant conditioning technique with food reinforcement, before and after bilateral ablation of the auditory cortex. GDT depended on the frequency spectrum and intensity of the continuously present noise in which the gaps were embedded. The mean GDTs for gaps embedded in white noise or low-frequency noise (upper cutoff frequency 3 kHz) at 70 dB sound pressure level (SPL) were 1.57 ± 0.07 ms and 2.9 ± 0.34 ms, respectively. Decreasing noise intensity from 80 dB SPL to 20 dB SPL produced a significant increase in GDT. The increase in GDT was relatively small in the range of 80-50 dB SPL for white noise and 80-60 dB SPL for low-frequency noise. The minimal noise intensity that enabled GDT measurement was 20 dB SPL for white noise and 30 dB SPL for low-frequency noise; mean GDTs at these intensities were 10.6 ± 3.9 ms and 31.3 ± 4.2 ms, respectively. Bilateral ablation of the primary auditory cortex (complete destruction of the Te1 area and partial destruction of the Te2 and Te3 areas) increased GDT values. By the fifth day after surgery, the rats were again able to detect gaps in the noise; GDTs at this time were 4.2 ± 1.1 ms for white noise and 7.4 ± 3.1 ms for low-frequency noise at 70 dB SPL. GDT recovered during the first month after cortical ablation but still remained slightly higher than in controls (1.8 ± 0.18 ms for white noise, 3.22 ± 0.15 ms for low-frequency noise, P < 0.05). No further decrease in GDT was observed during the subsequent months.
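The stimulus at the heart of this paradigm is simply continuous noise with a brief silent gap. A minimal sketch, with assumed sample rate and durations (the gap length below is near the reported pre-lesion white-noise GDT):

import numpy as np

def noise_with_gap(fs=44100, dur_s=1.0, gap_ms=1.6, gap_onset_s=0.5):
    noise = np.random.randn(int(fs * dur_s))
    start = int(gap_onset_s * fs)
    stop = start + int(gap_ms * fs / 1000.0)
    noise[start:stop] = 0.0                 # the silent gap the animal must detect
    return noise / np.max(np.abs(noise))

stimulus = noise_with_gap()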
Affiliation(s)
- J Syka
- Institute of Experimental Medicine, Academy of Sciences of the Czech Republic, Prague, Czech Republic.
42
Cunningham J, Nicol T, King C, Zecker SG, Kraus N. Effects of noise and cue enhancement on neural responses to speech in auditory midbrain, thalamus and cortex. Hear Res 2002; 169:97-111. [PMID: 12121743 DOI: 10.1016/s0378-5955(02)00344-1] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Speech perception depends on the auditory system's ability to extract relevant acoustic features from competing background noise. Despite widespread acknowledgement that noise exacerbates this process, little is known about the neurophysiologic mechanisms underlying the encoding of speech in noise. Moreover, the relative contribution of different brain nuclei to these processes has not been fully established. To address these issues, aggregate neural responses were recorded from within the inferior colliculus, medial geniculate body and over primary auditory cortex of anesthetized guinea pigs to a synthetic vowel-consonant-vowel syllable /ada/ in quiet and in noise. In noise the onset response to the stop consonant /d/ was reduced or eliminated at each level, to the greatest degree in primary auditory cortex. Acoustic cue enhancements characteristic of 'clear' speech (lengthening the stop gap duration and increasing the intensity of the release burst) improved the neurophysiologic representation of the consonant at each level, especially at the cortex. Finally, the neural encoding of the vowel segment was evident at subcortical levels only, and was more resistant to noise than encoding of the dynamic portion of the consonant (release burst and formant transition). This experiment sheds light on which speech-sound elements are poorly represented in noise and demonstrates how acoustic modifications to the speech signal can improve neural responses in a normal auditory system. Implications for understanding neurophysiologic auditory signal processing in children with perceptual impairments and the design of efficient perceptual training strategies are also discussed.
Affiliation(s)
- Jenna Cunningham
- Electrophysiology Laboratory, House Ear Institute, 2100 West Third Street, Los Angeles, CA 90057, USA.
43
Nicholls MER, Gora J, Stough CKK. Hemispheric asymmetries for visual and auditory temporal processing: an evoked potential study. Int J Psychophysiol 2002; 44:37-55. [PMID: 11852156 DOI: 10.1016/s0167-8760(01)00190-8] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Lateralization for temporal processing was investigated using evoked potentials to an auditory and visual gap detection task in 12 dextral adults. The auditory stimuli consisted of 300-ms bursts of white noise, half of which contained an interruption lasting 4 or 6 ms. The visual stimuli consisted of 130-ms flashes of light, half of which contained a gap lasting 6 or 8 ms. The stimuli were presented bilaterally to both ears or both visual fields. Participants made a forced two-choice discrimination using a bimanual response. Manipulations of the task had no effect on the early evoked components. However, an effect was observed for a late positive component, which occurred approximately 300-400 ms following gap presentation. This component tended to be later and lower in amplitude for the more difficult stimulus conditions. An index of the capacity to discriminate gap from no-gap stimuli was gained by calculating the difference waveform between these conditions. The peak of the difference waveform was delayed for the short-gap stimuli relative to the long-gap stimuli, reflecting decreased levels of difficulty associated with the latter stimuli. Topographic maps of the difference waveforms revealed a prominence over the left hemisphere. The visual stimuli had an occipital parietal focus whereas the auditory stimuli were parietally centered. These results confirm the importance of the left hemisphere for temporal processing and demonstrate that it is not the result of a hemispatial attentional bias or a peripheral sensory asymmetry.
Affiliation(s)
- Michael E R Nicholls
- Department of Psychology, University of Melbourne, Parkville, VIC 3052, Australia.
44
Ross B, Picton TW, Pantev C. Temporal integration in the human auditory cortex as represented by the development of the steady-state magnetic field. Hear Res 2002; 165:68-84. [PMID: 12031517 DOI: 10.1016/s0378-5955(02)00285-x] [Citation(s) in RCA: 135] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The threshold for detecting amplitude modulation (AM) decreases with increasing duration of the AM sound up to several hundred milliseconds. If the auditory evoked steady-state response (SSR) to AM sound is an electrophysiological correlate of AM processing in the human brain, the development of the SSR should follow this course of temporal integration. Magnetoencephalographic recordings of SSR to 40 Hz AM tone-bursts were compared with responses to non-modulated tone-bursts at inter-stimulus intervals (ISIs) of 3, 1, and 0.5 s. Both types of stimuli elicited a transient gamma-band response (GBR), an N1 wave, and a sustained field (SF) during stimulus presentation. The AM stimulus evoked an additional 40 Hz SSR. The N1 amplitude was strongly reduced with shortened ISI, whereas the amplitudes of SSR, GBR, and SF were little affected by the ISI. Magnetic source-localization procedures estimated the generators of the early GBR, the SSR, and the SF to be anterior and medial to the sources of the N1. The sources of the SSR were in primary auditory cortex and separate from GBR sources. The SSR amplitude increased monotonically over a 200 ms period beginning about 40 ms after stimulus onset. The time course of the SSR phase reliably measured the duration of this transition to the steady state. At stimulus offset the SSR ceased within 50 ms. These results indicate that the primary auditory cortex responds immediately to stimulus changes and integrates stimulus features over a period of about 200 ms.
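The amplitude and phase of a 40 Hz SSR like the one analyzed above can be obtained by projecting the averaged response onto the modulation frequency (a single-bin Fourier measure). This is a generic sketch, not the authors' source-space pipeline; sampling rate and epoch handling are assumptions.

import numpy as np

def ssr_amplitude_phase(averaged_response, fs, f_mod=40.0):
    t = np.arange(averaged_response.size) / fs
    z = np.mean(averaged_response * np.exp(-2j * np.pi * f_mod * t))   # complex demodulation at f_mod
    return 2 * np.abs(z), np.angle(z)       # amplitude and phase of the 40 Hz component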
Affiliation(s)
- Bernhard Ross
- Institute of Experimental Audiology, Münster University Hospital, Germany.
45
Eggermont JJ, Ponton CW. The neurophysiology of auditory perception: from single units to evoked potentials. Audiol Neurootol 2002; 7:71-99. [PMID: 12006736 DOI: 10.1159/000057656] [Citation(s) in RCA: 135] [Impact Index Per Article: 6.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
Evoked electric potential and magnetic field studies have the immense benefit that they can be conducted in awake, behaving humans and can be directly correlated with aspects of perception. As such, they are powerful objective indicators of perceptual properties. However, given a set of evoked potential and/or evoked field waveforms and their source locations, obtained for an exhaustive set of stimuli and stimulus contrasts, is it possible to determine blindly, i.e. predict, what the stimuli or stimulus contrasts were? If this can be done with some success, then a useful amount of information resides in scalp-recorded activity for, e.g., the study of auditory speech processing. In this review, we compare neural representations based on single-unit and evoked response activity for vowels and consonant-vowel phonemes with distinctions in formant glides and voice onset time. We conclude that temporal aspects of evoked responses can track some of the dominant response features present in single-unit activity. However, N1 morphology does not reliably predict phonetic identification of stimuli varying in voice onset time, and the reported appearance of a double-peak onset response in aggregate recordings from the auditory cortex does not indicate a cortical correlate of the perception of voicelessness. This suggests that temporal aspects of single-unit population activity are likely not inclusive enough for representation of categorical perception boundaries. In contrast to population activity based on single-unit recording, the ability to accurately localize the sources of scalp-evoked activity is one of the bottlenecks in obtaining an accessible neurophysiological substrate of perception. Attaining this is one of the requisites to arrive at the prospect of blind determination of stimuli on the basis of evoked responses. At the current sophistication level of recording and analysis, evoked responses remain in the realm of extremely sensitive objective indicators of stimulus change or stimulus differences. As such, they are signs of perceptual activity, but not comprehensive representations thereof.
46
Tramo MJ, Shah GD, Braida LD. Functional role of auditory cortex in frequency processing and pitch perception. J Neurophysiol 2002; 87:122-39. [PMID: 11784735 DOI: 10.1152/jn.00104.1999] [Citation(s) in RCA: 89] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Microelectrode studies in nonhuman primates and other mammals have demonstrated that many neurons in auditory cortex are excited by pure tone stimulation only when the tone's frequency lies within a narrow range of the audible spectrum. However, the effects of auditory cortex lesions in animals and humans have been interpreted as evidence against the notion that neuronal frequency selectivity is functionally relevant to frequency discrimination. Here we report psychophysical and anatomical evidence in favor of the hypothesis that fine-grained frequency resolution at the perceptual level relies on neuronal frequency selectivity in auditory cortex. An adaptive procedure was used to measure difference thresholds for pure tone frequency discrimination in five humans with focal brain lesions and eight normal controls. Only the patient with bilateral lesions of primary auditory cortex and surrounding areas showed markedly elevated frequency difference thresholds: Weber fractions for frequency direction discrimination ("higher"-"lower" pitch judgments) were about eightfold higher than Weber fractions measured in patients with unilateral lesions of auditory cortex, auditory midbrain, or dorsolateral frontal cortex; Weber fractions for frequency change discrimination ("same"-"different" pitch judgments) were about seven times higher. In contrast, pure-tone detection thresholds, difference thresholds for pure tone duration discrimination centered at 500 ms, difference thresholds for vibrotactile intensity discrimination, and judgments of visual line orientation were within normal limits or only mildly impaired following bilateral auditory cortex lesions. In light of current knowledge about the physiology and anatomy of primate auditory cortex and a review of previous lesion studies, we interpret the present results as evidence that fine-grained frequency processing at the perceptual level relies on the integrity of finely tuned neurons in auditory cortex.
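The Weber fractions compared above are simply frequency difference thresholds normalized by the reference frequency; a one-line helper, with made-up example values, makes the eightfold elevation concrete.

def weber_fraction(delta_f_hz, reference_hz):
    return delta_f_hz / reference_hz

# e.g., a 4 Hz threshold at a 1000 Hz standard gives 0.004; an eightfold elevation gives ~0.032.
print(weber_fraction(4.0, 1000.0), weber_fraction(32.0, 1000.0))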
Affiliation(s)
- Mark Jude Tramo
- Department of Neurology, Harvard Medical School and Massachusetts General Hospital, Boston, Massachusetts 02114-2696, USA.
47
Zatorre RJ, Belin P, Penhune VB. Structure and function of auditory cortex: music and speech. Trends Cogn Sci 2002; 6:37-46. [PMID: 11849614 DOI: 10.1016/s1364-6613(00)01816-7] [Citation(s) in RCA: 952] [Impact Index Per Article: 43.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
We examine the evidence that speech and musical sounds exploit different acoustic cues: speech is highly dependent on rapidly changing broadband sounds, whereas tonal patterns tend to be slower, although small and precise changes in frequency are important. We argue that the auditory cortices in the two hemispheres are relatively specialized, such that temporal resolution is better in left auditory cortical areas and spectral resolution is better in right auditory cortical areas. We propose that cortical asymmetries might have developed as a general solution to the need to optimize processing of the acoustic environment in both temporal and frequency domains.
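The temporal-versus-spectral trade-off that motivates this proposal can be made concrete: in short-time Fourier analysis, frequency resolution is roughly the reciprocal of the analysis window, so finer time resolution necessarily means coarser frequency resolution. The window lengths below are arbitrary illustrations.

for window_ms in (5, 20, 80):
    delta_f = 1000.0 / window_ms          # approximate frequency resolution (Hz) ~ 1 / window length
    print(f"{window_ms:>3} ms window -> ~{delta_f:.0f} Hz frequency resolution")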
Affiliation(s)
- Robert J. Zatorre
- Montreal Neurological Institute, 3801 University St, Montreal, Québec H3A 2B4, Canada
48
Castillo EM, Simos PG, Davis RN, Breier J, Fitzgerald ME, Papanicolaou AC. Levels of word processing and incidental memory: dissociable mechanisms in the temporal lobe. Neuroreport 2001; 12:3561-6. [PMID: 11733712 DOI: 10.1097/00001756-200111160-00038] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
Word recall is facilitated when deep (e.g. semantic) processing is applied during encoding. This fact raises the question of the existence of specific brain mechanisms supporting different levels of information processing that can modulate incidental memory performance. In this study we obtained spatiotemporal brain activation profiles, using magnetic source imaging, from 10 adult volunteers as they performed a shallow (phonological) processing task and a deep (semantic) processing task. When phonological analysis of the word stimuli into their constituent phonemes was required, activation was largely restricted to the posterior portion of the left superior temporal gyrus (area 22). Conversely, when access to lexical/semantic representations was required, activation was found predominantly in the left middle temporal gyrus and medial temporal cortex. The differential engagement of each mechanism during word encoding was associated with dramatic changes in subsequent incidental memory performance.
Affiliation(s)
- E M Castillo
- Vivian L. Smith Center for Neurologic Research, Department of Neurosurgery, The University of Texas-Houston, Medical School, 6431 John Freeman Suite 304, Houston, TX 77030, USA
49
Poldrack RA, Temple E, Protopapas A, Nagarajan S, Tallal P, Merzenich M, Gabrieli JD. Relations between the neural bases of dynamic auditory processing and phonological processing: evidence from fMRI. J Cogn Neurosci 2001; 13:687-97. [PMID: 11506664 DOI: 10.1162/089892901750363235] [Citation(s) in RCA: 175] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Functional magnetic resonance imaging (fMRI) was used to examine how the brain responds to temporal compression of speech and to determine whether the same regions are also involved in phonological processes associated with reading. Recorded speech was temporally compressed to varying degrees and presented in a sentence verification task. Regions involved in phonological processing were identified in a separate scan using a rhyming judgment task with pseudowords compared to a lettercase judgment task. The left inferior frontal and left superior temporal regions (Broca's and Wernicke's areas), along with the right inferior frontal cortex, demonstrated a convex response to speech compression; their activity increased as compression increased, but then decreased when speech became incomprehensible. Other regions exhibited linear increases in activity as compression increased, including the middle frontal gyri bilaterally. The auditory cortices exhibited compression-related decreases bilaterally, primarily reflecting a decrease in activity when speech became incomprehensible. Rhyme judgments engaged two left inferior frontal gyrus regions (pars triangularis and pars opercularis), of which only the pars triangularis region exhibited significant compression-related activity. These results directly demonstrate that a subset of the left inferior frontal regions involved in phonological processing is also sensitive to transient acoustic features within the range of comprehensible speech.
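Temporal compression of the kind used as a stimulus manipulation here is typically produced with a phase-vocoder time stretch, which shortens duration without shifting pitch. Below is a hedged sketch using librosa; the file name and compression factors are assumptions, and this is not the authors' stimulus-preparation code.

import librosa
import soundfile as sf

y, sr = librosa.load("sentence.wav", sr=None)          # hypothetical recorded sentence
for rate in (1.0, 1.5, 2.0, 3.0):                      # 1.0 = natural rate, 3.0 = three times faster
    compressed = librosa.effects.time_stretch(y, rate=rate)
    sf.write(f"sentence_x{rate:.1f}.wav", compressed, sr)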
Affiliation(s)
- R A Poldrack
- MGH-NMR Center and Harvard Medical School, Charlestown, MA 02129, USA
50
Haist F, Song AW, Wild K, Faber TL, Popp CA, Morris RD. Linking sight and sound: fMRI evidence of primary auditory cortex activation during visual word recognition. Brain Lang 2001; 76:340-350. [PMID: 11247649 DOI: 10.1006/brln.2000.2433] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
We describe two studies that used repetition priming paradigms to investigate brain activity during the reading of single words. Functional magnetic resonance images were collected during a visual lexical decision task in which nonword stimuli were manipulated with regard to phonological properties and compared to genuine English words. We observed a region in left-hemisphere primary auditory cortex linked to a repetition priming effect. The priming effect activity was observed only for stimuli that sound like known words; moreover, this region was sensitive to strategic task differences. Thus, a brain region involved in the most basic aspects of auditory processing appears to be engaged in reading even when there is no environmental oral or auditory component.
Affiliation(s)
- F Haist
- Georgia State University, USA.