51
Goycoolea M, Mena I, Neubauer S. Is there a difference in activation or in inhibition of cortical auditory centers depending on the ear that is stimulated? Acta Otolaryngol 2009;129:348-53. PMID: 18985461. DOI: 10.1080/00016480802495420.
Abstract
CONCLUSIONS: 1. With auditory stimuli, cortical activation of Brodmann's areas 39 and 40 and inhibition of area 38 are bilateral; inhibitory and excitatory relays play a role in the auditory pathways. 2. Statistically significantly greater activation on the left side in areas 39 and 40, regardless of the stimulated ear, suggests that pure tones are preferentially processed in the left hemisphere. 3. The significant difference in central inhibition depending on which ear is stimulated supports the idea of a leading ear. OBJECTIVES: To determine ipsilateral/contralateral cortical activation and inhibition in response to monaural stimulation with pure tones, and whether the response differs between right- and left-ear stimulation. SUBJECTS AND METHODS: Tc-99m HMPAO brain perfusion SPECT was performed during monaural stimulation with pure tones in 10 volunteers. Ears were tested independently. RESULTS: During auditory stimulation, perfusion increased in Brodmann's areas 39-40 and decreased in area 38 (>2 SD above and below the normal mean, respectively) in both hemispheres, regardless of which side was stimulated. A significantly more intense response was seen on the left than the right in areas 39 and 40. In area 38 there was bilateral inhibition, significantly more intense in response to left- than right-ear stimulation.
Affiliation(s)
- Marcos Goycoolea
- Department of Otorhinolaryngology, Clínica Las Condes, Santiago, Chile.
52
Marshall L, Lapsley Miller JA, Heller LM, Wolgemuth KS, Hughes LM, Smith SD, Kopke RD. Detecting incipient inner-ear damage from impulse noise with otoacoustic emissions. J Acoust Soc Am 2009;125:995-1013. PMID: 19206875. DOI: 10.1121/1.3050304.
Abstract
Audiometric thresholds and otoacoustic emissions (OAEs) were measured in 285 U.S. Marine Corps recruits before and three weeks after exposure to impulse-noise sources from weapons fire and simulated artillery, and in 32 non-noise-exposed controls. At pre-test, audiometric thresholds for all ears were ≤25 dB HL from 0.5 to 3 kHz and ≤30 dB HL at 4 kHz. Ears with low-level or absent OAEs at pre-test were more likely to be classified with significant threshold shifts (STSs) at post-test. A subgroup of 60 noise-exposed volunteers with complete data sets for both ears showed significant decreases in OAE amplitude but no change in audiometric thresholds. STSs and significant emission shifts (SESs) between 2 and 4 kHz in individual ears were identified using criteria based on the standard error of measurement from the control group. There was essentially no association between the occurrence of STS and SES. There were more SESs than STSs, and the group of SES ears contained more STS ears than the group of no-SES ears. The increased sensitivity of OAEs in comparison to audiometric thresholds was shown in all analyses; low-level OAEs indicate an increased risk of future hearing loss by as much as ninefold.
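The shift criterion described here (flagging pre/post changes that exceed the measurement error estimated from a non-exposed control group) can be illustrated with a minimal sketch. The 2.77·SEM cutoff (the 95% smallest detectable change) and the control data below are illustrative assumptions, not the paper's exact criterion or values.

```python
import math

def sem_from_controls(diffs):
    """Standard error of measurement estimated from test-retest
    differences in a non-exposed control group: SD(diff) / sqrt(2)."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return sd / math.sqrt(2)

def significant_shift(pre, post, sem, k=2.77):
    """Flag a shift larger than the smallest detectable change;
    k = 2.77 (= 1.96 * sqrt(2)) gives a 95% criterion."""
    return abs(post - pre) > k * sem

# Hypothetical control test-retest threshold differences (dB)
control_diffs = [1.0, -2.0, 0.5, 1.5, -1.0, 0.0, 2.0, -0.5]
sem = sem_from_controls(control_diffs)
print(round(sem, 2), significant_shift(5.0, 15.0, sem))  # a 10 dB shift is flagged
```

The same construction applies to emission shifts (SES), with the control-group SEM computed from repeated OAE amplitude measurements instead of thresholds.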
Affiliation(s)
- Lynne Marshall
- Naval Submarine Medical Research Laboratory, Groton, Connecticut 06349-5900, USA.
53
Multiple auditory steady state responses (80-101 Hz): effects of ear, gender, handedness, intensity and modulation rate. Ear Hear 2009;30:100-9. DOI: 10.1097/aud.0b013e31819003ef.
54
Hornickel J, Skoe E, Kraus N. Subcortical laterality of speech encoding. Audiol Neurootol 2008;14:198-207. PMID: 19122453. PMCID: PMC2806639. DOI: 10.1159/000188533.
Abstract
It is well established that in the majority of the population, language processing is lateralized to the left hemisphere. Evidence suggests that lateralization is also present in the brainstem. In the current study, the syllable /da/ was presented monaurally to the right and left ears, and electrophysiological responses from the brainstem were recorded in adults with symmetrical interaural click-evoked responses. Responses to right-ear presentation occurred earlier than those to left-ear presentation in two peaks of the frequency-following response (FFR), and the effect approached significance for the third peak of the FFR and the offset peak. Interestingly, there were no differences in interpeak latencies, indicating that the response to right-ear presentation simply occurred earlier over this region. Analyses also showed more robust frequency encoding when stimuli were presented to the right ear than to the left ear. The effect was found for the harmonics of the fundamental that correspond to the first formant of the stimulus, but was not seen in the fundamental frequency range. The results suggest that left lateralization of the processing of acoustic elements important for discriminating speech extends to the auditory brainstem, and that these effects are speech specific.
Affiliation(s)
- Jane Hornickel
- Auditory Neuroscience Lab., Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Evanston, IL, USA.
55
Sininger Y, Cone B. Comment on "Ear asymmetries in middle-ear, cochlear, and brainstem responses in human infants" [J. Acoust. Soc. Am. 123, 1504-1512]. J Acoust Soc Am 2008;124:1401-3. PMID: 19045630. DOI: 10.1121/1.2956481.
Abstract
Following Sininger and Cone-Wesson [Science 305, 1581] and Sininger and Cone-Wesson [Hear. Res. 212, 203-211], Keefe et al. [J. Acoust. Soc. Am. 123(3), 1504-1512] described ear asymmetries in middle-ear, cochlear, and brainstem responses of infants. Keefe et al. state that their data do not support the findings of Sininger and Cone-Wesson [Science 305, 1581], who found asymmetries in evoked otoacoustic emissions and auditory brainstem responses and proposed that stimulus-directed asymmetries in processing may facilitate the development of hemispheric specialization. The Keefe et al. findings, in fact, replicated and extended the findings of Sininger and Cone-Wesson (2004, 2006) and support, rather than refute, their conclusions. Keefe et al. controlled neither the background noise nor the averaging time across test conditions (ear or otoacoustic emission type), and thus their separate analyses of signal and noise magnitude exceed the limitations imposed by the data collection methods.
Affiliation(s)
- Yvonne Sininger
- University of California Los Angeles, 62-132 Center for the Health Sciences, Box 951624, Los Angeles, California 90095-1624, USA
56
Poeppel D, Idsardi WJ, van Wassenhove V. Speech perception at the interface of neurobiology and linguistics. Philos Trans R Soc Lond B Biol Sci 2008;363:1071-86. PMID: 17890189. PMCID: PMC2606797. DOI: 10.1098/rstb.2007.2160.
Abstract
Speech perception consists of a set of computations that take continuously varying acoustic waveforms as input and generate discrete representations that make contact with the lexical representations stored in long-term memory as output. Because the perceptual objects that are recognized by speech perception enter into subsequent linguistic computation, the format that is used for lexical representation and processing fundamentally constrains the speech perceptual processes. Consequently, theories of speech perception must, at some level, be tightly linked to theories of lexical representation. Minimally, speech perception must yield representations that smoothly and rapidly interface with stored lexical items. Adopting the perspective of Marr, we argue and provide neurobiological and psychophysical evidence for the following research programme. First, at the implementational level, speech perception is a multi-time resolution process, with perceptual analyses occurring concurrently on at least two time scales (approx. 20-80 ms, approx. 150-300 ms), commensurate with (sub)segmental and syllabic analyses, respectively. Second, at the algorithmic level, we suggest that perception proceeds on the basis of internal forward models, or uses an 'analysis-by-synthesis' approach. Third, at the computational level (in the sense of Marr), the theory of lexical representation that we adopt is principally informed by phonological research and assumes that words are represented in the mental lexicon in terms of sequences of discrete segments composed of distinctive features. One important goal of the research programme is to develop linking hypotheses between putative neurobiological primitives (e.g. temporal primitives) and those primitives derived from linguistic inquiry, to arrive ultimately at a biologically sensible and theoretically satisfying model of representation and computation in speech.
Affiliation(s)
- David Poeppel
- Department of Linguistics, University of Maryland, College Park, MD 20742, USA.
57
Keefe DH, Gorga MP, Jesteadt W, Smith LM. Ear asymmetries in middle-ear, cochlear, and brainstem responses in human infants. J Acoust Soc Am 2008;123:1504-12. PMID: 18345839. PMCID: PMC2493569. DOI: 10.1121/1.2832615.
Abstract
In 2004, Sininger and Cone-Wesson examined asymmetries in the signal-to-noise ratio (SNR) of otoacoustic emissions (OAE) in infants, reporting that distortion-product (DP)OAE SNR was larger in the left ear, whereas transient-evoked (TE)OAE SNR was larger in the right. They proposed that cochlear and brainstem asymmetries facilitate development of brain-hemispheric specialization for sound processing. Similarly, in 2006 Sininger and Cone-Wesson described ear asymmetries mainly favoring the right ear in infant auditory brainstem responses (ABRs). The present study analyzed 2640 infant responses to further explore these effects. Ear differences in OAE SNR, signal, and noise were evaluated separately and across frequencies (1.5, 2, 3, and 4 kHz), and ABR asymmetries were compared with cochlear asymmetries. Analyses of ear-canal reflectance and admittance showed that asymmetries in middle-ear functioning did not explain cochlear and brainstem asymmetries. Current results are consistent with earlier studies showing right-ear dominance for TEOAE and ABR. Noise levels were higher in the right ear for OAEs and ABRs, causing ear asymmetries in SNR to differ from those in signal level. No left-ear dominance for DPOAE signal was observed. These results do not support a theory that ear asymmetries in cochlear processing mimic hemispheric brain specialization for auditory processing.
Affiliation(s)
- Douglas H Keefe
- Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA.
58
Kompis M, Krebs M, Häusler R. [Verification of normative values for the Swiss version of the Freiburg speech intelligibility test]. HNO 2007;54:445-50. PMID: 16189713. DOI: 10.1007/s00106-005-1337-8.
Abstract
BACKGROUND AND OBJECTIVE: In the Swiss version of the Freiburg speech intelligibility test, five test words from the original German recording that are rarely used in Switzerland have been exchanged. Furthermore, differences in the transfer functions between headphone and loudspeaker presentation are not taken into account during calibration. New settings for the levels of the individual test words in the recommended recording, together with small changes in calibration procedures, prompted a verification of the currently used normative values. PATIENTS AND METHODS: Speech intelligibility was measured in 20 subjects with normal hearing using monosyllabic words and numbers presented via headphones and loudspeakers. RESULTS: On average, 50% speech intelligibility was reached at levels 7.5 dB lower under free-field conditions than with headphone presentation. The average difference between numbers and monosyllabic words was 9.6 dB, considerably lower than the 14 dB of the current normative curves. CONCLUSIONS: Our measurements agree well with the normative values for tests using monosyllabic words and headphones, but not for numbers or free-field measurements.
Affiliation(s)
- M Kompis
- Klinik für Hals-, Nasen- und Ohrenheilkunde, Hals-, Kiefer- und Gesichtschirurgie, Inselspital - Universität Bern.
59
Guinan JJ. Olivocochlear efferents: anatomy, physiology, function, and the measurement of efferent effects in humans. Ear Hear 2007;27:589-607. PMID: 17086072. DOI: 10.1097/01.aud.0000240507.83072.e7.
Abstract
This review covers the basic anatomy and physiology of the olivocochlear reflexes and the use of otoacoustic emissions (OAEs) in humans to monitor the effects of one group, the medial olivocochlear (MOC) efferents. MOC fibers synapse on outer hair cells (OHCs), and activation of these fibers inhibits basilar membrane responses to low-level sounds. This MOC-induced decrease in the gain of the cochlear amplifier is reflected in changes in OAEs. Any OAE can be used to monitor MOC effects on the cochlear amplifier. Each OAE type has its own advantages and disadvantages. The most straightforward technique for monitoring MOC effects is to elicit MOC activity with an elicitor sound contralateral to the OAE test ear. MOC effects can also be monitored using an ipsilateral elicitor of MOC activity, but the ipsilateral elicitor brings additional problems caused by suppression and cochlear slow intrinsic effects. To measure MOC effects accurately, one must ensure that there are no middle-ear-muscle contractions. Although standard clinical middle-ear-muscle tests are not adequate for this, adequate tests can usually be done with OAE-measuring instruments. An additional complication is that most probe sounds also elicit MOC activity, although this does not prevent the probe from showing MOC effects elicited by contralateral sound. A variety of data indicate that MOC efferents help to reduce acoustic trauma and lessen the masking of transients by background noise; for instance, they aid in speech comprehension in noise. However, much remains to be learned about the role of efferents in auditory function. Monitoring MOC effects in humans using OAEs should continue to provide valuable insights into the role of MOC efferents and may also provide clinical benefits.
60
Morris LG, Mallur PS, Roland JT, Waltzman SB, Lalwani AK. Implication of central asymmetry in speech processing on selecting the ear for cochlear implantation. Otol Neurotol 2007;28:25-30. PMID: 17195742. DOI: 10.1097/01.mao.0000244365.24449.00.
Abstract
OBJECTIVE: Emerging evidence in auditory neuroscience suggests that central auditory pathways process speech asymmetrically. In concert with left cortical specialization for speech, a "right-ear advantage" in speech perception has been identified. The purpose of this study was to determine whether this central asymmetry in speech processing has implications for selecting the ear for cochlear implantation. STUDY DESIGN: Retrospective cohort study. SETTING: Academic university medical center. PATIENTS: One hundred one adults with bilateral severe-to-profound sensorineural hearing loss. INTERVENTION: Cochlear implantation with the Nucleus 24 Contour device. MAIN OUTCOME MEASUREMENTS: Patients were divided into two groups according to the ear implanted, and results were compared between left-ear- and right-ear-implanted patients; a further subgroup analysis was limited to right-handed patients. Postoperative improvement on audiograms and scores on speech perception tests (Hearing in Noise Test, City University of New York sentences in quiet and in noise, Consonant-Nucleus-Consonant words and phonemes) at 1 year was compared between groups. Analysis of covariance was used to control for any intergroup differences in preoperative characteristics. RESULTS: The groups were matched in age, duration of hearing loss, duration of hearing aid use, percentage implanted in the better-hearing ear, and preoperative audiologic testing. Postoperatively, there were no differences between left-ear- and right-ear-implanted patients in improvement on speech recognition tests. CONCLUSION: Despite central asymmetry in speech processing, our data do not support a right-ear advantage in speech perception outcomes with cochlear implantation. Therefore, among the many factors in choosing the ear for cochlear implantation, central asymmetry in speech processing does not seem to contribute to postoperative speech recognition outcomes.
Affiliation(s)
- Luc G Morris
- Department of Otolaryngology and Cochlear Implant Center, New York University School of Medicine, New York, New York 10016, USA
61
Dehaene-Lambertz G, Hertz-Pannier L, Dubois J, Mériaux S, Roche A, Sigman M, Dehaene S. Functional organization of perisylvian activation during presentation of sentences in preverbal infants. Proc Natl Acad Sci U S A 2006;103:14240-5. PMID: 16968771. PMCID: PMC1599941. DOI: 10.1073/pnas.0606302103.
Abstract
We examined the functional organization of cerebral activity in 3-month-old infants when they were listening to their native language. Short sentences were presented in a slow event-related functional MRI paradigm. We then parsed the infant's network of perisylvian responsive regions into functionally distinct regions based on their speed of activation and sensitivity to sentence repetition. An adult-like structure of functional MRI response delays was observed along the superior temporal regions, suggesting a hierarchical processing scheme. The fastest responses were recorded in the vicinity of Heschl's gyrus, whereas responses became increasingly slower toward the posterior part of the superior temporal gyrus and toward the temporal poles and inferior frontal regions (Broca's area). Activation in the latter region increased when the sentence was repeated after a 14-s delay, suggesting the early involvement of Broca's area in verbal memory. The fact that Broca's area is active in infants before the babbling stage implies that activity in this region is not the consequence of sophisticated motor learning but, on the contrary, that this region may drive, through interactions with the perceptual system, the learning of the complex motor sequences required for future speech production. Our results point to a complex, hierarchical organization of the human brain in the first months of life, which may play a crucial role in language acquisition in our species.
Affiliation(s)
- Ghislaine Dehaene-Lambertz
- Institut National de la Santé et de la Recherche Médicale, U562, and Commissariat à l'Energie Atomique, 4 Place du Général Leclerc, 91400 Orsay, France.
62
Plante E, Holland SK, Schmithorst VJ. Prosodic processing by children: an fMRI study. Brain Lang 2006;97:332-42. PMID: 16460792. PMCID: PMC1463022. DOI: 10.1016/j.bandl.2005.12.004.
Abstract
Prosodic information in the speech signal carries information about linguistic structure as well as emotional content. Although children are known to use prosodic information from infancy onward to assist linguistic decoding, the brain correlates of this skill in childhood have not yet been the subject of study. Brain activation associated with processing of linguistic prosody was examined in a study of 284 normally developing children between the ages of 5 and 18 years. Children listened to low-pass filtered sentences and were asked to detect those that matched a target sentence. fMRI scanning revealed multiple regions of activation that predicted behavioral performance, independent of age-related changes in activation. Likewise, age-related changes in task activation were found that were independent of differences in task accuracy. The overall pattern of activation is interpreted in light of task demands and factors that may underlie age-related changes in task performance.
63
Firszt JB, Ulmer JL, Gaggl W. Differential representation of speech sounds in the human cerebral hemispheres. Anat Rec A Discov Mol Cell Evol Biol 2006;288:345-57. PMID: 16550560. PMCID: PMC3780356. DOI: 10.1002/ar.a.20295.
Abstract
Various methods in auditory neuroscience have been used to gain knowledge about the structure and function of the human auditory cortical system. Regardless of method, hemispheric differences are evident in the normal processing of speech sounds. This review article, augmented by the authors' own work, provides evidence that asymmetries exist in both cortical and subcortical structures of the human auditory system. Asymmetries are affected by stimulus type; for example, hemispheric activation patterns have been shown to change from right to left cortex as stimuli change from speech to nonspeech. In addition, the presence of noise has differential effects on the contribution of the two hemispheres. Modifications of typical asymmetric cortical patterns occur when pathology is present, as in hearing loss or tinnitus. We show that in response to speech sounds, individuals with unilateral hearing loss lose the normal asymmetric pattern due to both a decrease in contralateral hemispheric activity and an increase in the ipsilateral hemisphere. These studies demonstrate the utility of modern neuroimaging techniques in functional investigations of the human auditory system. Neuroimaging techniques may provide additional insight as to how the cortical auditory pathways change with experience, including sound deprivation (e.g., hearing loss) and sound experience (e.g., training). Such investigations may explain why some populations appear to be more vulnerable to changes in hemispheric symmetry, such as children with learning problems and the elderly.
Affiliation(s)
- Jill B Firszt
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, Missouri 63110, USA.
64
Jedrzejczak WW, Blinowska KJ, Konopka W. Resonant modes in transiently evoked otoacoustic emissions and asymmetries between left and right ear. J Acoust Soc Am 2006;119:2226-31. PMID: 16642837. DOI: 10.1121/1.2178718.
Abstract
The number of single-frequency resonant modes in click-evoked otoacoustic emissions (OAEs) was investigated. The OAE modes were identified by means of an adaptive approximation method based on the matching pursuit (MP) algorithm: the signals were decomposed into basic waveforms drawn from a very large and redundant dictionary of Gabor functions. The study was performed on transiently evoked otoacoustic emissions (TEOAEs) from the left and right ears of 108 subjects. The correspondence between the waveforms found by the procedure and resonant modes was shown, both for simulated noisy data and for single-subject TEOAEs. The decomposition of TEOAEs made the distinction between short- and long-lasting components possible. The number of main resonant modes was assessed by several criteria, which all led to similar results, indicating that the main features of the signal are explained on average by 10 waveforms. The same number of resonant modes accounted for more energy in the right ear than in the left ear.
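The decomposition described in this abstract, matching pursuit over a redundant dictionary of Gabor atoms, can be sketched as follows. This is an illustrative NumPy implementation on a small toy dictionary and a synthetic signal, not the authors' analysis code; the dictionary sizes, atom parameters, and the planted "resonant modes" are assumptions for demonstration.

```python
import numpy as np

def gabor_atom(n, center, width, freq):
    """A discrete Gabor atom: a Gaussian-windowed cosine, unit-normalized."""
    t = np.arange(n)
    g = np.exp(-0.5 * ((t - center) / width) ** 2) * np.cos(2 * np.pi * freq * t)
    return g / np.linalg.norm(g)

def matching_pursuit(signal, dictionary, n_iter=10):
    """Greedy decomposition: at each step pick the atom with the largest
    inner product with the residual, then subtract its projection."""
    residual = signal.astype(float).copy()
    components = []  # (atom_index, coefficient)
    for _ in range(n_iter):
        coeffs = dictionary @ residual          # inner products with all atoms
        k = int(np.argmax(np.abs(coeffs)))
        c = coeffs[k]
        residual -= c * dictionary[k]
        components.append((k, c))
    return components, residual

# Toy redundant dictionary: Gabor atoms at a few centers/widths/frequencies
n = 256
atoms = np.array([gabor_atom(n, c, w, f)
                  for c in (64, 128, 192)
                  for w in (8, 16, 32)
                  for f in (0.05, 0.1, 0.2)])

# Synthetic "emission": two planted resonant modes plus noise
rng = np.random.default_rng(0)
sig = 3.0 * atoms[4] + 1.5 * atoms[22] + 0.05 * rng.standard_normal(n)

comps, res = matching_pursuit(sig, atoms, n_iter=5)
print([k for k, _ in comps[:2]])  # the two planted atoms are recovered first
```

Each selected atom plays the role of one resonant mode; stopping criteria on the residual energy correspond to the "number of main resonant modes" criteria mentioned in the abstract.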
Affiliation(s)
- W Wiktor Jedrzejczak
- Department of Biomedical Physics, Institute of Experimental Physics, Warsaw University, Hoza 69 st., 00-681 Warszawa, Poland
65
Sininger YS, Cone-Wesson B. Lateral asymmetry in the ABR of neonates: evidence and mechanisms. Hear Res 2006;212:203-11. PMID: 16439078. DOI: 10.1016/j.heares.2005.12.003.
Abstract
Lateralized processing of auditory stimuli occurs at the level of the auditory cortex but differences in function between the left and right sides are not clear at lower levels of the auditory system. The current study is designed to (1) investigate asymmetric auditory function at the ear and brainstem in human infants and (2) investigate possible mechanisms for asymmetry at these levels. Study 1 evaluated auditory brainstem responses (ABRs) in response to high and low-level clicks presented to the right and left ears of neonates. Wave V was significantly larger in amplitude and waves III and V were shorter in latency when the ABR was generated in the right ear. Study 2 investigated two possible mechanisms of such asymmetry by (a) using contralateral white noise masking to activate the medial olivocochlear system and (b) increasing stimulus rate to reveal neural conduction and synaptic mechanisms. ABR wave V, evoked by clicks to the left ear, showed a greater reduction in amplitude with contralateral noise than the response evoked from the right ear. No systematic asymmetries in ABR latencies or amplitudes were found with increased stimulus rate. We conclude that (1) the click-evoked ABR in neonates demonstrates asymmetric auditory function with a small but significant right ear advantage and (2) asymmetric activation of the medial olivocochlear system, specifically greater contralateral suppression of ABR produced by the left ear, is a possible mechanism for asymmetry.
Affiliation(s)
- Yvonne S Sininger
- UCLA David Geffen School of Medicine, Division of Head & Neck Surgery, 62-132 Center for Health Science, Box 951624, Los Angeles, CA 90095-1624, United States.
66
Jamison HL, Watkins KE, Bishop DVM, Matthews PM. Hemispheric specialization for processing auditory nonspeech stimuli. Cereb Cortex 2005;16:1266-75. PMID: 16280465. DOI: 10.1093/cercor/bhj068.
Abstract
The left hemisphere specialization for speech perception might arise from asymmetries at more basic levels of auditory processing. In particular, it has been suggested that differences in "temporal" and "spectral" processing exist between the hemispheres. Here we used functional magnetic resonance imaging to test this hypothesis further. Fourteen healthy volunteers listened to sequences of alternating pure tones that varied in the temporal and spectral domains. Increased temporal variation was associated with activation in Heschl's gyrus (HG) bilaterally, whereas increased spectral variation activated the superior temporal gyrus (STG) bilaterally and right posterior superior temporal sulcus (STS). Responses to increased temporal variation were lateralized to the left hemisphere; this left lateralization was greater in posteromedial HG, which is presumed to correspond to the primary auditory cortex. Responses to increased spectral variation were lateralized to the right hemisphere specifically in the anterior STG and posterior STS. These findings are consistent with the notion that the hemispheres are differentially specialized for processing auditory stimuli even in the absence of linguistic information.
Affiliation(s)
- Helen L Jamison
- Centre for Functional Magnetic Resonance Imaging of the Brain, University of Oxford, Oxford, UK.