1. Glavin CC, Dhar S. The Ins and Outs of Distortion Product Otoacoustic Emission Growth: A Review. J Assoc Res Otolaryngol 2024. [PMID: 39592507; DOI: 10.1007/s10162-024-00969-8]
Abstract
Otoacoustic emissions (OAEs) are low-level signals generated from active processes related to outer hair cell transduction in the cochlea. In current clinical applications, OAEs are typically used to detect the presence or absence of hearing loss. However, their potential extends far beyond hearing screenings. Dr. Glenis Long realized this unfulfilled potential decades ago. She subsequently devoted a large portion of her storied scientific career to understanding OAEs and cochlear mechanics, particularly at the intersection of OAEs and perceptual measures. One specific application of OAEs that has yet to be translated from research laboratories to the clinic is using them to non-invasively characterize cochlear nonlinearity-a hallmark feature of a healthy cochlea-across a wide dynamic range. This can be done by measuring OAEs across input levels to obtain an OAE growth, or input-output (I/O), function. In this review, we describe distortion product OAE (DPOAE) growth and its relation to cochlear nonlinearity and mechanics. We then review biological and measurement factors that are known to influence OAE growth and finish with a discussion of potential applications. Throughout the review, we emphasize Dr. Long's many contributions to the field.
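As a concrete companion to the growth functions described in this abstract, the sketch below computes the 2f1-f2 distortion-product frequency and a toy DPOAE input/output function. The primary ratio, stimulus levels, compressive slope, and noise floor are illustrative assumptions, not values taken from the review.

```python
import numpy as np

# Cubic difference tone for a typical primary ratio f2/f1 = 1.22 (assumed, commonly used in practice)
f2 = 4000.0                  # Hz
f1 = f2 / 1.22               # Hz
f_dp = 2 * f1 - f2           # 2f1 - f2 distortion product frequency
print(f"f1 = {f1:.0f} Hz, f2 = {f2:.0f} Hz, DPOAE at 2f1-f2 = {f_dp:.0f} Hz")

# Toy DPOAE input/output (growth) function: compressive growth of ~0.3 dB/dB
# above a noise floor.  Slope and floor are illustrative, not measured values.
L2 = np.arange(20, 85, 5)                  # primary (f2) levels, dB SPL
noise_floor = -20.0                        # dB SPL
dpoae_level = np.maximum(noise_floor, -35.0 + 0.3 * L2)

for level, dp in zip(L2, dpoae_level):
    print(f"L2 = {int(level):2d} dB SPL -> DPOAE ~ {dp:5.1f} dB SPL")

# The local slope of this I/O function (dB/dB) is one way to quantify
# cochlear compression non-invasively, as discussed in the review.
slope = np.gradient(dpoae_level, L2)
print("I/O slopes (dB/dB):", np.round(slope, 2))
```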
Affiliation(s)
- Courtney Coburn Glavin
- Roxelyn and Richard Pepper Department of Communication Sciences & Disorders, Northwestern University, Evanston, IL, 60208, USA
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, 98105, USA
- Sumitrajit Dhar
- Roxelyn and Richard Pepper Department of Communication Sciences & Disorders, Northwestern University, Evanston, IL, 60208, USA
- Knowles Hearing Center, Evanston, IL, 60208, USA
2. Irino T, Yokota K, Patterson RD. Improving Auditory Filter Estimation by Incorporating Absolute Threshold and a Level-dependent Internal Noise. Trends Hear 2023; 27:23312165231209750. [PMID: 37905400; PMCID: PMC10619342; DOI: 10.1177/23312165231209750]
Abstract
Auditory filter (AF) shape has traditionally been estimated with a combination of a notched-noise (NN) masking experiment and a power spectrum model (PSM) of masking. However, there are several challenges that remain in both the simultaneous and forward masking paradigms. We hypothesized that AF shape estimation would be improved if absolute threshold (AT) and a level-dependent internal noise were explicitly represented in the PSM. To document the interaction between NN threshold and AT in normal hearing (NH) listeners, a large set of NN thresholds was measured at four center frequencies (500, 1000, 2000, and 4000 Hz) with the emphasis on low-level maskers. The proposed PSM, consisting of the compressive gammachirp (cGC) filter and three nonfilter parameters, allowed AF estimation over a wide range of frequencies and levels with fewer coefficients and less error than previous models. The results also provided new insights into the nonfilter parameters. The detector signal-to-noise ratio (K) was found to be constant across signal frequencies, suggesting that no frequency dependence hypothesis is required in the postfiltering process. The ANSI standard "Hearing Level - 0 dB" function, i.e., AT of NH listeners, could be applied to the frequency distribution of the noise floor for the best AF estimation. The introduction of a level-dependent internal noise could mitigate the nonlinear effects that occur in the simultaneous NN masking paradigm. The new PSM improves the applicability of the model, particularly when the sound pressure level of the NN threshold is close to AT.
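For readers unfamiliar with the power spectrum model that this paper extends, the sketch below implements the classical decision rule with an added internal-noise floor: the signal is detected when its power exceeds K times the masker power passed by the auditory filter plus an internal-noise term. The rounded-exponential filter and all parameter values are generic stand-ins, not the compressive gammachirp fit reported by the authors.

```python
import numpy as np

def roex_weight(f, fc, p=25.0):
    """Rounded-exponential (roex) auditory-filter weighting; a common
    stand-in for the compressive gammachirp used in the paper."""
    g = np.abs(f - fc) / fc                  # normalized frequency deviation
    return (1 + p * g) * np.exp(-p * g)

def predicted_threshold(noise_psd, freqs, fc, K=1.0, internal_noise=1e-10):
    """Power-spectrum-model rule: signal power at threshold equals
    K * (masker power through the filter + internal noise)."""
    W = roex_weight(freqs, fc)
    df = freqs[1] - freqs[0]
    masker_power = np.sum(noise_psd * W) * df          # integrate N(f) * W(f) df
    return 10 * np.log10(K * (masker_power + internal_noise) / 1e-12)  # dB re an arbitrary reference

# Illustrative notched-noise masker around a 1-kHz signal (all values assumed)
freqs = np.linspace(200, 1800, 2000)
notch = (freqs > 900) & (freqs < 1100)
noise_psd = np.where(notch, 0.0, 1e-9)     # flat noise power density with a spectral notch

print(f"Predicted threshold: {predicted_threshold(noise_psd, freqs, 1000.0):.1f} dB")
```

The internal-noise term is what keeps the predicted threshold from falling below the listener's absolute threshold when the masker passed by the filter becomes very weak, which is the regime the paper focuses on.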
Affiliation(s)
- Toshio Irino
- Faculty of Systems Engineering, Wakayama University, Japan
- Kenji Yokota
- Faculty of Systems Engineering, Wakayama University, Japan
- Roy D. Patterson
- Department of Physiology, Development and Neuroscience, University of Cambridge, UK
3. Signatures of cochlear processing in neuronal coding of auditory information. Mol Cell Neurosci 2022; 120:103732. [PMID: 35489636; DOI: 10.1016/j.mcn.2022.103732]
Abstract
The vertebrate ear is endowed with remarkable perceptual capabilities. The faintest sounds produce vibrations of magnitudes comparable to those generated by thermal noise and can nonetheless be detected through efficient amplification of small acoustic stimuli. Two mechanisms have been proposed to underlie such sound amplification in the mammalian cochlea: somatic electromotility and active hair-bundle motility. These biomechanical mechanisms may work in concert to tune auditory sensitivity. In addition to amplitude sensitivity, the hearing system shows exceptional frequency discrimination allowing mammals to distinguish complex sounds with great accuracy. For instance, although the wide hearing range of humans encompasses frequencies from 20 Hz to 20 kHz, our frequency resolution extends to one-thirtieth of the interval between successive keys on a piano. In this article, we review the different cochlear mechanisms underlying sound encoding in the auditory system, with a particular focus on the frequency decomposition of sounds. The relation between peak frequency of activation and location along the cochlea - known as tonotopy - arises from multiple gradients in biophysical properties of the sensory epithelium. Tonotopic mapping represents a major organizational principle both in the peripheral hearing system and in higher processing levels and permits the spectral decomposition of complex tones. The ribbon synapses connecting sensory hair cells to auditory afferents and the downstream spiral ganglion neurons are also tuned to process periodic stimuli according to their preferred frequency. Though sensory hair cells and neurons necessarily filter signals beyond a few kHz, many animals can hear well beyond this range. We finally describe how the cochlear structure shapes the neural code for further processing in order to send meaningful information to the brain. Both the phase-locked response of auditory nerve fibers and tonotopy are key to decode sound frequency information and place specific constraints on the downstream neuronal network.
4. Marcenaro B, Leiva A, Dragicevic C, López V, Delano PH. The medial olivocochlear reflex strength is modulated during a visual working memory task. J Neurophysiol 2021; 125:2309-2321. [PMID: 33978484; DOI: 10.1152/jn.00032.2020]
Abstract
Top-down modulation of sensory responses to distracting stimuli by selective attention has been proposed as an important mechanism by which our brain can maintain relevant information during working memory tasks. Previous works in visual working memory (VWM) have reported modulation of neural responses to distracting sounds at different levels of the central auditory pathways. Whether these modulations occur also at the level of the auditory receptor is unknown. Here, we hypothesize that cochlear responses to irrelevant auditory stimuli can be modulated by the medial olivocochlear system during VWM. Twenty-one subjects (13 males, mean age 25.3 yr) with normal hearing performed a visual change detection task with different VWM load conditions (high load = 4 visual objects; low load = 2 visual objects). Auditory stimuli were presented as distractors and allowed the measurement of distortion product otoacoustic emissions (DPOAEs) and scalp auditory evoked potentials. In addition, the medial olivocochlear reflex strength was evaluated by adding contralateral acoustic stimulation. We found larger contralateral acoustic suppression of DPOAEs during the visual working memory period (n = 21) compared with control experiments (n = 10), in which individuals were passively exposed to the same experimental conditions. These results show that during the visual working memory period there is a modulation of the medial olivocochlear reflex strength, suggesting a possible common mechanism for top-down filtering of auditory responses during cognitive processes. NEW & NOTEWORTHY The auditory efferent system has been proposed to function as a biological filter of cochlear responses during selective attention. Here, we recorded electroencephalographic activity and otoacoustic emissions in response to auditory distractors during a visual working memory task in humans. We found that the olivocochlear efferent activity is modulated during the visual working memory period suggesting a common mechanism for suppressing cochlear responses during selective attention and working memory.
Affiliation(s)
- Bruno Marcenaro
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Centro Avanzado de Ingeniería Eléctrica y Electrónica, AC3E, Universidad Técnica Federico Santa María, Valparaiso, Chile; Interdisciplinary Center of Neuroscience, Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
- Alexis Leiva
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Biomedical Neuroscience Institute, BNI, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Constantino Dragicevic
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Biomedical Neuroscience Institute, BNI, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Vladimir López
- Escuela de Psicología, Pontificia Universidad Católica de Chile, Santiago, Chile
- Paul H Delano
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Otolaryngology Department, Hospital Clínico de la Universidad de Chile, Santiago, Chile; Biomedical Neuroscience Institute, BNI, Facultad de Medicina, Universidad de Chile, Santiago, Chile; Centro Avanzado de Ingeniería Eléctrica y Electrónica, AC3E, Universidad Técnica Federico Santa María, Valparaiso, Chile
5. Baby D, Van Den Broucke A, Verhulst S. A convolutional neural-network model of human cochlear mechanics and filter tuning for real-time applications. Nat Mach Intell 2021; 3:134-143. [PMID: 33629031; PMCID: PMC7116797; DOI: 10.1038/s42256-020-00286-8]
Abstract
Auditory models are commonly used as feature extractors for automatic speech-recognition systems or as front-ends for robotics, machine-hearing and hearing-aid applications. Although auditory models can capture the biophysical and nonlinear properties of human hearing in great detail, these biophysical models are computationally expensive and cannot be used in real-time applications. We present a hybrid approach where convolutional neural networks are combined with computational neuroscience to yield a real-time end-to-end model for human cochlear mechanics, including level-dependent filter tuning (CoNNear). The CoNNear model was trained on acoustic speech material and its performance and applicability were evaluated using (unseen) sound stimuli commonly employed in cochlear mechanics research. The CoNNear model accurately simulates human cochlear frequency selectivity and its dependence on sound intensity, an essential quality for robust speech intelligibility at negative speech-to-background-noise ratios. The CoNNear architecture is based on parallel and differentiable computations and has the power to achieve real-time human performance. These unique CoNNear features will enable the next generation of human-like machine-hearing applications.
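As a rough illustration of the approach (not the published CoNNear architecture, layer sizes, or weights), a 1-D convolutional encoder-decoder that maps an audio waveform onto a bank of simulated cochlear-channel waveforms could be sketched as follows; every dimension here is a placeholder.

```python
import torch
import torch.nn as nn

class ToyCochlearCNN(nn.Module):
    """Minimal encoder-decoder sketch: audio in (batch, 1, time) ->
    simulated responses for n_sections cochlear channels (batch, n_sections, time)."""
    def __init__(self, n_sections=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.Conv1d(32, 64, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(64, 32, kernel_size=16, stride=2, padding=7), nn.Tanh(),
            nn.ConvTranspose1d(32, n_sections, kernel_size=16, stride=2, padding=7), nn.Tanh(),
        )

    def forward(self, audio):
        return self.decoder(self.encoder(audio))

model = ToyCochlearCNN()
waveform = torch.randn(1, 1, 2048)           # dummy 2048-sample audio frame
responses = model(waveform)
print(responses.shape)                        # -> torch.Size([1, 64, 2048])
```

Because such a network is built only from convolutions and pointwise nonlinearities, it is differentiable and fast enough for real-time use once trained against a reference biophysical model, which is the core idea the abstract describes.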
Affiliation(s)
- Deepak Baby
- Hearing Technology @ WAVES, Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
- Arthur Van Den Broucke
- Hearing Technology @ WAVES, Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
- Sarah Verhulst
- Hearing Technology @ WAVES, Dept. of Information Technology, Ghent University, 9000 Ghent, Belgium
6. Nechaev DI, Milekhina ON, Tomozova MS, Supin AY. High Ripple-Density Resolution for Discriminating Between Rippled and Nonrippled Signals: Effect of Temporal Processing or Combination Products? Trends Hear 2021; 25:23312165211010163. [PMID: 33926309; PMCID: PMC8111533; DOI: 10.1177/23312165211010163]
Abstract
The goal of the study was to investigate the role of combination products in the higher ripple-density resolution estimates obtained by discrimination between a spectrally rippled and a nonrippled noise signal than that obtained by discrimination between two rippled signals. To attain this goal, a noise band was used to mask the frequency band of expected low-frequency combination products. A three-alternative forced-choice procedure with adaptive ripple-density variation was used. The mean background (unmasked) ripple-density resolution was 9.8 ripples/oct for rippled reference signals and 21.8 ripples/oct for nonrippled reference signals. Low-frequency maskers reduced the ripple-density resolution. For masker levels from -10 to 10 dB re. signal, the ripple-density resolution for nonrippled reference signals was approximately twice as high as that for rippled reference signals. At a masker level as high as 20 dB re. signal, the ripple-density resolution decreased in both discrimination tasks. This result leads to the conclusion that low-frequency combination products are not responsible for the task-dependent difference in ripple-density resolution estimates.
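The rippled stimuli at issue can be synthesized by imposing a cosinusoidal ripple on a log-frequency magnitude spectrum. The construction below is generic, with assumed band limits and ripple depth, and is not the authors' signal-generation code.

```python
import numpy as np

def rippled_noise(fs=44100, dur=0.5, f_lo=1000, f_hi=4000,
                  ripples_per_oct=10.0, depth=1.0, seed=0):
    """Band-limited noise with a cosinusoidal spectral ripple on a log-frequency axis."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spec = np.zeros(len(freqs), dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    # Ripple: 1 + depth * cos(2*pi*density*log2(f/f_lo)) applied to the magnitude spectrum
    envelope = 1.0 + depth * np.cos(2 * np.pi * ripples_per_oct * np.log2(freqs[band] / f_lo))
    phases = rng.uniform(0, 2 * np.pi, band.sum())
    spec[band] = envelope * np.exp(1j * phases)
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))              # normalize peak amplitude

signal = rippled_noise(ripples_per_oct=9.8)               # near the reported rippled-reference limit
reference = rippled_noise(ripples_per_oct=0, depth=0.0)   # nonrippled comparison signal
```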
Affiliation(s)
- Dmitry I. Nechaev
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
- Olga N. Milekhina
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
- Marina S. Tomozova
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
- Alexander Y. Supin
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
7. Olson ES, Strimbu CE. Cochlear mechanics: new insights from vibrometry and Optical Coherence Tomography. Curr Opin Physiol 2020; 18:56-62. [PMID: 33103018; DOI: 10.1016/j.cophys.2020.08.022]
Abstract
The cochlea is a complex biological machine that transduces sound-induced mechanical vibrations to neural signals. Hair cells within the sensory tissue of the cochlea transduce vibrations into electrical signals, and exert electromechanical feedback that enhances the passive frequency separation provided by the cochlea's traveling wave mechanics; this enhancement is termed cochlear amplification. The vibration of the sensory tissue has been studied with many techniques, and the current state of the art is optical coherence tomography (OCT). The OCT technique allows for motion of intra-organ structures to be measured in vivo at many layers within the sensory tissue, at several angles and in previously under-explored species. OCT-based observations are already impacting our understanding of hair cell excitation and cochlear amplification.
Affiliation(s)
- Elizabeth S Olson
- Department of Otolaryngology Head and Neck Surgery, Vagelos College of Physicians and Surgeons, Columbia University, 630 W 168th St, New York, NY 10032; Department of Biomedical Engineering, Columbia University, 351 Engineering Terrace, 1210 Amsterdam Avenue, New York, NY 10027
- C Elliott Strimbu
- Department of Otolaryngology Head and Neck Surgery, Vagelos College of Physicians and Surgeons, Columbia University, 630 W 168th St, New York, NY 10032
8. Altoè A, Shera CA. Nonlinear cochlear mechanics without direct vibration-amplification feedback. Phys Rev Res 2020; 2:013218. [PMID: 33403361; PMCID: PMC7781069; DOI: 10.1103/physrevresearch.2.013218]
Abstract
Recent in vivo recordings from the mammalian cochlea indicate that although the motion of the basilar membrane appears actively amplified and nonlinear only at frequencies relatively close to the peak of the response, the internal motions of the organ of Corti display these same features over a much wider range of frequencies. These experimental findings are not easily explained by the textbook view of cochlear mechanics, in which cochlear amplification is controlled by the motion of the basilar membrane (BM) in a tight, closed-loop feedback configuration. This study shows that a simple phenomenological model of the cochlea inspired by the work of Zweig [J. Acoust. Soc. Am. 138, 1102 (2015)] can account for recent data in mouse and gerbil. In this model, the active forces are regulated indirectly, through the effect of BM motion on the pressure field across the cochlear partition, rather than via direct coupling between active-force generation and BM vibration. The absence of strong vibration-amplification feedback in the cochlea also provides a compelling explanation for the observed intensity invariance of fine time structure in the BM response to acoustic clicks.
Affiliation(s)
- Christopher A. Shera
- Auditory Research Center, Caruso Department of Otolaryngology, University of Southern California, Los Angeles, California 90033, USA
- Department of Physics & Astronomy, University of Southern California, California 90089, USA
9. Two-tone distortion in reticular lamina vibration of the living cochlea. Commun Biol 2020; 3:35. [PMID: 31965040; PMCID: PMC6972885; DOI: 10.1038/s42003-020-0762-2]
Abstract
It has been demonstrated that isolated auditory sensory cells, outer hair cells, can generate distortion products at low frequencies. It remains unknown, however, whether or not motile outer hair cells are able to generate two-tone distortion at high frequencies in living cochleae under the mechanical loads caused by surrounding tissues and fluids. By measuring sub-nanometer vibration directly from the apical ends of outer hair cells using a custom-built heterodyne low-coherence interferometer, here we show outer hair cell-generated two-tone distortion in reticular lamina motion in the living cochlea. Reticular-lamina distortion is significantly greater and occurs over a broader frequency range than that of the basilar membrane. Contrary to expectations, our results indicate that motile outer hair cells are capable of generating two-tone distortion in vivo not only at the locations tuned to primary tones but also at a broad region basal to these locations. Ren et al. used an in-house heterodyne low-coherence interferometer to measure sub-nanometer vibrations, a proxy for distortion products, in living cochleae of gerbils. They were able to locate the generation source of the outer hair cell in the reticular lamina versus the basilar membrane in vivo.
10. Gourévitch B, Mahrt EJ, Bakay W, Elde C, Portfors CV. GABAA receptors contribute more to rate than temporal coding in the IC of awake mice. J Neurophysiol 2020; 123:134-148. [PMID: 31721644; DOI: 10.1152/jn.00377.2019]
Abstract
Speech is our most important form of communication, yet we have a poor understanding of how communication sounds are processed by the brain. Mice make great model organisms to study neural processing of communication sounds because of their rich repertoire of social vocalizations and because they have brain structures analogous to humans, such as the auditory midbrain nucleus inferior colliculus (IC). Although the combined roles of GABAergic and glycinergic inhibition on vocalization selectivity in the IC have been studied to a limited degree, the discrete contributions of GABAergic inhibition have only rarely been examined. In this study, we examined how GABAergic inhibition contributes to shaping responses to pure tones as well as selectivity to complex sounds in the IC of awake mice. In our set of long-latency neurons, we found that GABAergic inhibition extends the evoked firing rate range of IC neurons by lowering the baseline firing rate but maintaining the highest probability of firing rate. GABAergic inhibition also prevented IC neurons from bursting in a spontaneous state. Finally, we found that although GABAergic inhibition shaped the spectrotemporal response to vocalizations in a nonlinear fashion, it did not affect the neural code needed to discriminate vocalizations, based either on spiking patterns or on firing rate. Overall, our results emphasize that even if GABAergic inhibition generally decreases the firing rate, it does so while maintaining or extending the abilities of neurons in the IC to code the wide variety of sounds that mammals are exposed to in their daily lives. NEW & NOTEWORTHY GABAergic inhibition adds nonlinearity to neuronal response curves. This increases the neuronal range of evoked firing rate by reducing baseline firing. GABAergic inhibition prevents bursting responses from neurons in a spontaneous state, reducing noise in the temporal coding of the neuron. This could result in improved signal transmission to the cortex.
Affiliation(s)
- Boris Gourévitch
- Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France; CNRS, France
- Elena J Mahrt
- School of Biological Sciences, Washington State University, Vancouver, Washington
- Warren Bakay
- Institut de l'Audition, Institut Pasteur, INSERM, Sorbonne Université, F-75012 Paris, France
- Cameron Elde
- School of Biological Sciences, Washington State University, Vancouver, Washington
- Christine V Portfors
- School of Biological Sciences, Washington State University, Vancouver, Washington
11. Belkhiria C, Vergara RC, San Martín S, Leiva A, Marcenaro B, Martinez M, Delgado C, Delano PH. Cingulate Cortex Atrophy Is Associated With Hearing Loss in Presbycusis With Cochlear Amplifier Dysfunction. Front Aging Neurosci 2019; 11:97. [PMID: 31080411; PMCID: PMC6497796; DOI: 10.3389/fnagi.2019.00097]
Abstract
Age-related hearing loss is associated with cognitive decline and has been proposed as a risk factor for dementia. However, the mechanisms that relate hearing loss to cognitive decline remain elusive. Here, we propose that the impairment of the cochlear amplifier mechanism is associated with structural brain changes and cognitive impairment. Ninety-six subjects aged over 65 years (63 female and 33 male) were evaluated using brain magnetic resonance imaging, neuropsychological and audiological assessments, including distortion product otoacoustic emissions as a measure of the cochlear amplifier function. All the analyses were adjusted for age, gender, and education. The group with cochlear amplifier dysfunction showed greater brain atrophy in the cingulate cortex and in the parahippocampus. In addition, the atrophy of the cingulate cortex was associated with cognitive impairment in episodic and working memories and in language and visuoconstructive abilities. We conclude that the neural abnormalities observed in presbycusis subjects with cochlear amplifier dysfunction extend beyond the core auditory network and are associated with cognitive decline in multiple domains. These results suggest that a cochlear amplifier dysfunction in presbycusis is an important mechanism relating hearing impairments to brain atrophy in the extended network of effortful hearing.
Affiliation(s)
- Chama Belkhiria
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Rodrigo C Vergara
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Simón San Martín
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Alexis Leiva
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Bruno Marcenaro
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile
- Melissa Martinez
- Department of Neurology and Neurosurgery, Clinical Hospital of the University of Chile, Santiago, Chile
- Carolina Delgado
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Department of Neurology and Neurosurgery, Clinical Hospital of the University of Chile, Santiago, Chile
- Paul H Delano
- Department of Neuroscience, Faculty of Medicine, University of Chile, Santiago, Chile; Biomedical Neuroscience Institute, Faculty of Medicine, University of Chile, Santiago, Chile; Department of Otolaryngology, Clinical Hospital of the University of Chile, Santiago, Chile
12. Walker KM, Gonzalez R, Kang JZ, McDermott JH, King AJ. Across-species differences in pitch perception are consistent with differences in cochlear filtering. eLife 2019; 8:e41626. [PMID: 30874501; PMCID: PMC6435318; DOI: 10.7554/elife.41626]
Abstract
Pitch perception is critical for recognizing speech, music and animal vocalizations, but its neurobiological basis remains unsettled, in part because of divergent results across species. We investigated whether species-specific differences exist in the cues used to perceive pitch and whether these can be accounted for by differences in the auditory periphery. Ferrets accurately generalized pitch discriminations to untrained stimuli whenever temporal envelope cues were robust in the probe sounds, but not when resolved harmonics were the main available cue. By contrast, human listeners exhibited the opposite pattern of results on an analogous task, consistent with previous studies. Simulated cochlear responses in the two species suggest that differences in the relative salience of the two pitch cues can be attributed to differences in cochlear filter bandwidths. The results support the view that cross-species variation in pitch perception reflects the constraints of estimating a sound’s fundamental frequency given species-specific cochlear tuning.
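One way to make the cochlear-filter-bandwidth argument concrete is to compare equivalent rectangular bandwidths (ERBs). The human formula below is the standard Glasberg and Moore approximation; the broadening factor standing in for a species with wider filters is purely hypothetical, not the cochlear model the authors simulated.

```python
import numpy as np

def erb_human(f_hz):
    """Glasberg & Moore (1990) ERB approximation for normal-hearing humans (Hz)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

# Hypothetical broadening factor standing in for a species with wider cochlear filters;
# the true ferret bandwidths in the paper come from the authors' simulations.
BROADENING = 2.0

for f0 in [200.0, 400.0, 800.0]:                      # fundamental frequencies of interest
    spacing = f0                                       # adjacent harmonics are f0 apart
    resolvability_human = spacing / erb_human(8 * f0)  # crude index at the 8th harmonic
    resolvability_broad = spacing / (BROADENING * erb_human(8 * f0))
    print(f"f0={f0:4.0f} Hz  harmonic spacing / ERB: human={resolvability_human:.2f}, "
          f"broader filters={resolvability_broad:.2f}")
```

When the spacing-to-ERB ratio drops well below one, neighboring harmonics interact within a single filter and only temporal envelope cues remain, which is the mechanistic link the abstract draws between filter bandwidth and the species difference in pitch cues.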
Affiliation(s)
- Kerry MM Walker
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Ray Gonzalez
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States
- Joe Z Kang
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, United States; Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, United States
- Andrew J King
- Department of Physiology, Anatomy & Genetics, University of Oxford, Oxford, United Kingdom
13. Bowling T, Meaud J. Forward and Reverse Waves: Modeling Distortion Products in the Intracochlear Fluid Pressure. Biophys J 2019; 114:747-757. [PMID: 29414719; DOI: 10.1016/j.bpj.2017.12.005]
Abstract
Distortion product otoacoustic emissions are sounds that are emitted by the cochlea due to the nonlinearity of the outer hair cells. These emissions play an important role both in clinical settings and research laboratories. However, how distortion products propagate from their generation location to the middle ear remains unclear; whether distortion products propagate as a slow reverse traveling wave, or as a fast compression wave, through the cochlear fluid has been debated. In this article, we evaluate the contributions of the slow reverse wave and fast compression wave to the propagation of intracochlear distortion products using a physiologically based nonlinear model of the gerbil cochlea. This model includes a 3D two-duct model of the intracochlear fluid and a realistic model of outer hair cell biophysics. Simulations of the distortion products in the cochlear fluid pressure in response to a two-tone stimulus are compared with published in vivo experimental results. Whereas experiments have characterized distortion products at a limited number of locations, this model provides a complete description of the fluid pressure at all locations in the cochlear ducts. As in experiments, the spatial variations of the distortion products in the fluid pressure have some similarities with what is observed in response to a pure tone. Analysis of the fluid pressure demonstrates that although a fast wave component is generated, the slow wave component dominates the response. Decomposition of the model simulations into forward and reverse wave components shows that a slow forward propagating wave is generated due to the reflection of the slow reverse wave at the stapes. Wave interference between the reverse and forward components sometimes complicates the analysis of distortion products propagation using measurements at a few locations.
Affiliation(s)
- Thomas Bowling
- G.W.W. School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia
- Julien Meaud
- G.W.W. School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia; Petit Institute for Bioengineering and Bioscience, Georgia Institute of Technology, Atlanta, Georgia
14. Dragicevic CD, Marcenaro B, Navarrete M, Robles L, Delano PH. Oscillatory infrasonic modulation of the cochlear amplifier by selective attention. PLoS One 2019; 14:e0208939. [PMID: 30615632; PMCID: PMC6322828; DOI: 10.1371/journal.pone.0208939]
Abstract
Evidence shows that selective attention to visual stimuli modulates the gain of cochlear responses, probably through auditory-cortex descending pathways. At the cerebral cortex level, amplitude and phase changes of neural oscillations have been proposed as a correlate of selective attention. However, whether sensory receptors are also influenced by the oscillatory network during attention tasks remains unknown. Here, we searched for oscillatory attention-related activity at the cochlear receptor level in humans. We used an alternating visual/auditory selective attention task and measured electroencephalographic activity simultaneously with distortion product otoacoustic emissions (a measure of cochlear receptor-cell activity). In order to search for cochlear oscillatory activity, the otoacoustic emission signal was included as an additional channel in the electroencephalogram analyses. This method allowed us to evaluate dynamic changes in cochlear oscillations within the same range of frequencies (1–35 Hz) in which cognitive effects are commonly observed in electroencephalogram studies. We found the presence of low frequency (<10 Hz) brain and cochlear amplifier oscillations during selective attention to visual and auditory stimuli. Notably, switching between auditory and visual attention modulates the amplitude and the temporal order of brain and inner ear oscillations. These results extend the role of the oscillatory activity network during cognition in neural systems to the receptor level.
Affiliation(s)
- Bruno Marcenaro
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Marcela Navarrete
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Luis Robles
- Programa de Fisiología y Biofísica, ICBM, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Paul H. Delano
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Otolaryngology Department, Clinical Hospital, Universidad de Chile, Santiago, Chile
- Biomedical Neuroscience Institute, Universidad de Chile, Santiago, Chile
15. Albert JT, Kozlov AS. Comparative Aspects of Hearing in Vertebrates and Insects with Antennal Ears. Curr Biol 2017; 26:R1050-R1061. [PMID: 27780047; DOI: 10.1016/j.cub.2016.09.017]
Abstract
The evolution of hearing in terrestrial animals has resulted in remarkable adaptations enabling exquisitely sensitive sound detection by the ear and sophisticated sound analysis by the brain. In this review, we examine several such characteristics, using examples from insects and vertebrates. We focus on two strong and interdependent forces that have been shaping the auditory systems across taxa: the physical environment of auditory transducers on the small, subcellular scale, and the sensory-ecological environment within which hearing happens, on a larger, evolutionary scale. We briefly discuss acoustical feature selectivity and invariance in the central auditory system, highlighting a major difference between insects and vertebrates as well as a major similarity. Through such comparisons within a sensory ecological framework, we aim to emphasize general principles underlying acute sensitivity to airborne sounds.
Affiliation(s)
- Joerg T Albert
- UCL Ear Institute, 332 Gray's Inn Road, London WC1X 8EE, UK
- Andrei S Kozlov
- Department of Bioengineering, Imperial College London, London SW7 2AZ, UK
16. Milekhina ON, Nechaev DI, Popov VV, Supin AY. Compressive Nonlinearity in the Auditory System: Manifestation in the Action of Complex Sound Signals. Biol Bull 2017. [DOI: 10.1134/s1062359017060073]
17. Norman-Haignere S, McDermott JH. Distortion products in auditory fMRI research: Measurements and solutions. Neuroimage 2016; 129:401-413. [PMID: 26827809; DOI: 10.1016/j.neuroimage.2016.01.050]
Abstract
Nonlinearities in the cochlea can introduce audio frequencies that are not present in the sound signal entering the ear. Known as distortion products (DPs), these added frequencies complicate the interpretation of auditory experiments. Sound production systems also introduce distortion via nonlinearities, a particular concern for fMRI research because the Sensimetrics earphones widely used for sound presentation are less linear than most high-end audio devices (due to design constraints). Here we describe the acoustic and neural effects of cochlear and earphone distortion in the context of fMRI studies of pitch perception, and discuss how their effects can be minimized with appropriate stimuli and masking noise. The amplitudes of cochlear and Sensimetrics earphone DPs were measured for a large collection of harmonic stimuli to assess effects of level, frequency, and waveform amplitude. Cochlear DP amplitudes were highly sensitive to the absolute frequency of the DP, and were most prominent at frequencies below 300 Hz. Cochlear DPs could thus be effectively masked by low-frequency noise, as expected. Earphone DP amplitudes, in contrast, were highly sensitive to both stimulus and DP frequency (due to prominent resonances in the earphone's transfer function), and their levels grew more rapidly with increasing stimulus level than did cochlear DP amplitudes. As a result, earphone DP amplitudes often exceeded those of cochlear DPs. Using fMRI, we found that earphone DPs had a substantial effect on the response of pitch-sensitive cortical regions. In contrast, cochlear DPs had a small effect on cortical fMRI responses that did not reach statistical significance, consistent with their lower amplitudes. Based on these findings, we designed a set of pitch stimuli optimized for identifying pitch-responsive brain regions using fMRI. These stimuli robustly drive pitch-responsive brain regions while producing minimal cochlear and earphone distortion, and will hopefully aid fMRI researchers in avoiding distortion confounds.
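The bookkeeping behind the proposed masking strategy can be sketched directly: enumerate the quadratic and cubic difference tones of a harmonic complex and check which fall outside an assumed low-frequency masking band. The component frequencies and cutoff below are illustrative, not the study's stimulus parameters.

```python
import itertools

def distortion_products(components_hz):
    """Quadratic (f2-f1) and cubic (2*f1-f2) difference tones for all component pairs."""
    dps = set()
    for f1, f2 in itertools.combinations(sorted(components_hz), 2):
        dps.add(f2 - f1)                     # quadratic difference tone
        if 2 * f1 - f2 > 0:
            dps.add(2 * f1 - f2)             # cubic difference tone
    return sorted(dps)

# Illustrative harmonic complex: harmonics 3-6 of a 200-Hz fundamental (missing F0)
complex_tone = [600, 800, 1000, 1200]
dps = distortion_products(complex_tone)
added = [f for f in dps if f not in complex_tone]       # frequencies absent from the stimulus

masker_cutoff = 300            # assumed upper edge of a low-frequency masking-noise band, Hz
still_audible = [f for f in added if f > masker_cutoff]

print("All pairwise DPs (Hz):   ", dps)                  # note the DP at the missing fundamental, 200 Hz
print("DPs not in the stimulus: ", added)
print("Not covered by the noise:", still_audible)
```

The example makes the confound explicit: a missing-fundamental complex can acquire physical energy at its fundamental through distortion, so a pitch response could reflect that added component rather than harmonic pattern analysis unless the low-frequency region is masked.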
18. Wang S, Koickal TJ, Hamilton A, Cheung R, Smith LS. A Bio-Realistic Analog CMOS Cochlea Filter With High Tunability and Ultra-Steep Roll-Off. IEEE Trans Biomed Circuits Syst 2015; 9:297-311. [PMID: 25099631; DOI: 10.1109/tbcas.2014.2328321]
Abstract
This paper presents the design and experimental results of a cochlea filter in analog very large scale integration (VLSI) which closely resembles the physiologically measured response of the mammalian cochlea. The filter consists of three specialized sub-filter stages which respectively provide passive response in low frequencies, actively tunable response in mid-band frequencies and ultra-steep roll-off at transition frequencies from pass-band to stop-band. The sub-filters are implemented in balanced ladder topology using floating active inductors. Measured results from the fabricated chip show that a wide range of mid-band tuning, including gain tuning of over 20 dB, Q factor tuning from 2 to 19, and a bio-realistic center frequency shift, is achieved by adjusting only one circuit parameter. In addition, the filter has an ultra-steep roll-off exceeding 300 dB/dec. By changing biasing currents, the filter can be configured to operate with center frequencies from 31 Hz to 8 kHz. The filter is 9th order, consumes 59.5-90.0 μW of power and occupies 0.9 mm² of chip area. A parallel bank of the proposed filter can be used as the front-end in hearing prosthesis devices, speech processors as well as other bio-inspired auditory systems owing to its bio-realistic behavior, low power consumption and small size.
19. Gao PP, Zhang JW, Chan RW, Leong ATL, Wu EX. BOLD fMRI study of ultrahigh frequency encoding in the inferior colliculus. Neuroimage 2015; 114:427-437. [PMID: 25869860; DOI: 10.1016/j.neuroimage.2015.04.007]
Abstract
Many vertebrates communicate with ultrahigh frequency (UHF) vocalizations to limit auditory detection by predators. The mechanisms underlying the neural encoding of such UHF sounds may provide important insights for understanding neural processing of other complex sounds (e.g., human speech). In the auditory system, sound frequency is normally encoded topographically as tonotopy, which, however, contains very limited representation of UHFs in many species. Instead, electrophysiological studies suggested that two neural mechanisms, both exploiting the interactions between frequencies, may contribute to UHF processing. Neurons can exhibit excitatory or inhibitory responses to a tone when another UHF tone is presented simultaneously (combination sensitivity). They can also respond to such stimulation if they are tuned to the frequency of the cochlear-generated distortion products of the two tones, e.g., their difference frequency (cochlear distortion). Both mechanisms are present in an early station of the auditory pathway, the midbrain inferior colliculus (IC). Currently, it is unclear how prevalent the two mechanisms are and how they are functionally integrated in encoding UHFs. This study investigated these issues with large-view BOLD fMRI in the rat auditory system, particularly the IC. UHF vocalizations (above 40 kHz), but not pure tones at similar frequencies (45, 55, 65, and 75 kHz), evoked robust BOLD responses in multiple auditory nuclei, including the IC, reinforcing the sensitivity of the auditory system to UHFs despite limited representation in tonotopy. Furthermore, BOLD responses were detected in the IC when a pair of UHF pure tones was presented simultaneously (45 & 55 kHz, 55 & 65 kHz, 45 & 65 kHz, 45 & 75 kHz). For all four pairs, a cluster of voxels in the ventromedial side always showed the strongest responses, displaying combination sensitivity. Meanwhile, voxels in the dorsolateral side that showed the strongest secondary responses to each pair of UHF pure tones also showed the strongest responses to a pure tone at their difference frequency, suggesting that they are sensitive to cochlear distortion. These BOLD fMRI results indicated that combination sensitivity and cochlear distortion are employed by large but spatially distinctive neuron populations in the IC to represent UHFs. Our imaging findings provided insights for understanding sound feature encoding in the early stage of the auditory pathway.
Affiliation(s)
- Patrick P Gao
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Jevin W Zhang
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Russell W Chan
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Alex T L Leong
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
- Ed X Wu
- Laboratory of Biomedical Imaging and Signal Processing, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Electrical and Electronic Engineering, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Anatomy, The University of Hong Kong, Pokfulam, Hong Kong SAR, China; Department of Medicine, The University of Hong Kong, Pokfulam, Hong Kong SAR, China
20.
Abstract
Uniquely among human senses, hearing is not simply a passive response to stimulation. Our auditory system is instead enhanced by an active process in cochlear hair cells that amplifies acoustic signals several hundred-fold, sharpens frequency selectivity and broadens the ear's dynamic range. Active motility of the mechanoreceptive hair bundles underlies the active process in amphibians and some reptiles; in mammals, this mechanism operates in conjunction with prestin-based somatic motility. Both individual hair bundles and the cochlea as a whole operate near a dynamical instability, the Hopf bifurcation, which accounts for the cardinal features of the active process.
21. Reichenbach T, Hudspeth AJ. The physics of hearing: fluid mechanics and the active process of the inner ear. Rep Prog Phys 2014; 77:076601. [PMID: 25006839; DOI: 10.1088/0034-4885/77/7/076601]
Abstract
Most sounds of interest consist of complex, time-dependent admixtures of tones of diverse frequencies and variable amplitudes. To detect and process these signals, the ear employs a highly nonlinear, adaptive, real-time spectral analyzer: the cochlea. Sound excites vibration of the eardrum and the three miniscule bones of the middle ear, the last of which acts as a piston to initiate oscillatory pressure changes within the liquid-filled chambers of the cochlea. The basilar membrane, an elastic band spiraling along the cochlea between two of these chambers, responds to these pressures by conducting a largely independent traveling wave for each frequency component of the input. Because the basilar membrane is graded in mass and stiffness along its length, however, each traveling wave grows in magnitude and decreases in wavelength until it peaks at a specific, frequency-dependent position: low frequencies propagate to the cochlear apex, whereas high frequencies culminate at the base. The oscillations of the basilar membrane deflect hair bundles, the mechanically sensitive organelles of the ear's sensory receptors, the hair cells. As mechanically sensitive ion channels open and close, each hair cell responds with an electrical signal that is chemically transmitted to an afferent nerve fiber and thence into the brain. In addition to transducing mechanical inputs, hair cells amplify them by two means. Channel gating endows a hair bundle with negative stiffness, an instability that interacts with the motor protein myosin-1c to produce a mechanical amplifier and oscillator. Acting through the piezoelectric membrane protein prestin, electrical responses also cause outer hair cells to elongate and shorten, thus pumping energy into the basilar membrane's movements. The two forms of motility constitute an active process that amplifies mechanical inputs, sharpens frequency discrimination, and confers a compressive nonlinearity on responsiveness. These features arise because the active process operates near a Hopf bifurcation, the generic properties of which explain several key features of hearing. Moreover, when the gain of the active process rises sufficiently in ultraquiet circumstances, the system traverses the bifurcation and even a normal ear actually emits sound. The remarkable properties of hearing thus stem from the propagation of traveling waves on a nonlinear and excitable medium.
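The generic Hopf-bifurcation behavior mentioned at the end of this abstract can be demonstrated with the normal-form oscillator integrated in the frame rotating at the drive frequency; exactly at the bifurcation the steady-state amplitude grows as the cube root of the forcing, i.e. compressively. The step size and forcing values below are arbitrary choices for the sketch.

```python
import numpy as np

def hopf_response(force, mu=0.0, dt=1e-4, n_steps=50000):
    """Steady-state response amplitude of the Hopf normal form,
    integrated in the frame rotating at the drive frequency:
    dw/dt = mu*w - |w|^2*w + F   (resonant forcing, F real)."""
    w = 0.0 + 0.0j
    for _ in range(n_steps):
        w += dt * (mu * w - (abs(w) ** 2) * w + force)
    return abs(w)

forces = [1e3, 1e4, 1e5]                   # arbitrary forcing amplitudes, a decade apart
for F in forces:
    amp = hopf_response(F)
    print(f"F = {F:8.0f} -> |w| = {amp:7.2f}   (analytic F**(1/3) = {F ** (1/3):7.2f})")

# Exactly at the bifurcation (mu = 0) each tenfold increase in forcing raises the
# response by only ~2.15x (10**(1/3)): the compressive nonlinearity of hearing.
```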
22. Tchumatchenko T, Reichenbach T. A cochlear-bone wave can yield a hearing sensation as well as otoacoustic emission. Nat Commun 2014; 5:4160. [PMID: 24954736; PMCID: PMC4083418; DOI: 10.1038/ncomms5160]
Abstract
A hearing sensation arises when the elastic basilar membrane inside the cochlea vibrates. The basilar membrane is typically set into motion through airborne sound that displaces the middle ear and induces a pressure difference across the membrane. A second, alternative pathway exists, however: stimulation of the cochlear bone vibrates the basilar membrane as well. This pathway, referred to as bone conduction, is increasingly used in headphones that bypass the ear canal and the middle ear. Furthermore, otoacoustic emissions, sounds generated inside the cochlea and emitted therefrom, may not involve the usual wave on the basilar membrane, suggesting that additional cochlear structures are involved in their propagation. Here we describe a novel propagation mode within the cochlea that emerges through deformation of the cochlear bone. Through a mathematical and computational approach we demonstrate that this propagation mode can explain bone conduction as well as numerous properties of otoacoustic emissions. Novel headphone technology employs bone conduction to enable hearing, but the mechanism behind this remains unclear. Tchumatchenko and Reichenbach now show that bone conduction and subsequent hearing and otoacoustic emissions are in part due to deformation of the cochlear bone.
Affiliation(s)
- Tatjana Tchumatchenko
- Theory of Neural Dynamics Group, Max Planck Institute for Brain Research, Max-von-Laue Strasse 4, 60438 Frankfurt am Main, Germany
- Tobias Reichenbach
- Department of Bioengineering, Imperial College London, South Kensington Campus, London SW7 2AZ, UK
23.
Abstract
A fundamental structure of sounds encountered in the natural environment is harmonicity. Harmonicity is an essential component of music found in all cultures. It is also a unique feature of vocal communication sounds such as human speech and animal vocalizations. Harmonics in sounds are produced by a variety of acoustic generators and reflectors in the natural environment, including vocal apparatuses of humans and animal species as well as musical instruments of many types. We live in an acoustic world full of harmonicity. Given the widespread existence of harmonicity in many aspects of the hearing environment, it is natural to expect it to be reflected in the evolution and development of the auditory systems of both humans and animals, in particular the auditory cortex. Recent neuroimaging and neurophysiology experiments have identified regions of non-primary auditory cortex in humans and non-human primates that have selective responses to harmonic pitches. Accumulating evidence has also shown that neurons in many regions of the auditory cortex exhibit characteristic responses to harmonically related frequencies beyond the range of pitch. Together, these findings suggest that a fundamental organizational principle of auditory cortex is based on harmonicity. Such an organization likely plays an important role in music processing by the brain. It may also form the basis of the preference for particular classes of music and voice sounds.
Affiliation(s)
- Xiaoqin Wang
- Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Tsinghua-Johns Hopkins Joint Center for Biomedical Engineering Research and Department of Biomedical Engineering, Tsinghua University, Beijing, China
24. Portfors CV, Roberts PD. Mismatch of structural and functional tonotopy for natural sounds in the auditory midbrain. Neuroscience 2013; 258:192-203. [PMID: 24252321; DOI: 10.1016/j.neuroscience.2013.11.012]
Abstract
Neurons in the auditory system are spatially organized in their responses to pure tones, and this tonotopy is expected to predict neuronal responses to more complex sounds such as vocalizations. We presented vocalizations with low-, medium- and high-frequency content to determine if selectivity of neurons in the inferior colliculus (IC) of mice respects the tonotopic spatial structure. Tonotopy in the IC predicts that neurons located in dorsal regions should only respond to low-frequency vocalizations and only neurons located in ventral regions should respond to high-frequency vocalizations. We found that responses to vocalizations were independent of location, and many neurons in the dorsal, low-frequency region of IC responded to high-frequency vocalizations. To test whether this was due to dorsal neurons having broad frequency tuning curves, we convolved each neuron's frequency tuning curve with each vocalization, and found that the tuning curves were not good predictors of the actual neural responses to the vocalizations. We then used a nonlinear model of signal transduction in the cochlea that generates distortion products to predict neural responses to the vocalizations. We found that these predictions more closely matched the actual neural responses. Our findings suggest that the cochlea distorts the frequency representation in vocalizations and some neurons use this distorted representation to encode the vocalizations.
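The linear prediction tested in this study can be written compactly: weight a vocalization's power spectrum by a neuron's frequency tuning curve and sum. The Gaussian tuning curves and toy spectra below are stand-ins rather than data from the paper; they only illustrate why a low-frequency neuron is predicted to respond to an ultrasonic call once distortion adds low-frequency energy.

```python
import numpy as np

freqs = np.linspace(1e3, 100e3, 1000)                 # rough mouse hearing range, Hz

def gaussian_bump(center, bandwidth):
    """Toy Gaussian profile used both as a frequency tuning curve and as a spectrum."""
    return np.exp(-0.5 * ((freqs - center) / bandwidth) ** 2)

def predicted_rate(tuning, spectrum):
    """Simplified linear prediction: spectrum weighted by the tuning curve."""
    return float(np.sum(tuning * spectrum))

low_freq_neuron = gaussian_bump(center=12e3, bandwidth=3e3)

# A purely ultrasonic vocalization (energy near 70 kHz) ...
usv_spectrum = gaussian_bump(center=70e3, bandwidth=5e3)
# ... and the same call after a nonlinearity adds a difference-frequency component at 10 kHz
distorted_spectrum = usv_spectrum + 0.2 * gaussian_bump(center=10e3, bandwidth=2e3)

print("prediction, clean USV:    ", round(predicted_rate(low_freq_neuron, usv_spectrum), 3))
print("prediction, distorted USV:", round(predicted_rate(low_freq_neuron, distorted_spectrum), 3))
```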
Affiliation(s)
- C V Portfors
- School of Biological Sciences, Washington State University, Vancouver, WA 98686, USA
- P D Roberts
- Oregon Health & Science University, Portland, OR 97239, USA
25.
Abstract
To enhance weak sounds while compressing the dynamic intensity range, auditory sensory cells amplify sound-induced vibrations in a nonlinear, intensity-dependent manner. In the course of this process, instantaneous waveform distortion is produced, with two conspicuous kinds of interwoven consequences, the introduction of new sound frequencies absent from the original stimuli, which are audible and detectable in the ear canal as otoacoustic emissions, and the possibility for an interfering sound to suppress the response to a probe tone, thereby enhancing contrast among frequency components. We review how the diverse manifestations of auditory nonlinearity originate in the gating principle of their mechanoelectrical transduction channels; how they depend on the coordinated opening of these ion channels ensured by connecting elements; and their links to the dynamic behavior of auditory sensory cells. This paper also reviews how the complex properties of waves traveling through the cochlea shape the manifestations of auditory nonlinearity. Examination methods based on the detection of distortions open noninvasive windows on the modes of activity of mechanosensitive structures in auditory sensory cells and on the distribution of sites of nonlinearity along the cochlear tonotopic axis, helpful for deciphering cochlear molecular physiology in hearing-impaired animal models. Otoacoustic emissions enable fast tests of peripheral sound processing in patients. The study of auditory distortions also contributes to the understanding of the perception of complex sounds.
Collapse
Affiliation(s)
- Paul Avan
- Laboratory of Neurosensory Biophysics, University of Auvergne, School of Medicine, Clermont-Ferrand, France; Institut National de la Santé et de la Recherche Médicale (INSERM), UMR 1107, Clermont-Ferrand, France; Centre Jean Perrin, Clermont-Ferrand, France; Department of Otolaryngology, County Hospital, Krems an der Donau, Austria; Laboratory of Genetics and Physiology of Hearing, Department of Neuroscience, Institut Pasteur, Paris, France; Collège de France, Genetics and Cell Physiology, Paris, France
| | - Béla Büki
- Laboratory of Neurosensory Biophysics, University of Auvergne, School of Medicine, Clermont-Ferrand, France; Institut National de la Santé et de la Recherche Médicale (INSERM), UMR 1107, Clermont-Ferrand, France; Centre Jean Perrin, Clermont-Ferrand, France; Department of Otolaryngology, County Hospital, Krems an der Donau, Austria; Laboratory of Genetics and Physiology of Hearing, Department of Neuroscience, Institut Pasteur, Paris, France; Collège de France, Genetics and Cell Physiology, Paris, France
| | - Christine Petit
- Laboratory of Neurosensory Biophysics, University of Auvergne, School of Medicine, Clermont-Ferrand, France; Institut National de la Santé et de la Recherche Médicale (INSERM), UMR 1107, Clermont-Ferrand, France; Centre Jean Perrin, Clermont-Ferrand, France; Department of Otolaryngology, County Hospital, Krems an der Donau, Austria; Laboratory of Genetics and Physiology of Hearing, Department of Neuroscience, Institut Pasteur, Paris, France; Collège de France, Genetics and Cell Physiology, Paris, France
| |
Collapse
|
26
|
Reichenbach T, Stefanovic A, Nin F, Hudspeth AJ. Waves on Reissner's membrane: a mechanism for the propagation of otoacoustic emissions from the cochlea. Cell Rep 2013; 1:374-84. [PMID: 22580949 PMCID: PMC3348656 DOI: 10.1016/j.celrep.2012.02.013] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022] Open
Abstract
Sound is detected and converted into electrical signals within the ear. The cochlea not only acts as a passive detector of sound, however, but can also produce tones itself. These otoacoustic emissions are a striking manifestation of the cochlea's mechanical active process. Controversy remains over how these mechanical signals propagate back to the middle ear, from which they are emitted as sound. Here, we combine theoretical and experimental studies to show that mechanical signals can be transmitted by waves on Reissner's membrane, an elastic structure within the cochlea. We develop a theory for wave propagation on Reissner's membrane and its role in otoacoustic emissions. Employing a scanning laser interferometer, we measure traveling waves on Reissner's membrane in the gerbil, guinea pig, and chinchilla. The results are in accord with the theory and thus support a role for Reissner's membrane in otoacoustic emissions.
Collapse
Affiliation(s)
- Tobias Reichenbach
- Howard Hughes Medical Institute and Laboratory of Sensory Neuroscience, The Rockefeller University, New York, NY 10065-6399, USA
| | | | | | | |
Collapse
|
27
|
Vetešník A, Gummer AW. Transmission of cochlear distortion products as slow waves: a comparison of experimental and model data. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2012; 131:3914-34. [PMID: 22559367 DOI: 10.1121/1.3699207] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
There is a long-standing question of how distortion products (DPs) arising from nonlinear amplification processes in the cochlea are transmitted from their generation sites to the stapes. Two hypotheses have been proposed: (1) the slow-wave hypothesis, whereby transmission is via the transverse pressure difference across the cochlear partition, and (2) the fast-wave hypothesis, proposing transmission via longitudinal compression waves. Ren and co-workers addressed this topic experimentally by measuring the spatial vibration pattern of the basilar membrane (BM) in response to two tones of frequencies f(1) and f(2). They interpreted the observed negative phase slopes of the stationary BM vibrations at the cubic distortion frequency f(DP) = 2f(1) - f(2) as evidence for the fast-wave hypothesis. Here, using a physically based model, it is shown that their phase data are actually in accordance with the slow-wave hypothesis. The analysis is based on a frequency-domain formulation of the two-dimensional motion equation of a nonlinear hydrodynamic cochlea model. Application of the analysis to their experimental data suggests that the measurement sites of negative phase slope were located at or apical to the DP generation sites. Therefore, current experimental and theoretical evidence supports the slow-wave hypothesis. Nevertheless, the analysis does not allow rejection of the fast-wave hypothesis.
Collapse
Affiliation(s)
- Aleš Vetešník
- Czech Technical University in Prague, Faculty of Nuclear Sciences and Physical Engineering, Department of Nuclear Chemistry, Břehová 7, 115 19 Prague 1, Czech Republic
| | | |
Collapse
|
28
|
Wotton JM, Ferragamo MJ. A model of anuran auditory periphery reveals frequency-dependent adaptation to be a contributing mechanism for two-tone suppression and amplitude modulation coding. Hear Res 2011; 280:109-21. [PMID: 21565263 DOI: 10.1016/j.heares.2011.04.014] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/11/2011] [Revised: 04/17/2011] [Accepted: 04/26/2011] [Indexed: 11/25/2022]
Abstract
Anuran auditory nerve fibers (ANF) tuned to low frequencies display unusual frequency-dependent adaptation which results in a more phasic response to signals above best frequency (BF) and a more tonic response to signals below it. A network model of the first two layers of the anuran auditory system was used to test the contribution of this dynamic peripheral adaptation to two-tone suppression and amplitude modulation (AM) tuning. The model included a peripheral sandwich component and leaky-integrate-and-fire cells; adaptation was implemented by means of a non-linear increase in threshold weighted by the signal frequency. The results of simulations showed that frequency-dependent adaptation was both necessary and sufficient to produce high-frequency-side two-tone suppression for the ANF and cells of the dorsal medullary nucleus (DMN). It seems likely that both suppression and this dynamic adaptation share a common mechanism. The response of ANFs to AM signals was influenced by adaptation and carrier frequency. Vector strength synchronization to an AM signal improved with increased adaptation. The spike rate response to a carrier at BF was the expected flat function of AM rate. However, for non-BF carrier frequencies the response showed a weak band-pass pattern due to the influence of signal sidebands and adaptation. The DMN received inputs from three ANFs, and when the frequency tuning of the inputs was near the carrier, the rate response was a low-pass or all-pass shape. When most of the inputs were biased above or below the carrier, band-pass responses were observed. Frequency-dependent adaptation enhanced the band-pass tuning for AM rate, particularly when the response of the inputs was predominantly phasic for a given carrier. Different combinations of inputs can therefore bias a DMN cell to be especially well suited to detect specific ranges of AM rates for a particular carrier frequency. Such selection of inputs would clearly be advantageous to the frog in recognizing distinct spectral and temporal parameters in communication calls.
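The core mechanism of the model, a spike threshold whose post-spike increment grows with stimulus frequency, is easy to sketch with a leaky integrate-and-fire cell. All parameter values below are illustrative, not the published ones:

```python
import numpy as np

def lif_with_freq_dependent_adaptation(drive, freq_hz, dt=1e-4, tau_m=0.01,
                                        v_th0=1.0, tau_adapt=0.05, gain=1e-3):
    # After each spike the threshold jumps by an amount proportional to the
    # stimulus frequency, then relaxes back; high frequencies therefore give
    # phasic (onset-dominated) firing and low frequencies give tonic firing.
    v, v_th, spikes = 0.0, v_th0, []
    dth = gain * freq_hz
    for i, current in enumerate(drive):
        v += dt / tau_m * (-v + current)
        v_th += dt / tau_adapt * (v_th0 - v_th)
        if v >= v_th:
            spikes.append(i * dt)
            v = 0.0
            v_th += dth
    return spikes

drive = np.full(5000, 1.5)                       # 0.5 s of constant drive
print(len(lif_with_freq_dependent_adaptation(drive, freq_hz=200)),    # tonic: many spikes
      len(lif_with_freq_dependent_adaptation(drive, freq_hz=2000)))   # phasic: few spikes
```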
Collapse
Affiliation(s)
- J M Wotton
- Program in Neuroscience, Gustavus Adolphus College, 800 West College Ave, St. Peter, MN 56082, USA.
| | | |
Collapse
|
29
|
Rodriguez J, Neely ST. Temporal aspects of suppression in distortion-product otoacoustic emissions. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2011; 129:3082-3089. [PMID: 21568411 PMCID: PMC3108389 DOI: 10.1121/1.3575553] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/28/2011] [Revised: 03/11/2011] [Accepted: 03/15/2011] [Indexed: 05/30/2023]
Abstract
This study examined the time course of cochlear suppression using a tone-burst suppressor to measure the decrement of distortion-product otoacoustic emissions (DPOAEs). Seven normal-hearing subjects with ages ranging from 19 to 28 yr participated in the study. Each subject had audiometric thresholds ≤ 15 dB HL [re ANSI (2004) Specifications for Audiometers] for standard octave and inter-octave frequencies from 0.25 to 8 kHz. DPOAEs were elicited by primary tones with f(2) = 4.0 kHz and f(1) = 3.333 kHz (f(2)/f(1) = 1.2). For the f(2), L(2) combination, suppression was measured for three suppressor frequencies: one suppressor below f(2) (3.834 kHz) and two above f(2) (4.166 and 4.282 kHz), at three levels (55, 60, and 65 dB SPL). DPOAE decrement as a function of L(3) for the tone-burst suppressor was similar to decrements obtained with longer-duration suppressors. Onset and offset latencies were ≤ 4 ms, in agreement with previous physiological findings in auditory-nerve fiber studies that suggest suppression results from a nearly instantaneous compression of the waveform. Persistence of suppression was absent for the below-frequency suppressor (f(3) = 3.834 kHz) and was ≤ 3 ms for the two above-frequency suppressors (f(3) = 4.166 and 4.282 kHz).
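Two small calculations sit behind these measurements: the distortion-product frequency fixed by the primaries, and the decrement, the drop in DPOAE level produced by the suppressor. A worked example with the study's primaries and invented emission levels:

```python
f1, f2 = 3.333, 4.0                      # primary frequencies in kHz (from the study)
f_dp = 2 * f1 - f2                       # cubic distortion product: 2.666 kHz

L_dp_unsuppressed = 5.0                  # DPOAE level without the suppressor (assumed, dB SPL)
L_dp_suppressed = -1.0                   # DPOAE level with the tone-burst suppressor on (assumed)
decrement_db = L_dp_unsuppressed - L_dp_suppressed
print(round(f_dp, 3), decrement_db)      # 2.666 kHz, 6 dB decrement
```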
Collapse
Affiliation(s)
- Joyce Rodriguez
- Starkey Hearing Research Center, 2150 Shattuck Avenue, Suite 408, Berkeley, California 94704, USA.
| | | |
Collapse
|
30
|
Huang J, Holt LL. Evidence for the central origin of lexical tone normalization (L). THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2011; 129:1145-8. [PMID: 21428475 PMCID: PMC3078024 DOI: 10.1121/1.3543994] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/05/2010] [Accepted: 12/29/2010] [Indexed: 05/30/2023]
Abstract
Huang and Holt [(2009). J. Acoust. Soc. Am. 125, 3983-3994] suggest that listeners may dynamically tune lexical tone perception via general auditory sensitivity to the mean f0 of the preceding context, effectively normalizing pitch differences across talkers. The present experiments further examine the effect using the missing-f0 phenomenon as a means of determining the level of auditory processing at which lexical tone normalization occurs. Speech contexts filtered to remove or mask low-frequency f0 energy elicited contrastive context effects. Central, rather than peripheral, auditory processes may be responsible for the context-dependence that has been considered to be lexical tone normalization.
Collapse
Affiliation(s)
- Jingyuan Huang
- Department of Psychology and the Center for the Neural Basis of Cognition, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, Pennsylvania 15213, USA.
| | | |
Collapse
|
31
|
He W, Fridberger A, Porsov E, Ren T. Fast reverse propagation of sound in the living cochlea. Biophys J 2010; 98:2497-505. [PMID: 20513393 DOI: 10.1016/j.bpj.2010.03.003] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/22/2009] [Revised: 02/24/2010] [Accepted: 03/03/2010] [Indexed: 10/19/2022] Open
Abstract
The auditory sensory organ, the cochlea, not only detects but also generates sounds. Such sounds, otoacoustic emissions, are widely used for diagnosis of hearing disorders and to estimate cochlear nonlinearity. However, the fundamental question of how the otoacoustic emission exits the cochlea remains unanswered. In this study, emissions were provoked by two tones with a constant frequency ratio, and measured as vibrations at the basilar membrane and at the stapes, and as sound pressure in the ear canal. The propagation direction and delay of the emission were determined by measuring the phase difference between basilar membrane and stapes vibrations. These measurements show that cochlea-generated sound arrives at the stapes earlier than at the measured basilar membrane location. Data also show that basilar membrane vibration at the emission frequency is similar to that evoked by external tones. These results conflict with the backward-traveling-wave theory and suggest that at low and intermediate sound levels, the emission exits the cochlea predominantly through the cochlear fluids.
Collapse
Affiliation(s)
- Wenxuan He
- Oregon Hearing Research Center, Department of Otolaryngology and Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon, USA
| | | | | | | |
Collapse
|
32
|
|
33
|
Extraction of sources of distortion product otoacoustic emissions by onset-decomposition. Hear Res 2009; 256:21-38. [DOI: 10.1016/j.heares.2009.06.002] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/09/2009] [Revised: 05/28/2009] [Accepted: 06/03/2009] [Indexed: 11/22/2022]
|
34
|
Abstract
By measuring the auditory brainstem response to two musical intervals, the major sixth (E3 and G2) and the minor seventh (E3 and F#2), we found that musicians have a more specialized sensory system for processing behaviorally relevant aspects of sound. Musicians had heightened responses to the harmonics of the upper tone (E), as well as certain combination tones (sum tones) generated by nonlinear processing in the auditory system. In music, the upper note is typically carried by the upper voice, and the enhancement of the upper tone likely reflects musicians' extensive experience attending to the upper voice. Neural phase locking to the temporal periodicity of the amplitude-modulated envelope, which underlies the perception of musical harmony, was also more precise in musicians than nonmusicians. Neural enhancements were strongly correlated with years of musical training, and our findings, therefore, underscore the role that long-term experience with music plays in shaping auditory sensory encoding.
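The phase-locking precision referred to here is conventionally quantified by vector strength, the resultant length of spike phases relative to the stimulus period (1 for perfect locking, 0 for none). A minimal sketch with synthetic spike times:

```python
import numpy as np

def vector_strength(spike_times_s, freq_hz):
    # Resultant length of spike phases: 1.0 = perfect locking, 0.0 = none.
    phases = 2 * np.pi * freq_hz * np.asarray(spike_times_s)
    return np.hypot(np.cos(phases).sum(), np.sin(phases).sum()) / len(spike_times_s)

rng = np.random.default_rng(0)
locked = np.arange(100) * 0.01                        # one spike per 100-Hz cycle
jittered = locked + rng.normal(0, 0.002, size=100)    # 2-ms timing jitter
print(vector_strength(locked, 100), vector_strength(jittered, 100))
```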
Collapse
|
35
|
Turcanu D, Dalhoff E, Müller M, Zenner HP, Gummer AW. Accuracy of velocity distortion product otoacoustic emissions for estimating mechanically based hearing loss. Hear Res 2009; 251:17-28. [PMID: 19233253 DOI: 10.1016/j.heares.2009.02.005] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/18/2008] [Revised: 02/02/2009] [Accepted: 02/02/2009] [Indexed: 10/21/2022]
Abstract
Distortion product otoacoustic emissions (DPOAEs) measured as vibration of the human eardrum have been successfully used to estimate hearing threshold. The estimates have proved more accurate than similar methods using sound-pressure DPOAEs. Nevertheless, the estimation accuracy of the new technique might have been influenced by endogenous noise, such as heart beat, breathing and swallowing. Here, we investigate in an animal model to what extent the accuracy of the threshold estimation technique using velocity-DPOAEs might be improved by reducing noise sources. Velocity-DPOAE I/O functions were measured in normal and hearing-impaired anaesthetized guinea pigs. Hearing loss was either conductive or induced by furosemide injection. The estimated distortion product threshold (EDPT) obtained by extrapolation of the I/O function to the abscissa was found to linearly correlate with the compound action potential threshold at the f(2) frequency, provided that furosemide data were excluded. The standard deviation of the linear regression fit was 6 dB as opposed to 8 dB in humans, suggesting that this accuracy should be achievable in humans with appropriate improvement of signal-to-noise ratio. For the furosemide animals, the CAP threshold relative to the regression line provided an estimate of the functional loss of the inner hair cell system. For mechanical losses in the middle ear and/or cochlear amplifier, DPOAEs measured as velocity of the umbo promise an accuracy of hearing threshold estimation comparable to classical audiometry.
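The EDPT in this abstract comes from fitting the linear portion of the DPOAE I/O function and extrapolating it to the level axis; the intercept with the abscissa is the threshold estimate and the scatter about the fit sets the accuracy. A minimal sketch with invented data points:

```python
import numpy as np

L2 = np.array([35., 40., 45., 50., 55., 60.])               # primary level, dB SPL (invented)
dp_amplitude = np.array([-2., 15., 40., 58., 80., 101.])     # DP magnitude, linear units (invented)

slope, intercept = np.polyfit(L2, dp_amplitude, 1)   # least-squares line through the I/O function
edpt = -intercept / slope                            # level-axis intercept = estimated threshold
residual_sd = (dp_amplitude - (slope * L2 + intercept)).std(ddof=2)
print(round(edpt, 1), "dB SPL EDPT;", round(residual_sd, 1), "fit SD")
```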
Collapse
Affiliation(s)
- Diana Turcanu
- Eberhard-Karls-University Tübingen, Department Otolaryngology, Section of Physiological Acoustics and Communication, Elfriede-Aulhorn-Strasse 5, Tübingen 72076, Germany.
| | | | | | | | | |
Collapse
|
36
|
Maia FCZE, Lavinsky L, Möllerke RO, Duarte MES, Pereira DP, Maia JE. Distortion product otoacoustic emissions in sheep before and after hyperinsulinemia induction. Braz J Otorhinolaryngol 2008; 74:181-7. [PMID: 18568194 PMCID: PMC9442103 DOI: 10.1016/s1808-8694(15)31086-7] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/26/2006] [Accepted: 02/10/2007] [Indexed: 10/25/2022] Open
Abstract
Transient evoked otoacoustic emissions and distortion product otoacoustic emissions have gained significant importance in the identification of cochlear alterations. AIM: To record distortion product thresholds through the monitoring of otoacoustic emissions in normal conditions and in the presence of electrophysiologic changes in cochlear outer hair cells in sheep after hyperinsulinemia induction. MATERIAL AND METHODS: Experimental study, with seven sheep in the control group and seven in the study group. Insulin and glucose concentrations were measured simultaneously with the recording of distortion product otoacoustic emissions every 10 minutes, up to 90 minutes. The control group received saline solution, and the study group received a bolus injection of 0.1 U/kg of regular human insulin. RESULTS: There was a significant reduction in distortion product thresholds in the study group when compared to the control group at frequencies greater than 1,500 Hz and after 60 minutes (P < 0.001). CONCLUSION: This study established distortion product otoacoustic emission thresholds in sheep with constant reproducibility, demonstrating that the method is adequate for use in audiology and otology investigations. Results also fully confirm that acute hyperinsulinemia may cause important changes in these thresholds.
Collapse
|
37
|
Abstract
Normal hearing depends on sound amplification within the mammalian cochlea. The amplification, without which the auditory system is effectively deaf, can be traced to the correct functioning of a group of motile sensory hair cells, the outer hair cells of the cochlea. Acting like motor cells, outer hair cells produce forces that are driven by graded changes in membrane potential. The forces depend on the presence of a motor protein in the lateral membrane of the cells. This protein, known as prestin, is a member of a transporter superfamily SLC26. The functional and structural properties of prestin are described in this review. Whether outer hair cell motility might account for sound amplification at all frequencies is also a critical question and is reviewed here.
Collapse
Affiliation(s)
- Jonathan Ashmore
- Department of Physiology and UCL Ear Institute, University College London, London, United Kingdom.
| |
Collapse
|
38
|
Tan X, Wang X, Yang W, Xiao Z. First spike latency and spike count as functions of tone amplitude and frequency in the inferior colliculus of mice. Hear Res 2007; 235:90-104. [PMID: 18037595 DOI: 10.1016/j.heares.2007.10.002] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/06/2007] [Revised: 10/06/2007] [Accepted: 10/10/2007] [Indexed: 11/27/2022]
Abstract
Spike count (SC), or spike rate, and first spike latency (FSL) are both used to evaluate the responses of neurons to the amplitudes and frequencies of acoustic stimuli. However, it is unclear which is more suitable as a parameter for evaluating the responses of neurons to acoustic amplitudes and frequencies, since systematic comparisons between SC and FSL tuning to different amplitudes and frequencies are scarce. This study systematically compared the precision and stability (i.e., the resolution and the coefficient of variation, CV) of SC and FSL as functions of frequency and amplitude in the inferior colliculus of mice. The results showed that: (1) the SC-amplitude functions were of diverse shape (monotonic, nonmonotonic and saturated), whereas the FSL-amplitude functions were in close registration, in which FSL decreased with increasing amplitude and no paradoxical (an increase in FSL with increasing amplitude) or constant (FSL independent of amplitude) neurons were observed; (2) the discriminability (resolution) of differences in amplitude and frequency based on FSL was higher than that based on SC; (3) the CVs of FSL for low-amplitude stimuli were smaller than those of SC; (4) the fraction of neurons for which BF = CF (within +/-500 Hz) obtained from FSL was higher than that obtained from SC at any sound amplitude. Therefore, SC and FSL may vary independently of each other and represent different parameters of an acoustic stimulus, but FSL, with its precision and stability, appears to be a better parameter than SC for evaluating the response of a neuron to frequency and amplitude in the mouse inferior colliculus.
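The stability comparison reduces to the coefficient of variation of each measure across repeated presentations. A toy example with invented trial data shows why FSL tends to win at low levels:

```python
import numpy as np

def coefficient_of_variation(x):
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()     # trial-to-trial SD relative to the mean

first_spike_latency_ms = [8.1, 8.3, 7.9, 8.2, 8.0, 8.1]   # invented repeated trials
spike_count = [3, 5, 2, 4, 6, 3]
print(coefficient_of_variation(first_spike_latency_ms),    # small CV: stable
      coefficient_of_variation(spike_count))               # larger CV: noisier
```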
Collapse
Affiliation(s)
- Xiaodong Tan
- Physiology Department, Basic Medical School, Southern Medical University, Guangzhou 510515, China
| | | | | | | |
Collapse
|
39
|
Purcell DW, Ross B, Picton TW, Pantev C. Cortical responses to the 2f1-f2 combination tone measured indirectly using magnetoencephalography. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2007; 122:992-1003. [PMID: 17672647 DOI: 10.1121/1.2751250] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/16/2023]
Abstract
The simultaneous presentation of two tones with frequencies f(1) and f(2) causes the perception of several combination tones in addition to the original tones. The most prominent of these are at frequencies f(2)-f(1) and 2f(1)-f(2). This study measured human physiological responses to the 2f(1)-f(2) combination tone at 500 Hz caused by tones of 750 and 1000 Hz with intensities of 65 and 55 dB SPL, respectively. Responses were measured from the cochlea using the distortion product otoacoustic emission (DPOAE), and from the auditory cortex using the 40-Hz steady-state magnetoencephalographic (MEG) response. The perceptual response was assessed by having the participant adjust a probe tone to cause maximal beating ("best-beats") with the perceived combination tone. The cortical response to the combination tone was evaluated in two ways: first by presenting a probe tone with a frequency of 460 Hz at the perceptual best-beats level, resulting in a 40-Hz response because of interaction with the combination tone at 500 Hz, and second by simultaneously presenting two f(1) and f(2) pairs that caused combination tones that would themselves beat at 40 Hz. The 2f(1)-f(2) DPOAE in the external auditory canal had a level of 2.6 (s.d. 12.1) dB SPL. The 40-Hz MEG response in the contralateral cortex had a magnitude of 0.39 (s.d. 0.1) nA m. The perceived level of the combination tone was 44.8 (s.d. 11.3) dB SPL. There were no significant correlations between these measurements. These results indicate that physiological responses to the 2f(1)-f(2) combination tone occur in the human auditory system all the way from the cochlea to the primary auditory cortex. The perceived magnitude of the combination tone is not determined by the measured physiological response at either the cochlea or the cortex.
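The frequencies at work in this paradigm follow from simple arithmetic, reproduced here (the second primary pair is invented to show how two combination tones can be made to beat at 40 Hz):

```python
f1, f2 = 750.0, 1000.0
combination_tone = 2 * f1 - f2            # 2f1 - f2 = 500 Hz
probe = 460.0
print(combination_tone, abs(combination_tone - probe))   # 500-Hz tone, 40-Hz beat

# Second condition: two primary pairs whose combination tones beat at 40 Hz.
f1b, f2b = 780.0, 1020.0                  # assumed second pair, giving 2f1b - f2b = 540 Hz
print(abs((2 * f1b - f2b) - combination_tone))            # 40-Hz beat between the two DPs
```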
Collapse
Affiliation(s)
- David W Purcell
- Rotman Research Institute at Baycrest, Toronto, Ontario, Canada.
| | | | | | | |
Collapse
|
40
|
Wile D, Balaban E. An auditory neural correlate suggests a mechanism underlying holistic pitch perception. PLoS One 2007; 2:e369. [PMID: 17426817 PMCID: PMC1838520 DOI: 10.1371/journal.pone.0000369] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/07/2007] [Accepted: 03/08/2007] [Indexed: 11/18/2022] Open
Abstract
Current theories of auditory pitch perception propose that cochlear place (spectral) and activity timing pattern (temporal) information are somehow combined within the brain to produce holistic pitch percepts, yet the neural mechanisms for integrating these two kinds of information remain obscure. To examine this process in more detail, stimuli made up of three pure tones whose components are individually resolved by the peripheral auditory system, but that nonetheless elicit a holistic, "missing fundamental" pitch percept, were played to human listeners. A technique was used to separate neural timing activity related to individual components of the tone complexes from timing activity related to an emergent feature of the complex (the envelope), and the region of the tonotopic map where information could originate from was simultaneously restricted by masking noise. Pitch percepts were mirrored to a very high degree by a simple combination of component-related and envelope-related neural responses with similar timing that originate within higher-frequency regions of the tonotopic map where stimulus components interact. These results suggest a coding scheme for holistic pitches whereby limited regions of the tonotopic map (spectral places) carrying envelope- and component-related activity with similar timing patterns selectively provide a key source of neural pitch information. A similar mechanism of integration between local and emergent object properties may contribute to holistic percepts in a variety of sensory systems.
Collapse
Affiliation(s)
- Daryl Wile
- Behavioral Neurosciences Program, McGill University, Montreal, Canada
| | - Evan Balaban
- Behavioral Neurosciences Program, McGill University, Montreal, Canada
- Cognitive Neuroscience Sector, Scuola Internazionale Superiore di Studi Avanzati (SISSA), Trieste, Italy
- * To whom correspondence should be addressed. E-mail:
| |
Collapse
|
41
|
He W, Nuttall AL, Ren T. Two-tone distortion at different longitudinal locations on the basilar membrane. Hear Res 2007; 228:112-22. [PMID: 17353104 PMCID: PMC2041923 DOI: 10.1016/j.heares.2007.01.026] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/18/2006] [Revised: 01/25/2007] [Accepted: 01/30/2007] [Indexed: 11/15/2022]
Abstract
When listening to two tones at frequencies f1 and f2 (f2 > f1), one can hear pitches not only at f1 and f2 but also at distortion frequencies f2-f1, (n+1)f1-nf2, and (n+1)f2-nf1 (n = 1, 2, 3...). Such two-tone distortion products (DPs) can also be measured in the ear canal using a sensitive microphone. These ear-generated sounds are called otoacoustic emissions (OAEs). In spite of the common applications of OAEs, the mechanisms by which these emissions travel out of the cochlea remain unclear. In a recent study, the basilar membrane (BM) vibration at 2f1-f2 was measured as a function of the longitudinal location, using a scanning laser interferometer. The data indicated a forward traveling wave and no measurable backward wave. However, that study had a relatively high noise floor and used high stimulus intensities. In the current study, the noise floor of the BM measurement was significantly decreased by using reflective beads on the BM, and the vibration was measured at relatively low intensities at more than one longitudinal location. The results show that the DP phase at a basal location leads the phase at an apical location. The data indicate that the emission travels along the BM from base to apex as a forward traveling wave, and no backward traveling wave was detected under the current experimental conditions.
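The direction-of-travel argument rests on comparing unwrapped DP phase at two places along the BM: if the basal phase leads, the wave has accumulated less delay there and is moving toward the apex. A worked example with invented phase values:

```python
import numpy as np

f_dp = 8000.0                        # distortion-product frequency, Hz (illustrative)
phase_basal = -1.0 * np.pi           # unwrapped DP phase at a basal site (assumed)
phase_apical = -1.6 * np.pi          # unwrapped DP phase at a more apical site (assumed)

# Basal phase leads apical phase, so the base-to-apex propagation delay is positive:
delay_ms = (phase_basal - phase_apical) / (2 * np.pi * f_dp) * 1e3
print(round(delay_ms, 4), "ms base-to-apex delay (forward traveling wave)")
```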
Collapse
Affiliation(s)
- Wenxuan He
- Oregon Hearing Research Center, Department of Otolaryngology and Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon 97239-3098
- Department of Otolaryngology of First Affiliated Hospital, Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China 710061
| | - Alfred L. Nuttall
- Oregon Hearing Research Center, Department of Otolaryngology and Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon 97239-3098
- Kresge Hearing Research Institute, The University of Michigan, Ann Arbor, Michigan 48109
| | - Tianying Ren
- Oregon Hearing Research Center, Department of Otolaryngology and Head & Neck Surgery, Oregon Health & Science University, Portland, Oregon 97239-3098
- Department of Physiology, Medical School, Xi’an Jiaotong University, Xi’an, Shaanxi, P.R. China 710061
- * Corresponding author: T. Ren, Oregon Hearing Research Center, Department of Otolaryngology and Head & Neck Surgery, Oregon Health & Science University, 3181 SW Sam Jackson Park Road, NRC04, Portland, Oregon 97239-3098, United States. Tel.: +1 503 494 2938; Fax: +1 503 494 5656. E-mail address: (T. Ren)
| |
Collapse
|
42
|
Abstract
It is commonly accepted that the cochlea emits sound by a backward traveling wave along the cochlear partition. This belief is mainly based on the observation that the group delay of the otoacoustic emission measured in the ear canal is twice as long as the forward delay. In this study, the otoacoustic emission was measured in the gerbil under anesthesia not only in the ear canal but also at the stapes, eliminating measurement errors arising from unknown external- and middle-ear delays. The emission group delay measured at the stapes was compared with the group delay of basilar membrane vibration at the putative emission-generation site, the forward delay. The results show that the total intracochlear delay of the emission is equal to or smaller than the forward delay. For emissions with an f2/f1 ratio <1.2, the data indicate that the reverse propagation of the emission from its generation site to the stapes is much faster than a forward traveling wave to the f2 location. In addition, the finding that the round-trip delays are smaller than the forward delay implies a basal shift of the emission-generation site, likely explained by the basal shift of primary-tone response peaks with increasing intensity. However, for emissions with f1 ≪ f2, the data cannot distinguish backward traveling waves from compression waves because of the very small f1 delay at the f2 site.
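Group delay, the quantity compared throughout this abstract, is the negative slope of unwrapped phase against angular frequency. A minimal sketch with invented emission-phase data:

```python
import numpy as np

freqs_hz = np.array([7000., 7500., 8000., 8500., 9000.])    # emission frequencies (invented)
phase_rad = np.array([-10.0, -13.1, -16.3, -19.4, -22.6])   # unwrapped emission phase (invented)

slope = np.polyfit(2 * np.pi * freqs_hz, phase_rad, 1)[0]   # d(phase)/d(omega)
group_delay_ms = -slope * 1e3
print(round(group_delay_ms, 2), "ms group delay")           # about 1 ms for these numbers
```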
Collapse
Affiliation(s)
- Tianying Ren
- Oregon Hearing Research Center, Department of Otolaryngology and Head and Neck Surgery, Oregon Health and Science University, 3181 SW Sam Jackson Park Road, NRC04, Portland, OR 97239-3098, USA.
| | | | | | | |
Collapse
|
43
|
He W, Ren T. Backward Propagation of Otoacoustic Emissions. J Otol 2006. [DOI: 10.1016/s1672-2930(06)50007-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
|
44
|
|
45
|
Lopez-Poveda EA. Spectral processing by the peripheral auditory system: facts and models. INTERNATIONAL REVIEW OF NEUROBIOLOGY 2005; 70:7-48. [PMID: 16472630 DOI: 10.1016/s0074-7742(05)70001-5] [Citation(s) in RCA: 26] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/02/2022]
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca 37007, Spain
| |
Collapse
|
46
|
Stoop R, Kern A. Two-tone suppression and combination tone generation as computations performed by the Hopf cochlea. PHYSICAL REVIEW LETTERS 2004; 93:268103. [PMID: 15698025 DOI: 10.1103/physrevlett.93.268103] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/12/2003] [Indexed: 05/24/2023]
Abstract
Recent evidence suggests that the compressive nonlinearity responsible for the extreme dynamic range of the mammalian cochlea is implemented in the form of Hopf amplifiers. Whereas Helmholtz's original concept of the cochlea was that of a frequency analyzer, Hopf amplifiers can be stimulated not only by one frequency but also by neighboring frequencies. To reduce the resulting computational overhead, the mammalian cochlea is aided by two-tone suppression. We show that the laws governing two-tone suppression and the generation of combination tones naturally emerge from the Hopf-cochlea concept. Thus the Hopf concept of the cochlea reproduces not only local properties like the correct frequency response, but additionally accounts for more complex hearing phenomena that may be related to auditory signal computation.
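The compressive behavior attributed to the Hopf amplifier can be read directly off the forced Hopf normal form. In the frame rotating at the characteristic frequency, the steady-state response amplitude r of dz/dt = (mu + i*omega0)z - |z|^2 z + F*exp(i*omega0*t) satisfies mu*r - r^3 + F = 0, so at the bifurcation point (mu = 0) the output grows as the cube root of the input, compressing the input range to one third of its size in decibels. A minimal numerical check (parameter names and values are illustrative):

```python
import numpy as np

def hopf_steady_state_amplitude(F, mu=0.0):
    # Positive real root of -r**3 + mu*r + F = 0 (forcing at the characteristic frequency).
    roots = np.roots([-1.0, 0.0, mu, F])
    real_positive = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0]
    return max(real_positive)

for F in (1e-6, 1e-4, 1e-2):
    print(F, round(hopf_steady_state_amplitude(F), 4))   # 0.01, 0.0464, 0.2154: cube-root growth
```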
Collapse
Affiliation(s)
- R Stoop
- Institute of Neuroinformatics, University/ETH of Zürich, Winterthurerstrasse 190, 8057 Zürich, Switzerland
| | | |
Collapse
|
47
|
Slaven A, Lineton B, Thornton ARD. Properties of Volterra slices of otoacoustic emissions in normal-hearing humans obtained using maximum length sequences of clicks. Hear Res 2003; 179:113-25. [PMID: 12742244 DOI: 10.1016/s0378-5955(03)00101-1] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
Abstract
Nonlinear temporal interaction components of otoacoustic emissions (OAEs) may be investigated by presenting a stream of clicks in maximum length sequences. This yields responses, termed here Volterra slices, which are related to the Volterra kernels of the system. The aim of this study was to obtain normative data on Volterra slices over a range of click rates and stimulus levels. OAEs were recorded in 12 normally hearing adult ears at six rates and four click levels. In addition to the first order kernel, six slices from the Volterra slices of orders 2-5 were extracted from the recordings. It was found that higher order kernel slices could be reliably measured in all 12 ears tested and that they have properties that differ from those of the conventional OAEs. These findings may facilitate the study of cochlear function in both normal and pathological ears.
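Maximum length sequences of the kind used here are generated by a linear feedback shift register with a primitive feedback polynomial; the sequence fixes the click pattern, and its correlation properties let the overlapping responses be untangled into the Volterra slices. A minimal generator (the order and tap positions are one standard choice, picked for illustration):

```python
def maximum_length_sequence(order=5, taps=(5, 3)):
    # Fibonacci linear feedback shift register; taps (5, 3) implement a
    # primitive feedback polynomial, so the period is 2**order - 1 = 31.
    state = [1] * order
    sequence = []
    for _ in range(2**order - 1):
        sequence.append(state[-1])                    # output bit defines click / no click
        feedback = state[taps[0] - 1] ^ state[taps[1] - 1]
        state = [feedback] + state[:-1]
    return sequence

s = maximum_length_sequence()
print(len(s), sum(s))   # 31 slots per period, 16 of them carrying a click
```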
Collapse
Affiliation(s)
- A Slaven
- MRC Institute of Hearing Research, Royal South Hants Hospital, Brinton's Terrace, Off St Mary's Road, Southampton SO14 0YG, UK
| | | | | |
Collapse
|
48
|
Zettner EM, Folsom RC. Transient emission suppression tuning curve attributes in relation to psychoacoustic threshold. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2003; 113:2031-2041. [PMID: 12703714 DOI: 10.1121/1.1560191] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
Ipsilateral suppression characteristics of transiently evoked otoacoustic emissions (TEOAEs) are described in relation to psychoacoustic threshold at 4000 Hz and the presence or absence of spontaneous otoacoustic emissions in 41 adults with normal hearing. TEOAE amplitudes were measured in response to 4000-Hz tonebursts presented in linear blocks at 40 and 50 dB SPL while pure-tone suppressors were introduced at a variety of frequencies and levels ipsilateral to and simultaneously with the tonebursts. Suppressors close to the toneburst frequency were most effective in decreasing the amplitude of the TEOAEs, while those more remote in frequency required significantly greater intensity for a similar amount of suppression. Consequently, characteristic tuning curve shapes were obtained. Tuning-curve tip levels were closely associated with the level of the toneburst, and tip frequencies occurred at or above the toneburst frequency. Tuning-curve widths (Q10), however, varied significantly across subjects with similar psychoacoustic thresholds in quiet determined by a two-alternative forced-choice method. The results suggest that a portion of that variability may be explained by the presence or absence of spontaneous otoacoustic emissions in an individual ear.
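Tuning-curve width is summarized by Q10: the tip frequency divided by the bandwidth measured 10 dB above the tip of the suppression tuning curve. A minimal sketch with an invented tuning curve:

```python
import numpy as np

# Suppressor level (dB SPL) needed for a criterion amount of TEOAE suppression,
# as a function of suppressor frequency; the values are invented.
freq_hz = np.array([2000., 3000., 3500., 4000., 4500., 5000., 6000.])
level_db = np.array([85., 70., 58., 45., 60., 72., 88.])

tip = int(np.argmin(level_db))
tip_freq, tip_level = freq_hz[tip], level_db[tip]
# Frequencies where each flank crosses 10 dB above the tip, by linear interpolation:
f_low = np.interp(tip_level + 10, level_db[:tip + 1][::-1], freq_hz[:tip + 1][::-1])
f_high = np.interp(tip_level + 10, level_db[tip:], freq_hz[tip:])
q10 = tip_freq / (f_high - f_low)
print(tip_freq, round(q10, 2))   # 4000-Hz tip, Q10 around 5.6 for these numbers
```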
Collapse
Affiliation(s)
- Erika M Zettner
- Department of Speech and Hearing Sciences, University of Washington, JG-15, Seattle, Washington 98195, USA.
| | | |
Collapse
|
49
|
Lukashkin AN, Russell IJ. A second, low-frequency mode of vibration in the intact mammalian cochlea. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2003; 113:1544-1550. [PMID: 12656389 DOI: 10.1121/1.1535941] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
The mammalian cochlea is a structure comprising a number of components connected by elastic elements. A mechanical system of this kind is expected to have multiple normal modes of oscillation and associated resonances. The guinea pig cochlear mechanics was probed using distortion components generated in the cochlea close to the place of overlap between two tones presented simultaneously. Otoacoustic emissions at frequencies of the distortion components were recorded in the ear canal. The phase behavior of the emissions reveals the presence of a nonlinear resonance at a frequency about a half octave below that of the high-frequency primary tone. The location of the resonance is level dependent and the resonance shifts to lower frequencies with increasing stimulus intensity. This resonance is thought to be associated with the tectorial membrane. The resonance tends to minimize input to the cochlear receptor cells at frequencies below the high-frequency primary and increases the dynamic load to the stereocilia of the receptor cells at the primary frequency when the tectorial membrane and reticular lamina move in counterphase.
Collapse
Affiliation(s)
- Andrei N Lukashkin
- School of Biological Sciences, University of Sussex, Falmer, Brighton BN1 9QG, United Kingdom.
| | | |
Collapse
|
50
|
Alcántara JI, Moore BCJ. The relative role of beats and combination tones in determining the shapes of masking patterns: II. Hearing-impaired listeners. Hear Res 2002; 165:103-16. [PMID: 12031520 DOI: 10.1016/s0378-5955(02)00291-5] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
Abstract
Masking patterns were measured for hearing-impaired subjects with varying degrees of hearing loss. In one set of conditions, three subjects were tested using narrowband noise ('noise') and sinusoidal ('tone') maskers and narrowband noise signals. The maskers had centre frequencies of 0.25, 0.5, 1.0 and 4.0 kHz and levels of 60, 80 and 100 dB SPL. Masking patterns for both the noise and tone maskers showed irregularities ('dips'), especially for signal frequencies up to 500 Hz above the masker frequency. The irregularities occurred for all masker levels and for all subjects for at least one masker frequency and they occurred for a relatively constant range of masker-signal frequency separations, suggesting that they were the result of beat detection. In another set of conditions, masking patterns were measured using two subjects, for a 2.0-kHz tone masker with a level of 100 dB SPL and tone and noise signals. For the tone masker alone (baseline condition), the masking patterns again exhibited prominent dips above, and sometimes below, the masker frequency. The addition of a lowpass noise to the masker, intended to mask combination tones, had little effect for one subject. For the other subject, who had near-normal absolute thresholds at low frequencies, the noise elevated thresholds for masker-signal frequency separations between 500 and 1500 Hz. For this subject, an extra tone with a frequency equal to the masker-signal frequency separation, added in place of the lowpass noise, had a very similar effect to that produced by the lowpass noise, suggesting that he was detecting a simple difference tone in the baseline condition. The addition of a pair of high-frequency tones (MDI tones - intended to reduce the detectability of beats) to the masker elevated thresholds for signal frequencies from 1500 to 2500 Hz for one subject and from 1500 to 3500 Hz for another subject. The addition of lowpass noise and MDI tones to the masker produced masking patterns very similar to those observed when the MDI tones alone were added to the masker. Overall, the results suggest that the irregularities in the masking patterns were caused mainly by the detection of beats and not by the detection of combination tones.
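The diagnostic difficulty described here is that, for a masker at f1 and a signal at f2, the beat rate and the simple difference tone fall at exactly the same frequency, f2 - f1, while the cubic combination tone falls at 2f1 - f2. A worked example for the 2-kHz masker condition (the signal frequency is chosen for illustration):

```python
f_masker, f_signal = 2000.0, 2500.0                 # Hz; signal frequency illustrative
beat_rate = abs(f_signal - f_masker)                # 500-Hz amplitude fluctuation
simple_difference_tone = f_signal - f_masker        # 500-Hz combination tone (f2 - f1)
cubic_difference_tone = 2 * f_masker - f_signal     # 1500 Hz (2f1 - f2)
print(beat_rate, simple_difference_tone, cubic_difference_tone)
```

This coincidence in frequency is what the added low-pass noise, extra tones, and MDI tones were designed to tease apart.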
Collapse
Affiliation(s)
- José I Alcántara
- Department of Experimental Psychology, University of Cambridge, Downing Street, CB2 3EB, UK
| | | |
Collapse
|