1. Friedrich B, Joost H, Fedtke T, Verhey JL. Temporal integration of infrasound at threshold. PLoS One 2023; 18:e0289216. PMID: 37523364; PMCID: PMC10389702; DOI: 10.1371/journal.pone.0289216.
Abstract
Infrasounds are signals with frequencies below the classical audio-frequency range, i.e., below 20 Hz. Several previous studies have shown that infrasound is audible as well, provided that the sound level is high enough. Hence, the sound pressure levels at threshold are much higher than those in the classical audio-frequency range. The present study investigates how the duration and the shape of the temporal envelope affect thresholds of infrasound stimuli in quiet. Two envelope types were considered: one where the duration of the steady state was varied (plateau bursts) and one where the number of consecutive onset-offset bursts was varied (multiple bursts). Stimuli were presented monaurally to human listeners by means of a low-distortion sound reproduction system. For both envelope types, thresholds decrease with increasing duration, a phenomenon often referred to as temporal integration. At the same duration, thresholds for plateau-burst stimuli are typically lower than those for multiple-burst stimuli. The data are well described by a slightly modified version of a model that was previously developed to account for temporal integration in the classical audio-frequency range. The results suggest similar mechanisms underlying the detection of stimuli with frequencies in the infrasound and in the classical audio-frequency range. Since the model accounts for the effect of duration and, more generally, the shape of the envelope, it can be used to enhance the comparability of existing and future datasets of thresholds for infrasounds with different temporal stimulus parameters.
Affiliation(s)
- Björn Friedrich: Department of Experimental Audiology, Otto von Guericke University Magdeburg, Magdeburg, Germany
- Holger Joost: Physikalisch-Technische Bundesanstalt, Braunschweig, Germany
- Thomas Fedtke: Physikalisch-Technische Bundesanstalt, Braunschweig, Germany
- Jesko L Verhey: Department of Experimental Audiology, Otto von Guericke University Magdeburg, Magdeburg, Germany
2. Heil P, Friedrich B. How to define thresholds for level and interaural-level-difference discrimination: Insights from scedasticities and distributions. Hear Res 2023; 436:108837. PMID: 37413706; DOI: 10.1016/j.heares.2023.108837.
Abstract
Sensitivity to changes in the stimulus level at one or at both ears and to changes in the interaural level difference (ILD) between the two ears has been studied widely. Several different definitions of threshold and, for one of them, two different ways of averaging single-listener thresholds have been used (i.e., arithmetically and geometrically), but it is unclear which definition and which way of averaging is most suitable. Here, we addressed this issue by examining which of the differently defined thresholds yielded the highest degree of homoscedasticity (homogeneity of the variance). We also examined how closely the differently defined thresholds followed the normal distribution. We measured thresholds from a large number of human listeners as a function of stimulus duration in six experimental conditions, using an adaptive two-alternative forced-choice paradigm. Thresholds defined as the logarithm of the ratio of the intensities or amplitudes of the target and the reference stimulus (i.e., as the difference in their levels or ILDs; the most commonly used definition) were clearly heteroscedastic. Log-transformation of these latter thresholds, as sometimes performed, did not result in homoscedasticity. Thresholds defined as the logarithm of the Weber fraction for stimulus intensity and thresholds defined as the logarithm of the Weber fraction for stimulus amplitude (the most rarely used definition) were consistent with homoscedasticity, but the latter were closer to the ideal case. Thresholds defined as the logarithm of the Weber fraction for stimulus amplitude also followed the normal distribution most closely. The discrimination thresholds should therefore be expressed as the logarithm of the Weber fraction for stimulus amplitude and be averaged arithmetically across listeners. Other implications are discussed, and the obtained differences between the thresholds in different conditions are compared to the literature.
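The three candidate definitions compared above are just different transformations of the same pair of target and reference amplitudes. As a minimal illustrative sketch (the function names and the size of the increment are our own choices, not the paper's notation):

```python
import math

def level_difference_db(a_target, a_ref):
    """Threshold as the difference in levels, i.e., the log of the ratio of
    intensities (amplitudes squared): the most commonly used definition."""
    return 20.0 * math.log10(a_target / a_ref)

def log_weber_fraction_intensity(a_target, a_ref):
    """Threshold as log10 of the Weber fraction for intensity,
    (I_t - I_r) / I_r, with I proportional to amplitude squared."""
    return math.log10((a_target ** 2 - a_ref ** 2) / a_ref ** 2)

def log_weber_fraction_amplitude(a_target, a_ref):
    """Threshold as log10 of the Weber fraction for amplitude,
    (a_t - a_r) / a_r: the definition the study found most suitable."""
    return math.log10((a_target - a_ref) / a_ref)

# For a hypothetical just-detectable 10% amplitude increment, the three
# definitions give numerically different threshold values:
a_ref, a_target = 1.0, 1.1
print(round(level_difference_db(a_target, a_ref), 3))        # ~0.828 dB
print(round(log_weber_fraction_intensity(a_target, a_ref), 3))
print(round(log_weber_fraction_amplitude(a_target, a_ref), 3))
```

Because the definitions are nonlinear transformations of one another, variance that is homogeneous under one definition need not be under another, which is why the choice of definition matters for how thresholds should be averaged across listeners.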
Affiliation(s)
- Peter Heil: Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Björn Friedrich: Department of Experimental Audiology, Otto von Guericke University, Magdeburg, Germany
3. Silva BG, Gonzaga D, Rocha CH, Gomes RF, Moreira RR, Bistafa SR, Samelli AG. Noise Exposure, Headsets, and Auditory and Nonauditory Symptoms in Call Center Operators. Am J Audiol 2022; 31:112-125. PMID: 35050696; DOI: 10.1044/2021_aja-21-00088.
Abstract
PURPOSE: This study evaluates the exposure of call center operators (CCOs) to occupational noise, its association with auditory and nonauditory symptoms, and the feasibility of monaural and binaural headsets.
METHOD: We measured the noise exposure sound pressure levels (SPLs) with the microphone-in-real-ear technique and administered a questionnaire on auditory/nonauditory symptoms and headset preference.
RESULTS: We assessed 79 CCOs with normal hearing. Overall, 98.7% of the participants reported at least one auditory symptom, and 88.6% reported at least one nonauditory symptom after using the headset. We found significant associations between the headset volume setting and the number of auditory and nonauditory symptoms, and between sharp increases in sound level and tinnitus. The microphone-in-real-ear diffuse-field-related SPLs with monaural headsets (85.5 dBA) were significantly higher than those with binaural headsets (83.1 dBA). Binaural headsets were the preference of 84.8% of the subjects. The SPLs of the binaural headsets were significantly lower than those of the monaural headsets in the subjects who preferred the binaural headsets.
CONCLUSIONS: CCOs with normal hearing reported auditory and nonauditory symptoms, highlighting the need for attention and further investigation. The binaural headsets were preferable, as they were associated with a lower SPL and a higher call quality.
SUPPLEMENTAL MATERIAL: https://doi.org/10.23641/asha.18361463
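A caveat when comparing group levels like the 85.5 dBA vs. 83.1 dBA figures above: sound pressure levels in dB must be averaged on an energy (intensity) basis, not arithmetically. A small sketch with made-up per-operator exposure values (not data from the study):

```python
import math

def energy_average_spl(levels_db):
    """Energy (power) average of sound pressure levels in dB: convert each
    level to a relative intensity, take the mean, and convert back to dB."""
    mean_intensity = sum(10 ** (level / 10) for level in levels_db) / len(levels_db)
    return 10 * math.log10(mean_intensity)

monaural_spls = [84.0, 86.0, 86.5]   # hypothetical per-operator levels, dBA
binaural_spls = [82.0, 83.5, 83.8]

print(round(energy_average_spl(monaural_spls), 1))   # ~85.6 dBA
print(round(energy_average_spl(binaural_spls), 1))   # ~83.2 dBA
```

Note that the energy average slightly exceeds the arithmetic mean (85.5 dBA for the first list) whenever the individual levels differ, because louder exposures dominate the intensity sum.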
Affiliation(s)
- Bárbara Gabriela Silva: Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, School of Medicine (FMUSP), University of São Paulo, SP, Brazil
- Denise Gonzaga: Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, School of Medicine (FMUSP), University of São Paulo, SP, Brazil
- Clayton Henrique Rocha: Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, School of Medicine (FMUSP), University of São Paulo, SP, Brazil
- Raquel Fornaziero Gomes: Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, School of Medicine (FMUSP), University of São Paulo, SP, Brazil
- Sylvio R. Bistafa: Department of Mechanical Engineering, Polytechnic School, University of São Paulo, SP, Brazil
- Alessandra Giannella Samelli: Department of Physical Therapy, Speech-Language-Hearing Sciences, and Occupational Therapy, School of Medicine (FMUSP), University of São Paulo, SP, Brazil
4. Alzaher M, Vannson N, Deguine O, Marx M, Barone P, Strelnikov K. Brain plasticity and hearing disorders. Rev Neurol (Paris) 2021; 177:1121-1132. PMID: 34657730; DOI: 10.1016/j.neurol.2021.09.004.
Abstract
Permanently changed sensory stimulation can modify functional connectivity patterns in the healthy brain and in pathology. In pathology, these adaptive modifications of the brain are referred to as compensation, and the resulting configurations of functional connectivity are called compensatory plasticity. The variability and extent of auditory deficits due to impairments of the hearing system determine the related brain reorganization and rehabilitation. In this review, we consider cross-modal and intra-modal brain plasticity related to bilateral and unilateral hearing loss and their restoration using cochlear implantation. Cross-modal brain plasticity may have both beneficial and detrimental effects in hearing disorders. It has a beneficial effect when it improves a patient's adaptation to the visuo-auditory environment. However, the occupation of the auditory cortex by visual functions may hinder the restoration of hearing with cochlear implants. Regarding intra-modal plasticity, the loss of interhemispheric asymmetry in asymmetric hearing loss is deleterious for auditory spatial localization. Research on brain plasticity in hearing disorders can advance our understanding of brain plasticity in general and improve patient rehabilitation through prognostic, evidence-based approaches from cognitive neuroscience combined with objective post-rehabilitation neuroimaging biomarkers of this plasticity.
Affiliation(s)
- M Alzaher: Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- N Vannson: Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- O Deguine: Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France; Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
- M Marx: Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France; Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
- P Barone: Université de Toulouse, UPS, centre de recherche cerveau et cognition, Toulouse, France; CNRS, CerCo, France
- K Strelnikov: Faculté de médecine de Purpan, CHU Toulouse, université de Toulouse 3, France
5. Heil P, Mohamed ESI, Matysiak A. Towards a unifying basis of auditory thresholds: Thresholds for multicomponent stimuli. Hear Res 2021; 410:108349. PMID: 34530356; DOI: 10.1016/j.heares.2021.108349.
Abstract
Sounds consisting of multiple simultaneous or consecutive components can be detected by listeners when the stimulus levels of the components are lower than those needed to detect the individual components alone. The mechanisms underlying such spectral, spectrotemporal, temporal, or across-ear integration are not completely understood. Here, we report threshold measurements from human subjects for multicomponent stimuli (tone complexes, tone sequences, diotic or dichotic tones) and for their individual sinusoidal components in quiet. We examine whether the data are compatible with the detection model developed by Heil, Matysiak, and Neubauer (HMN model) to account for temporal integration (Heil et al. 2017), and we compare its performance to that of the statistical summation model (Green 1958), the model commonly used to account for spectral and spectrotemporal integration. In addition, we compare the performance of both models with respect to previously published thresholds for sequences of identical tones and for diotic tones. The HMN model is similar to the statistical summation model but is based on the assumption that the decision variable is a number of sensory events generated by the components via independent Poisson point processes. The rate of events is low without stimulation and increases with stimulation. The increase is proportional to the time-varying amplitude envelope of the bandpass-filtered component(s) raised to an exponent of 3. For an ideal observer, the decision variable is the sum of the events from all channels carrying information, for as long as they carry information. We find that the HMN model provides a better account of the thresholds for multicomponent stimuli than the statistical summation model, and it offers a unifying account of spectral, spectrotemporal, temporal, and across-ear integration at threshold.
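The HMN model's core assumptions as summarized here (Poisson sensory events whose rate grows with the bandpass-filtered envelope raised to an exponent of 3, with an ideal observer summing events across informative channels) can be sketched in a few lines. This is an illustrative simplification with arbitrary constants, not the authors' implementation:

```python
import math

def expected_event_count(envelope, dt, r0=0.01, c=1.0, exponent=3):
    """Expected number of sensory events in one channel for a Poisson point
    process whose rate is a spontaneous rate r0 plus a term proportional to
    the stimulus envelope raised to an exponent (3 in the HMN model)."""
    return sum((r0 + c * a ** exponent) * dt for a in envelope)

def poisson_sf(lam, k):
    """P(N >= k) for N ~ Poisson(lam): probability that the observer's
    summed event count reaches a decision criterion of k events."""
    cdf = sum(math.exp(-lam) * lam ** i / math.factorial(i) for i in range(k))
    return 1.0 - cdf

dt = 0.001                 # 1-ms envelope samples
env = [1.0] * 100          # flat 100-ms envelope, arbitrary amplitude units

# Summation across channels: the decision variable pools events from all
# channels carrying information, so two equal components double the count.
one_component = expected_event_count(env, dt)
two_components = 2 * expected_event_count(env, dt)

# Temporal integration: doubling the duration doubles the expected count,
# so the same count is reached with amplitude^3 halved, i.e. the threshold
# level drops by 20*log10(2)/3, about 2 dB per doubling of duration.
drop_per_doubling_db = 20 * math.log10(2) / 3
```

The cube-law transformation is what yields shallow integration: roughly 2 dB of threshold improvement per doubling of duration, rather than the 3 dB an energy detector would predict.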
Affiliation(s)
- Peter Heil: Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg 39118, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Esraa S I Mohamed: Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg 39118, Germany
- Artur Matysiak: Research Group Comparative Neuroscience, Leibniz Institute for Neurobiology, Magdeburg, Germany
6. Mackey CA, McCrate J, MacDonald KS, Feller J, Liberman L, Liberman MC, Hackett TA, Ramachandran R. Correlations between cochlear pathophysiology and behavioral measures of temporal and spatial processing in noise exposed macaques. Hear Res 2021; 401:108156. PMID: 33373804; PMCID: PMC8487072; DOI: 10.1016/j.heares.2020.108156.
Abstract
Noise-induced hearing loss (NIHL) is known to have significant consequences for temporal, spectral, and spatial resolution, but much remains to be discovered about the underlying pathophysiology. This report extends the recent development of a nonhuman primate model of NIHL to explore its consequences for hearing in noisy environments, and its correlations with the underlying cochlear pathology. Ten macaques (seven with normal hearing, three with NIHL) were used in studies of masked tone detection in which the temporal or spatial properties of the masker were varied to assess metrics of temporal and spatial processing. Normal-hearing (NH) macaques showed lower tone detection thresholds for sinusoidally amplitude-modulated (SAM) broadband noise maskers relative to unmodulated maskers (modulation masking release, MMR). Tone detection thresholds were lowest at low noise modulation frequencies and increased as modulation frequency increased, until they matched the threshold in unmodulated noise. NH macaques also showed lower tone detection thresholds for spatially separated tone and noise relative to co-localized tone and noise (spatial release from masking, SRM). Noise exposure caused permanent threshold shifts that were verified behaviorally and audiologically. In hearing-impaired (HI) macaques, MMR was reduced at tone frequencies above that of the noise exposure. HI macaques also showed degraded SRM, with no SRM observed across all tested tone frequencies. Deficits in MMR correlated with audiometric threshold changes, outer hair cell loss, and synapse loss, while the differences in SRM did not correlate with audiometric changes or any measure of cochlear pathophysiology. This difference in anatomical-behavioral correlations suggests that while many behavioral deficits may arise from cochlear pathology, only some are predictable from the frequency place of damage in the cochlea.
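Both release metrics used above are plain threshold differences, which makes them straightforward to compute from the behavioral data; a tiny sketch with made-up threshold values (not the study's data):

```python
def masking_release_db(baseline_threshold_db, release_threshold_db):
    """Masking release in dB: how much lower the tone-detection threshold is
    in the 'released' condition (modulated masker for MMR, spatially
    separated masker for SRM) than in the baseline condition (unmodulated
    or co-located masker). Positive values indicate a benefit; a value near
    zero, as for SRM in the hearing-impaired animals, indicates no release."""
    return baseline_threshold_db - release_threshold_db

# Hypothetical thresholds at one tone frequency (dB SPL):
mmr = masking_release_db(baseline_threshold_db=62.0, release_threshold_db=48.0)
srm = masking_release_db(baseline_threshold_db=62.0, release_threshold_db=55.0)
print(mmr, srm)   # 14.0 7.0
```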
Affiliation(s)
- Chase A Mackey: Vanderbilt Neuroscience Graduate Program, Vanderbilt University, Nashville, TN 37212, United States
- Jennifer McCrate: Interdisciplinary Program in Neuroscience for Undergraduates, Vanderbilt University, Nashville, TN 37240, United States
- Kaitlyn S MacDonald: Vanderbilt Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Jessica Feller: Vanderbilt Neuroscience Graduate Program, Vanderbilt University, Nashville, TN 37212, United States
- Leslie Liberman: Eaton Peabody Laboratories, Massachusetts Eye and Ear Infirmary & Harvard Medical Center, Boston, MA 02114, United States
- M Charles Liberman: Eaton Peabody Laboratories, Massachusetts Eye and Ear Infirmary & Harvard Medical Center, Boston, MA 02114, United States
- Troy A Hackett: Vanderbilt Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Ramnarayan Ramachandran: Vanderbilt Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
7. Heil P. Comparing and modeling absolute auditory thresholds in an alternative-forced-choice and a yes-no procedure. Hear Res 2021; 403:108164. PMID: 33453643; DOI: 10.1016/j.heares.2020.108164.
Abstract
Detecting sounds in quiet is arguably the simplest task performed by an auditory system, but the underlying mechanisms are still a matter of debate. Threshold stimulus levels depend not only on the physical properties of the sounds to be detected but also on the experimental procedure used to measure them. Here, thresholds of human subjects were measured for sounds consisting of different numbers of bursts using both an alternative-forced-choice and a yes-no procedure in the same experimental sessions. Thresholds measured with the yes-no procedure were typically higher than thresholds measured with the alternative-forced choice procedure. The difference between the two thresholds decreased as stimulus duration increased. It also varied between subjects and varied with the probability of false alarms in the yes-no procedure. It is shown that a previously proposed model of detection (Heil et al., Hear Res 2017) can account for these findings better than other models. It can also account for the shapes of the psychometric functions. The model is consistent with basic concepts of signal detection theory but is based on a decision variable that follows Poisson statistics. It also differs from other models of detection with respect to the transformation of the stimulus into the decision variable. The findings in this study further support the model.
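For orientation, the classical Gaussian signal-detection account, against which the Poisson-based model is compared, relates the two procedures through the same sensitivity d' but an added response criterion in yes-no. A textbook-style sketch (standard formulas, not the paper's Poisson model):

```python
from math import erf, sqrt

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def p_correct_2afc(d_prime):
    """Proportion correct in a two-alternative forced-choice task."""
    return phi(d_prime / sqrt(2.0))

def yes_no_rates(d_prime, criterion):
    """Hit and false-alarm rates in a yes-no task, with the criterion
    measured from the mean of the noise distribution. A conservative
    (higher) criterion lowers the false-alarm rate but also the hit rate,
    which is one way a yes-no threshold can end up above a 2AFC threshold
    at the same underlying sensitivity."""
    return phi(d_prime - criterion), phi(-criterion)

print(round(p_correct_2afc(1.0), 3))      # ~0.760
hit, fa = yes_no_rates(1.0, 0.5)
print(round(hit, 3), round(fa, 3))        # ~0.691, ~0.309
```

The paper's point is that a Poisson decision variable, not this Gaussian one, better captures how the threshold difference shrinks with stimulus duration and varies with the false-alarm probability.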
Affiliation(s)
- Peter Heil: Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
8. Balkenhol T, Wallhäusser-Franke E, Rotter N, Servais JJ. Cochlear Implant and Hearing Aid: Objective Measures of Binaural Benefit. Front Neurosci 2020; 14:586119. PMID: 33381008; PMCID: PMC7768047; DOI: 10.3389/fnins.2020.586119.
Abstract
Cochlear implants (CI) improve hearing for the severely hearing impaired. With the extension of implantation candidacy, many CI listeners today use a hearing aid on the contralateral ear, referred to as bimodal listening. It is uncertain, however, whether the brains of bimodal listeners can combine the electrical and acoustical sound information, and how much CI experience is needed to achieve improved performance with bimodal listening. Patients with bilateral sensorineural hearing loss undergoing implant surgery were tested in their ability to understand speech in quiet and in noise, before and again 3 and 6 months after provision of a CI. Results of these bimodal listeners were compared to age-matched, normal-hearing controls (NH). The benefit of adding a contralateral hearing aid was calculated in terms of head shadow, binaural summation, binaural squelch, and spatial release from masking from the results of a sentence recognition test. Beyond that, bimodal benefit was estimated from the difference in amplitudes and latencies of the N1, P2, and N2 potentials of the brain's auditory evoked response (AEP) toward speech. Data from fifteen participants contributed to the results. CI provision resulted in significant improvement of speech recognition with the CI ear, and in taking advantage of the head shadow effect for understanding speech in noise. Some amount of binaural processing was suggested by a positive binaural summation effect 6 months post-implantation that correlated significantly with the symmetry of pure-tone thresholds. Moreover, a significant negative correlation existed between binaural summation and the latency of the P2 potential. With CI experience, the morphology of the N1 and P2 potentials in the AEP response approximated that of NH, whereas N2 remained different. Significant AEP differences between monaural and binaural processing were shown for NH and for bimodal listeners 6 months post-implantation. Although the grand-averaged difference in N1 amplitude between monaural and binaural listening was similar for NH and the bimodal group, source localization showed group-dependent differences in auditory and speech-relevant cortex, suggesting different processing in the bimodal listeners.
Affiliation(s)
- Tobias Balkenhol: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Elisabeth Wallhäusser-Franke: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Nicole Rotter: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Jérôme J Servais: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
9. Burton JA, Mackey CA, MacDonald KS, Hackett TA, Ramachandran R. Changes in audiometric threshold and frequency selectivity correlate with cochlear histopathology in macaque monkeys with permanent noise-induced hearing loss. Hear Res 2020; 398:108082. PMID: 33045479; PMCID: PMC7769151; DOI: 10.1016/j.heares.2020.108082.
Abstract
Exposure to loud noise causes damage to the inner ear, including but not limited to outer and inner hair cells (OHCs and IHCs) and IHC ribbon synapses. This cochlear damage impairs auditory processing and increases audiometric thresholds (noise-induced hearing loss, NIHL). However, the exact relationship between the perceptual consequences of NIHL and its underlying cochlear pathology is poorly understood. This study used a nonhuman primate model of NIHL to relate changes in frequency selectivity and audiometric thresholds to indices of cochlear histopathology. Three macaques (one Macaca mulatta and two Macaca radiata) were trained to detect tones in quiet and in noises that were spectrally notched around the tone frequency. Audiograms were derived from tone thresholds in quiet; perceptual auditory filters were derived from tone thresholds in notched-noise maskers using the rounded-exponential fit. Data were obtained before and after a four-hour exposure to a 50-Hz-wide band of noise centered at 2 kHz at 141 or 146 dB SPL. Noise exposure caused permanent audiometric threshold shifts and broadening of auditory filters at and above 2 kHz, with greater changes observed for the 146-dB-exposed monkeys. The normalized bandwidth of the perceptual auditory filters was strongly correlated with the audiometric threshold at each tone frequency. While changes in audiometric thresholds and perceptual auditory filter widths were primarily determined by the extent of OHC survival, additional variability was explained by including interactions among OHC, IHC, and ribbon synapse survival. This is the first study to provide within-subject comparisons of auditory filter bandwidths in an animal model of NIHL and to correlate these NIHL-related perceptual changes with cochlear histopathology. These results expand the foundations for ongoing investigations of the neural correlates of NIHL-related perceptual changes.
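The rounded-exponential fit mentioned above has a simple closed form worth spelling out; a brief sketch of the standard symmetric roex(p) weighting function and its equivalent rectangular bandwidth (ERB = 4·fc/p), with illustrative parameter values rather than the study's fitted ones:

```python
import math

def roex_weight(f, fc, p):
    """Rounded-exponential roex(p) filter weight at frequency f for a filter
    centered at fc. The slope parameter p controls sharpness: a smaller p
    gives a broader filter, as observed after noise exposure."""
    g = abs(f - fc) / fc                 # normalized frequency deviation
    return (1.0 + p * g) * math.exp(-p * g)

def roex_erb(fc, p):
    """Equivalent rectangular bandwidth of a symmetric roex(p) filter."""
    return 4.0 * fc / p

# Halving p doubles the ERB, i.e., broadens the perceptual auditory filter:
print(roex_erb(2000.0, 25.0))   # 320.0 Hz
print(roex_erb(2000.0, 12.5))   # 640.0 Hz
print(round(roex_weight(2200.0, 2000.0, 25.0), 3))   # attenuation away from fc
```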
Affiliation(s)
- Jane A Burton: Neuroscience Graduate Program, Vanderbilt University, Nashville, TN 37235, United States
- Chase A Mackey: Neuroscience Graduate Program, Vanderbilt University, Nashville, TN 37235, United States
- Kaitlyn S MacDonald: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Troy A Hackett: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
- Ramnarayan Ramachandran: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, United States
10. Legris E, Galvin J, Roux S, Aoustin JM, Bakhos D. Development of cortical auditory responses to speech in noise in unilaterally deaf adults following cochlear implantation. PLoS One 2020; 15:e0239487. PMID: 32976532; PMCID: PMC7518575; DOI: 10.1371/journal.pone.0239487.
Abstract
BACKGROUND: For patients with single-sided deafness (SSD), restoration of binaural function via cochlear implant (CI) has been shown to improve speech understanding in noise. The objective of this study was to investigate changes in behavioral performance and cortical auditory responses following cochlear implantation.
DESIGN: Prospective longitudinal study.
SETTING: Tertiary referral center.
METHODS: Six adults with SSD were tested before and 12 months post-activation of the CI. Six normal-hearing (NH) participants served as experimental controls. Speech understanding in noise was evaluated for various spatial conditions. Cortical auditory evoked potentials were recorded with /ba/ stimuli in quiet and in noise. Global field power and responses at Cz were analyzed.
RESULTS: Speech understanding in noise significantly improved with the CI when speech was presented to the CI ear and noise to the normal ear (p<0.05), but remained poorer than that of NH controls (p<0.05). The N1 peak amplitude measured in noise significantly increased after CI activation (p<0.05), but remained lower than that of NH controls (p<0.05) at 12 months. After 12 months of CI experience, cortical responses in noise became more comparable between groups.
CONCLUSION: Binaural restoration in SSD patients via cochlear implantation improved speech performance in noise and cortical responses. While behavioral performance and cortical auditory responses improved, SSD-CI outcomes remained poorer than those of NH controls in most cases, suggesting only partial restoration of binaural hearing.
Affiliation(s)
- Elsa Legris: UMR1253, iBrain, Université de Tours, INSERM, Tours, France; Ear Nose and Throat Department, Tours, France
- John Galvin: House Ear Institute, Los Angeles, CA, United States of America
- Sylvie Roux: UMR1253, iBrain, Université de Tours, INSERM, Tours, France
- David Bakhos: UMR1253, iBrain, Université de Tours, INSERM, Tours, France; Ear Nose and Throat Department, Tours, France
11. Heil P, Matysiak A. Absolute auditory threshold: testing the absolute. Eur J Neurosci 2020; 51:1224-1233. DOI: 10.1111/ejn.13765.
Affiliation(s)
- Peter Heil: Department of Systems Physiology of Learning, Leibniz Institute for Neurobiology, Magdeburg 39118, Germany; Center for Behavioral Brain Sciences, Magdeburg, Germany
- Artur Matysiak: Special Lab of Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Magdeburg, Germany
12. Binaural summation of amplitude modulation involves weak interaural suppression. Sci Rep 2020; 10:3560. PMID: 32103139; PMCID: PMC7044261; DOI: 10.1038/s41598-020-60602-5.
Abstract
The brain combines sounds from the two ears, but what is the algorithm used to achieve this summation of signals? Here we combine psychophysical amplitude modulation discrimination and steady-state electroencephalography (EEG) data to investigate the architecture of binaural combination for amplitude-modulated tones. Discrimination thresholds followed a ‘dipper’ shaped function of pedestal modulation depth, and were consistently lower for binaural than monaural presentation of modulated tones. The EEG responses were greater for binaural than monaural presentation of modulated tones, and when a masker was presented to one ear, it produced only weak suppression of the response to a signal presented to the other ear. Both data sets were well-fit by a computational model originally derived for visual signal combination, but with suppression between the two channels (ears) being much weaker than in binocular vision. We suggest that the distinct ecological constraints on vision and hearing can explain this difference, if it is assumed that the brain avoids over-representing sensory signals originating from a single object. These findings position our understanding of binaural summation in a broader context of work on sensory signal combination in the brain, and delineate the similarities and differences between vision and hearing.
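The fitted architecture is described only qualitatively in the abstract; the following is a generic sketch of a two-channel gain-control combination of the kind referred to, with arbitrary exponents, constants, and a small suppression weight standing in for the "weak interaural suppression" (none of these values are from the study):

```python
def channel_response(own, other, p=2.0, z=1.0, omega=0.1):
    """One ear's response: its input raised to an excitatory exponent p,
    divisively normalized by a saturation constant z, its own input, and a
    weakly weighted copy of the other ear's input (interaural suppression)."""
    return own ** p / (z + own + omega * other)

def binaural_response(left, right, **kwargs):
    """Binaural combination: the sum of the two suppressed channel responses."""
    return channel_response(left, right, **kwargs) + channel_response(right, left, **kwargs)

m = 4.0   # modulation input in arbitrary units
monaural = binaural_response(m, 0.0)
binaural = binaural_response(m, m)
print(round(monaural, 3), round(binaural, 3))
# With omega small, the binaural response approaches twice the monaural one,
# mirroring the larger EEG responses observed for binaural presentation.
```

Setting omega close to 1 would instead produce the near-complete interocular suppression typical of binocular-vision fits, which is exactly the regime the study argues hearing does not occupy.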
13. A probabilistic Poisson-based model accounts for an extensive set of absolute auditory threshold measurements. Hear Res 2017; 353:135-161. DOI: 10.1016/j.heares.2017.06.011.
|
14
|
Speech-in-noise perception in unilateral hearing loss: Relation to pure-tone thresholds and brainstem plasticity. Neuropsychologia 2017. [PMID: 28623107 DOI: 10.1016/j.neuropsychologia.2017.06.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
Abstract
We investigated speech recognition in noise in subjects with mild to profound levels of unilateral hearing loss. Thirty-five adults were evaluated using an adaptive signal-to-noise ratio (SNR50) sentence recognition threshold test in three spatial configurations. The results revealed a significant correlation between pure-tone average audiometric thresholds in the poorer ear and SNR thresholds in the two conditions where speech and noise were spatially separated: dichotic, with speech presented to the poorer ear, and reverse dichotic, with speech presented to the better ear. This first result suggested that standard pure-tone air-conduction thresholds can be a reliable predictor of speech recognition in noise for binaural conditions. However, a subgroup of 14 subjects was found to have poorer-than-expected speech recognition scores, especially in the reverse dichotic listening condition. In this subgroup, 9 subjects had been diagnosed with vestibular schwannoma at stage III or IV, likely affecting lower brainstem function. These subjects showed SNR thresholds in the reverse dichotic condition on average 4 dB poorer (higher) than those of the other 21 normally-performing subjects. For the 7 of 9 subjects whose vestibular schwannoma was removed, the deficit was no longer apparent on average 5 months following the surgical procedure. These results suggest that following unilateral hearing loss the capacity to use monaural spectral information is supported by the lower brainstem.
|
15
|
Abstract
Models are valuable tools to assess how deeply we understand complex systems: only if we are able to replicate the output of a system based on the function of its subcomponents can we assume that we have probably grasped its principles of operation. On the other hand, discrepancies between model results and measurements reveal gaps in our current knowledge, which can in turn be targeted by matched experiments. Models of the auditory periphery have improved greatly during the last decades and account for many phenomena observed in experiments. While the cochlea is only partly accessible in experiments, models can extrapolate its behavior without gaps from base to apex and with arbitrary input signals. With models we can, for example, evaluate speech coding with large speech databases, which is not possible experimentally, and models have been tuned to replicate features of the human hearing organ, for which practically no invasive electrophysiological measurements are available. Auditory models have become instrumental in evaluating models of neuronal sound processing in the auditory brainstem and even at higher levels, where they are used to provide realistic input, and finally, models can be used to illustrate how such a complicated system as the inner ear works by visualizing its responses. The big advantage is that intermediate steps in various domains (mechanical, electrical, and chemical) are available, such that a consistent picture of the evolution of its output can be drawn. However, it must be kept in mind that no model is able to replicate all physiological characteristics (yet), and therefore it is critical to choose the most appropriate model (or models) for every research question. To facilitate this task, this paper not only reviews three recent auditory models, it also introduces a framework that allows researchers to easily switch between models, and provides uniform evaluation and visualization scripts that allow for direct comparisons between models.
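A framework that lets evaluation scripts switch between peripheral models typically pins them to one shared interface. The sketch below is a hypothetical illustration of that design, not the paper's actual framework: the class names (`PeripheryModel`, `ToyModel`) and the `rates` method signature are assumptions, and the toy transduction stage is a placeholder.

```python
from abc import ABC, abstractmethod

import numpy as np


class PeripheryModel(ABC):
    """Common interface: evaluation and visualization scripts depend
    only on this contract, so any compliant model can be swapped in."""

    @abstractmethod
    def rates(self, signal: np.ndarray, fs: float) -> np.ndarray:
        """Return auditory-nerve firing rates, shape (channels, samples)."""


class ToyModel(PeripheryModel):
    """Stand-in model: half-wave rectification as a crude transduction stage."""

    def rates(self, signal: np.ndarray, fs: float) -> np.ndarray:
        return np.maximum(signal, 0.0)[np.newaxis, :]


# Any script written against PeripheryModel works with every implementation.
model = ToyModel()
output = model.rates(np.array([-1.0, 2.0, 0.5]), fs=48000.0)
```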
|
16
|
Mertens G, Kleine Punte A, De Bodt M, Van de Heyning P. Binaural Auditory Outcomes in Patients with Postlingual Profound Unilateral Hearing Loss: 3 Years after Cochlear Implantation. Audiol Neurootol 2015; 20 Suppl 1:67-72. [DOI: 10.1159/000380751] [Citation(s) in RCA: 52] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022] Open
Abstract
The value of cochlear implants (CI) in patients with profound unilateral hearing loss (UHL) and tinnitus has recently been investigated. The authors previously demonstrated the feasibility of CI in a 12-month outcome study in a prospective UHL cohort. The aim of this study was to investigate the binaural auditory outcomes in this cohort 36 months after CI surgery. The 36-month outcome was evaluated in 22 CI users with postlingual UHL and severe tinnitus. Twelve subjects had contralateral normal hearing (single-sided deafness - SSD group) and 10 subjects had a contralateral, mild to moderate hearing loss and used a hearing aid (asymmetric hearing loss - AHL group). Speech perception in noise was assessed in two listening conditions: CIoff and CIon. The binaural summation effect (S₀N₀), binaural squelch effect (S₀NCI) and the combined head shadow effect (SCIN₀) were investigated. Subjective benefit in daily life was assessed by means of the Speech, Spatial and Qualities of Hearing Scale (SSQ). At 36 months, a significant binaural summation effect was observed for the study cohort (2.00, SD 3.82 dB; p < 0.01) and for the AHL subgroup (3.34, SD 5.31 dB; p < 0.05). This binaural effect was not significant 12 months after CI surgery. A binaural squelch effect was significant for the AHL subgroup at 12 months (2.00, SD 4.38 dB; p < 0.05). A significant combined head shadow and squelch effect was also noted in the spatial configuration SCIN₀ for the study cohort (4.00, SD 5.89 dB; p < 0.01) and for the AHL subgroup (5.67, SD 6.66 dB; p < 0.05). The SSQ data show that the perceived benefit in daily life after CI surgery remains stable up to 36 months at CIon. CI can significantly improve speech perception in noise in patients with UHL. The positive effects of CI on speech perception in noise increase over time up to 36 months after CI surgery. Improved subjective benefit in daily life was also shown to be sustained in these patients.
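The binaural effects reported above are, in studies of this kind, typically computed as the change in SNR50 threshold between the CIoff and CIon conditions within each spatial configuration (a positive difference in dB indicating benefit). The sketch below illustrates that arithmetic only; the dictionary keys and function name are hypothetical, not taken from the paper.

```python
def binaural_effects(snr):
    """Compute binaural benefits (dB) from SNR50 thresholds.

    snr: dict mapping (spatial_config, ci_state) -> threshold in dB SNR,
    with lower thresholds meaning better performance. Effects are
    CIoff minus CIon, so positive values indicate CI benefit.
    Keys are illustrative: S0N0 = speech and noise in front,
    S0Nci = noise on the CI side, SciN0 = speech on the CI side.
    """
    return {
        "summation":   snr[("S0N0", "off")] - snr[("S0N0", "on")],
        "squelch":     snr[("S0Nci", "off")] - snr[("S0Nci", "on")],
        "head_shadow": snr[("SciN0", "off")] - snr[("SciN0", "on")],
    }


# Example with made-up thresholds (dB SNR):
snr = {
    ("S0N0", "off"): 2.0,  ("S0N0", "on"): 0.0,
    ("S0Nci", "off"): 1.0, ("S0Nci", "on"): -1.0,
    ("SciN0", "off"): 3.0, ("SciN0", "on"): -1.0,
}
effects = binaural_effects(snr)
```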
|
17
|
Heil P, Peterson AJ. Basic response properties of auditory nerve fibers: a review. Cell Tissue Res 2015; 361:129-58. [PMID: 25920587 DOI: 10.1007/s00441-015-2177-9] [Citation(s) in RCA: 68] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2014] [Accepted: 03/19/2015] [Indexed: 01/26/2023]
Abstract
All acoustic information from the periphery is encoded in the timing and rates of spikes in the population of spiral ganglion neurons projecting to the central auditory system. Considerable progress has been made in characterizing the physiological properties of type-I and type-II primary auditory afferents and understanding the basic properties of type-I afferents in response to sounds. Here, we review some of these properties, with emphasis placed on issues such as the stochastic nature of spike timing during spontaneous and driven activity, frequency tuning curves, spike-rate-versus-level functions, dynamic-range and spike-rate adaptation, and phase locking to stimulus fine structure and temporal envelope. We also review effects of acoustic trauma on some of these response properties.
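Phase locking of auditory-nerve spikes to a stimulus, mentioned above, is conventionally quantified with the vector strength metric (Goldberg-Brown style): each spike is mapped to a phase of the stimulus cycle and the resultant length of the unit phasors is taken. A minimal sketch (the function name and arguments are my own, not from the review):

```python
import numpy as np


def vector_strength(spike_times, freq):
    """Vector strength of spike phase locking to a tone of frequency freq (Hz).

    Each spike time is converted to a phase of the stimulus cycle; the
    magnitude of the mean unit phasor is 1 for perfect locking and
    approaches 0 for spikes uniformly distributed over the cycle.
    """
    phases = 2.0 * np.pi * freq * np.asarray(spike_times, dtype=float)
    return float(np.abs(np.mean(np.exp(1j * phases))))


# Spikes at exact multiples of the 10 ms period of a 100 Hz tone
# are perfectly phase-locked (vector strength 1.0).
vs_locked = vector_strength([0.0, 0.01, 0.02], freq=100.0)
```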
Affiliation(s)
- Peter Heil
- Leibniz Institute for Neurobiology, Brenneckestrasse 6, 39118, Magdeburg, Germany,
|