1
Lutfi RA, Pastore T, Rodriguez B, Yost WA, Lee J. Molecular analysis of individual differences in talker search at the cocktail-party. J Acoust Soc Am 2022;152:1804. PMID: 36182280; PMCID: PMC9507302; DOI: 10.1121/10.0014116.
Abstract
A molecular (trial-by-trial) analysis of data from a cocktail-party, target-talker search task was used to test two general classes of explanations accounting for individual differences in listener performance: cue-weighting models, for which errors are tied to the speech features talkers have in common with the target, and internal noise models, for which errors are largely independent of these features. The speech of eight different talkers was played simultaneously over eight different loudspeakers surrounding the listener. The locations of the eight talkers varied at random from trial to trial. The listener's task was to identify the location of a target talker with which they had previously been familiarized. An analysis of the response counts to individual talkers showed predominant confusion with one talker sharing the same fundamental frequency and timbre as the target and, secondarily, with other talkers sharing the same timbre. These confusions occurred on a roughly constant 31% of trials for all listeners. The remaining errors were uniformly distributed across the remaining talkers and were responsible for the large individual differences in performance observed. The results are consistent with a model in which largely stimulus-independent factors (internal noise) are responsible for the wide variation in performance across listeners.
Affiliation(s)
- Robert A Lutfi, Auditory Behavioral Research Laboratory, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- Torben Pastore, Spatial Hearing Laboratory, Department of Speech and Hearing, Arizona State University, Tempe, Arizona 85281, USA
- Briana Rodriguez, Auditory Behavioral Research Laboratory, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- William A Yost, Spatial Hearing Laboratory, Department of Speech and Hearing, Arizona State University, Tempe, Arizona 85281, USA
- Jungmee Lee, Auditory Behavioral Research Laboratory, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
2
Pacheco D, Rajagopal N, Prieve BA, Nangia S. Joint Profile Characteristics of Long-Latency Transient Evoked and Distortion Otoacoustic Emissions. Am J Audiol 2022;31:684-697. PMID: 35862753; DOI: 10.1044/2022_aja-21-00182.
Abstract
PURPOSE In clinical practice, otoacoustic emissions (OAEs) are interpreted as either "present" or "absent." However, OAEs have the potential to inform about etiology and severity of hearing loss if analyzed in other dimensions. A proposed method uses the nonlinear component of the distortion product OAEs together with stimulus frequency OAEs to construct a joint reflection-distortion profile. The objective of the current study is to determine if joint reflection-distortion profiles can be created using long-latency (LL) components of transient evoked OAEs (TEOAEs) as the reflection-type emission. METHOD LL TEOAEs and the nonlinear distortion OAEs were measured from adult ears. Individual input-output (I/O) functions were created, and OAE level was normalized by dividing by the stimulus level, yielding individual gain functions. Peak strength, compression threshold, and OAE level at compression threshold were derived from individual gain functions to create joint reflection-distortion profiles. RESULTS TEOAEs with a poststimulus window starting at 6 ms had I/O functions with compression characteristics similar to LL TEOAE components. The model fit the LL gain functions, which had R² > .93, significantly better than the nonlinear distortion OAE gain functions, which had R² = .596–.99. Interquartile ranges for joint reflection-distortion profiles were larger for compression threshold and OAE level at compression threshold but smaller for peak strength than those previously published. CONCLUSIONS The gain function fits LL TEOAEs well. Joint reflection-distortion profiles are a promising method that could enhance diagnosis of hearing loss, and use of the LL TEOAE in the profile for peak strength may be important because of narrow interquartile ranges. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.20323593.
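The normalization and gain-function metrics described in this abstract can be sketched numerically. The Python fragment below uses made-up I/O data and illustrative definitions of peak strength and compression threshold; it does not reproduce the paper's actual fitting procedure.

```python
import numpy as np

# Hypothetical I/O data: stimulus levels and measured OAE levels, both in dB SPL.
stim_db = np.array([20, 30, 40, 50, 60, 70])
oae_db = np.array([-10, -2, 5, 9, 11, 12])  # growth that compresses at high levels

# Gain in dB: OAE level minus stimulus level (dividing by the stimulus level
# in linear units is a subtraction in dB).
gain_db = oae_db - stim_db

# Illustrative (not the paper's) definitions of the profile metrics:
peak_strength = gain_db.max()                       # strongest gain across levels
idx = int(np.argmax(gain_db <= peak_strength - 3))  # first level with gain 3 dB below peak
compression_threshold = stim_db[idx]                # stimulus level where compression sets in
oae_at_ct = oae_db[idx]                             # OAE level at compression threshold
```

With real data the metrics would instead come from a model fit to the full gain function, as in the study.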
Affiliation(s)
- Devon Pacheco, Department of Communication Sciences and Disorders, Syracuse University, NY
- Nandhini Rajagopal, Department of Biomedical and Chemical Engineering, Syracuse University, NY
- Beth A Prieve, Department of Communication Sciences and Disorders, Syracuse University, NY
- Shikha Nangia, Department of Biomedical and Chemical Engineering, Syracuse University, NY
3
Wen H, Meaud J. Link between stimulus otoacoustic emissions fine structure peaks and standing wave resonances in a cochlear model. J Acoust Soc Am 2022;151:1875. PMID: 35364913; PMCID: PMC8934193; DOI: 10.1121/10.0009839.
Abstract
In response to an external stimulus, the cochlea emits sounds, called stimulus frequency otoacoustic emissions (SFOAEs), at the stimulus frequency. In this article, a three-dimensional computational model of the gerbil cochlea is used to simulate SFOAEs and clarify their generation mechanisms and characteristics. This model includes electromechanical feedback from outer hair cells (OHCs) and cochlear roughness due to spatially random inhomogeneities in the OHC properties. As in experiments, SFOAE simulations are characterized by a quasiperiodic fine structure and a fast-varying phase. Increasing the sound pressure level broadens the peaks and decreases the phase-gradient delay of SFOAEs. A state-space formulation of the model provides a theoretical framework to analyze the link between the fine structure and global modes of the cochlea, which arise as a result of standing wave resonances. The SFOAE fine structure peaks correspond to weakly damped resonant modes because they are observed at the frequencies of nearly unstable modes of the model. Variations of the model parameters that affect the reflection mechanism show that the magnitude and sharpness of the tuning of these peaks are correlated with the modal damping ratio of the nearly unstable modes. The analysis of the model predictions demonstrates that SFOAEs originate from the peak of the traveling wave.
Affiliation(s)
- Haiqi Wen, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, 771 Ferst Drive, Atlanta, Georgia 30332, USA
- Julien Meaud, George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, 771 Ferst Drive, Atlanta, Georgia 30332, USA
4
Gong Q, Liu Y, Xu R, Liang D, Peng Z, Yang H. Objective Assessment System for Hearing Prediction Based on Stimulus-Frequency Otoacoustic Emissions. Trends Hear 2021;25:23312165211059628. PMID: 34817273; PMCID: PMC8738859; DOI: 10.1177/23312165211059628.
Abstract
Stimulus-frequency otoacoustic emissions (SFOAEs) can be useful tools for assessing cochlear function noninvasively. However, there is a lack of reports describing their utility in predicting hearing capabilities. Data for model training were collected from 245 and 839 ears with normal hearing and sensorineural hearing loss, respectively. Based on SFOAEs, this study developed an objective assessment system consisting of three mutually independent modules, with the routine test module and the fast test module used for threshold prediction and the hearing screening module for identifying hearing loss. Results evaluated via cross-validation show that the routine test module and the fast test module predict hearing thresholds with similar performance from 0.5 to 8 kHz, with mean absolute errors of 7.06–11.61 dB for the routine module and of 7.40–12.60 dB for the fast module. However, the fast module involves less test time than is needed in the routine module. The hearing screening module identifies hearing status with a large area under the receiver operating characteristic curve (0.912–0.985), high accuracy (88.4–95.9%), and low false negative rate (2.9–7.0%) at 0.5–8 kHz. The three modules are further validated on unknown data, and the results are similar to those obtained through cross-validation, indicating these modules can be well generalized to new data. Both the routine module and fast module are potential tools for predicting hearing thresholds. However, their prediction performance in ears with hearing loss requires further improvement to facilitate their clinical utility. The hearing screening module shows promise as a clinical tool for identifying hearing loss.
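The screening metrics this abstract reports (area under the ROC curve, accuracy, false-negative rate) can be computed from classifier scores as in the minimal Python sketch below; the scores, labels, and threshold are hypothetical, not the study's data.

```python
import numpy as np

# Hypothetical screening scores (higher = more likely hearing loss) and
# ground-truth labels (1 = hearing loss, 0 = normal hearing).
scores = np.array([0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2, 0.35, 0.7])
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0, 1, 0])

# Area under the ROC curve via the rank-sum (Mann-Whitney) formulation:
# the fraction of (positive, negative) pairs ranked correctly.
pos, neg = scores[labels == 1], scores[labels == 0]
auc = np.mean([p > n for p in pos for n in neg])  # ties ignored for brevity

# Accuracy and false-negative rate at a fixed decision threshold.
threshold = 0.5
pred = (scores >= threshold).astype(int)
accuracy = np.mean(pred == labels)
fnr = np.mean(pred[labels == 1] == 0)  # missed hearing-loss ears
```

With real predicted scores, a library routine such as scikit-learn's roc_auc_score would give the tie-corrected AUC.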
Affiliation(s)
- Qin Gong, Department of Biomedical Engineering, Tsinghua University, Beijing, China; School of Medicine, Shanghai University, Shanghai, China
- Yin Liu, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Runyi Xu, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Dong Liang, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Zewen Peng, Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Honghao Yang, Department of Biomedical Engineering, Tsinghua University, Beijing, China
5
Jennings SG. The role of the medial olivocochlear reflex in psychophysical masking and intensity resolution in humans: a review. J Neurophysiol 2021;125:2279-2308. PMID: 33909513; PMCID: PMC8285664; DOI: 10.1152/jn.00672.2020.
Abstract
This review addresses the putative role of the medial olivocochlear (MOC) reflex in psychophysical masking and intensity resolution in humans. A framework for interpreting psychophysical results in terms of the expected influence of the MOC reflex is introduced. This framework is used to review the effects of a precursor or contralateral acoustic stimulation on 1) simultaneous masking of brief tones, 2) behavioral estimates of cochlear gain and frequency resolution in forward masking, 3) the buildup and decay of forward masking, and 4) measures of intensity resolution. Support, or lack thereof, for a role of the MOC reflex in psychophysical perception is discussed in terms of studies on estimates of MOC strength from otoacoustic emissions and the effects of resection of the olivocochlear bundle in patients with vestibular neurectomy. Novel, innovative approaches are needed to resolve the dissatisfying conclusion that current results are unable to definitively confirm or refute the role of the MOC reflex in masking and intensity resolution.
Affiliation(s)
- Skyler G Jennings, Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City, Utah
6
Alenzi H, Lineton B. Transient otoacoustic emissions and audiogram fine structure in the extended high-frequency region. Int J Audiol 2021;60:985-994. PMID: 33779459; DOI: 10.1080/14992027.2021.1899313.
Abstract
OBJECTIVE Previous studies at conventional audiometric frequencies found associations between the ripple depth seen in audiogram fine structure (AFS) and the amplitudes of both transient evoked otoacoustic emissions (TEOAEs) and overall hearing threshold levels (HTLs). These associations are explained by the cochlear mechanical theory of multiple coherent reflections of the travelling wave, apically by reflection sites on the basilar membrane and basally by the stapes. DESIGN The aim was to investigate whether a similar relationship is seen in the extended high-frequency (EHF) range from 8 to 16 kHz. Measurements from 8 to 16 kHz were obtained in normal-hearing subjects, comprising EHF HTLs, EHF TEOAEs using a double-evoked paradigm, and Bekesy audiometry to assess AFS ripple depth and spectral periodicity. STUDY SAMPLE Twenty-eight normal-hearing subjects participated. RESULTS Results showed no significant correlation between AFS ripple depth and either frequency-averaged EHF HTLs or EHF TEOAE amplitudes. The amplitude of AFS ripple depth was also lower than that seen in the conventional frequency region, and spectral periodicity in the ripple was more difficult to discern. CONCLUSION The results suggest a weaker interference pattern between forward and reverse cochlear travelling waves in the most basal region compared to more apical regions, or a difference in cochlear mechanical properties.
Affiliation(s)
- Hind Alenzi, Institute of Sound and Vibration Research, University of Southampton, Southampton, UK; Department of Rehabilitation Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Kingdom of Saudi Arabia
- Ben Lineton, Institute of Sound and Vibration Research, University of Southampton, Southampton, UK
7
Lutfi RA, Rodriguez B, Lee J. The Listener Effect in Multitalker Speech Segregation and Talker Identification. Trends Hear 2021;25:23312165211051886. PMID: 34693853; PMCID: PMC8544763; DOI: 10.1177/23312165211051886.
Abstract
Over six decades ago, Cherry (1953) drew attention to what he called the "cocktail-party problem": the challenge of segregating the speech of one talker from others speaking at the same time. The problem has been actively researched ever since, but in all this time one observation has eluded explanation: the wide variation in performance of individual listeners. That variation was replicated here for four major experimental factors known to impact performance: differences in task (talker segregation vs. identification), differences in the voice features of talkers (pitch vs. location), differences in the voice similarity and uncertainty of talkers (informational masking), and the presence or absence of linguistic cues. The effect of these factors on the segregation of naturally spoken sentences and synthesized vowels was largely eliminated in psychometric functions relating the performance of individual listeners to that of an ideal observer, d'ideal. The effect of listeners remained as differences in the slopes of the functions (fixed effect), with little within-listener variability in the estimates of slope (random effect). The results make a case for considering the listener a factor in multitalker segregation and identification equal in status to any major experimental variable.
Affiliation(s)
- Robert A. Lutfi, Auditory Behavioral Research Lab, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Briana Rodriguez, Auditory Behavioral Research Lab, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
- Jungmee Lee, Auditory Behavioral Research Lab, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida
8
Lutfi RA, Rodriguez B, Lee J, Pastore T. A test of model classes accounting for individual differences in the cocktail-party effect. J Acoust Soc Am 2020;148:4014. PMID: 33379927; PMCID: PMC7775115; DOI: 10.1121/10.0002961.
Abstract
Listeners differ widely in the ability to follow the speech of a single talker in a noisy crowd, the so-called cocktail-party effect. Differences may arise from any one, or a combination, of factors associated with auditory sensitivity, selective attention, working memory, and the decision making required for effective listening. The present study attempts to narrow the possibilities by grouping explanations into model classes based on model predictions for the types of errors that distinguish better from poorer performing listeners in a vowel segregation and talker identification task. Two model classes are considered: those for which the errors are predictably tied to the voice variation of talkers (decision weight models) and those for which the errors occur largely independently of this variation (internal noise models). Regression analyses of trial-by-trial responses, for different tasks and task demands, show overwhelmingly that the latter type of error is responsible for the performance differences among listeners. The results are inconsistent with models that attribute the performance differences to differences in the reliance listeners place on relevant voice features in their decisions. The results are consistent instead with models for which largely stimulus-independent, stochastic processes cause information loss at different stages of auditory processing.
Affiliation(s)
- Robert A Lutfi, Auditory Behavioral Research Lab, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- Briana Rodriguez, Auditory Behavioral Research Lab, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- Jungmee Lee, Auditory Behavioral Research Lab, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- Torben Pastore, Spatial Hearing Lab, College of Health Solutions, Arizona State University, Tempe, Arizona 85281, USA
9
Liu Y, Ji F, Gong Q. Analyzing Stimulus-frequency Otoacoustic Emission Fine Structure Using an Additive Model. Annu Int Conf IEEE Eng Med Biol Soc 2020;2020:960-963. PMID: 33018144; DOI: 10.1109/embc44109.2020.9175491.
Abstract
A good understanding of the origin of stimulus-frequency otoacoustic emission (SFOAE) fine structure in human ears and its probe-level dependency has potential clinical significance. In this study, we develop a two-component additive model, with the total SFOAE unmixed into short- and long-latency components (or reflections) using a time-windowing method, to investigate the origin of SFOAE fine structure in humans from 40 to 70 dB SPL. The two-component additive model predicts that a spectral notch seen in the amplitude fine structure is produced when the short- and long-latency components have opposite phases and comparable magnitudes. The depth of the spectral notch is significantly correlated with the amplitude difference between the two separated components, as well as with their degree of opposite phase. Our independent evidence for components contributing to SFOAE fine structure suggests that amplitude, phase, and delay fine structure in human SFOAEs arise from the complex addition of two or more internal reflections with different phase slopes in the cochlea.
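The notch mechanism the model predicts, cancellation where two components of comparable magnitude drift into opposite phase, can be illustrated with a short Python sketch. The two components below are hypothetical, with illustrative magnitudes and latencies (different latencies give different phase slopes versus frequency).

```python
import numpy as np

# Two hypothetical SFOAE components of comparable magnitude but different
# latencies; the latency difference makes their phase difference sweep
# through opposite phase repeatedly across frequency.
freq = np.linspace(1000, 2000, 1001)             # Hz, illustrative range
short = 1.0 * np.exp(-2j * np.pi * freq * 1e-3)  # short-latency component (1 ms)
long_ = 0.9 * np.exp(-2j * np.pi * freq * 6e-3)  # long-latency component (6 ms)

# Complex addition produces a quasiperiodic amplitude fine structure.
total = short + long_
amp = np.abs(total)

# At a notch the magnitudes nearly cancel (|1.0 - 0.9| = 0.1); at a peak
# they add (1.0 + 0.9 = 1.9), so the fine structure here spans over 24 dB.
notch_depth_db = 20 * np.log10(amp.max() / amp.min())
```

With these toy values the fine-structure spacing is 1/(6 ms - 1 ms) = 200 Hz; shrinking the magnitude difference between the components deepens the notches, consistent with the correlation the abstract reports.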