1
Triwiyanto T. The Hearing Test App for Android Devices: Distinctive Features of Pure-Tone Audiometry Performed on Mobile Devices [Letter]. Medical Devices: Evidence and Research 2024; 17:213-214. [PMID: 38826852; PMCID: PMC11141733; DOI: 10.2147/mder.s478423]
Affiliation(s)
- T Triwiyanto
- Department of Medical Electronics Technology, Poltekkes Kemenkes Surabaya, Surabaya, Indonesia
2
Christensen JH, Rumley J, Gil-Carvajal JC, Whiston H, Lough M, Saunders GH. Predicting Individual Hearing-Aid Preference From Self-Reported Listening Experiences in Daily Life. Ear Hear 2024:00003446-990000000-00279. [PMID: 38783420; DOI: 10.1097/aud.0000000000001520]
Abstract
OBJECTIVES The study compared the utility of two approaches for collecting real-world listening experiences to predict hearing-aid preference: a retrospective questionnaire (the Speech, Spatial, and Qualities of Hearing Scale [SSQ]) and in-situ Ecological Momentary Assessment (EMA). The rationale was that each approach likely provides different yet complementary information. In addition, the study examined how self-reported listening activity and hearing-aid data-logging can augment EMAs for individualized and contextualized hearing-outcome assessments.

DESIGN Experienced hearing-aid users (N = 40) with mild-to-moderate symmetrical sensorineural hearing loss completed the SSQ questionnaire and gave repeated EMAs over two 2-week wear periods with two hearing-aid models that differed mainly in their noise reduction technology. Each EMA was linked to a self-reported listening activity and to sound-environment parameters (from hearing-aid data-logging) recorded at the time of completion. Wear order was randomized by hearing-aid model. Linear mixed-effects models and Random Forest models with five-fold cross-validation were used to assess the statistical associations between listening experiences and end-of-trial preferences, and to evaluate how accurately EMAs predicted preference within individuals.

RESULTS Only 6 of the 49 SSQ items significantly discriminated between responses for the end-of-trial preferred versus nonpreferred hearing-aid model. For the EMAs, questions related to perception of the sound from the hearing aids were all significantly associated with preference, and these associations were strongest in EMAs completed in sound environments with predominantly low SNR and during listening activities related to television, people talking, nonspecific listening, and music. Mean differences in listening experiences from the SSQ and EMAs correctly predicted preference for 71.8% and 72.5% of included participants, respectively. However, prognostic classification of single EMAs into end-of-trial preference with a Random Forest model achieved 93.8% accuracy when contextual information was included.

CONCLUSIONS The SSQ and EMAs predicted preference equally well when considering mean differences; however, EMAs achieved high prognostic classification accuracy owing to their repeated-measures nature, which makes them ideal for individualized hearing-outcome investigations, especially when responses are combined with contextual information about the sound environment.
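The mean-difference and context-aware prediction ideas described above can be sketched as follows. This is a toy illustration, not the study's actual Random Forest pipeline; all ratings, field names, and the SNR cutoff are hypothetical:

```python
def predict_preference(ema_ratings_a, ema_ratings_b):
    """Predict the preferred model ('A' or 'B') from the mean EMA rating
    difference, mirroring the mean-difference approach in the abstract."""
    mean_a = sum(ema_ratings_a) / len(ema_ratings_a)
    mean_b = sum(ema_ratings_b) / len(ema_ratings_b)
    return "A" if mean_a > mean_b else "B"

def predict_from_context(ema_records, snr_cutoff=5.0):
    """Vote only over EMAs from low-SNR environments, where the study
    found the strongest association between sound ratings and preference."""
    low_snr = [r for r in ema_records if r["snr"] < snr_cutoff]
    votes_a = sum(1 for r in low_snr if r["rating_a"] > r["rating_b"])
    return "A" if votes_a > len(low_snr) / 2 else "B"

# Hypothetical EMA ratings for two hearing-aid models:
ratings_a = [7, 6, 8, 7]
ratings_b = [5, 6, 6, 5]
print(predict_preference(ratings_a, ratings_b))  # -> A
```

In the study itself, the contextual variant was a Random Forest classifier over single EMAs plus data-logged sound-environment features; the voting rule here only illustrates why repeated, context-tagged observations carry more information than a single mean difference.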
Affiliation(s)
- Johanne Rumley
- Oticon A/S, Centre for Applied Audiology Research and Clinical Audiological Development, Smoerum, Denmark
- Juan Camilo Gil-Carvajal
- Oticon A/S, Centre for Applied Audiology Research and Clinical Audiological Development, Smoerum, Denmark
- Helen Whiston
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, United Kingdom
- Melanie Lough
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, United Kingdom
- Gabrielle H Saunders
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, United Kingdom

3
Tanveer MA, Skoglund MA, Bernhardsson B, Alickovic E. Deep learning-based auditory attention decoding in listeners with hearing impairment. J Neural Eng 2024; 21:036022. [PMID: 38729132; DOI: 10.1088/1741-2552/ad49d7]
Abstract
Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing-aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population.

Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials the training set had never seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI listening to competing talkers amidst background noise.

Main results. Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3%, and 82.9% and area under the curve (AUC) of 77.2%, 80.6%, and 92.1% for the three tasks, respectively, under the inter-trial strategy. Under the intra-trial strategy, they achieved ACC of 87.9%, 80.1%, and 97.5%, along with AUC of 94.6%, 89.1%, and 99.8%. The DCNN models show good performance on short 1 s EEG samples, making them suitable for real-world applications.

Conclusion. The DCNN models successfully addressed three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the importance of proper data splitting in EEG-based AAD tasks.

Significance. These findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also motivating further exploration of alternative DL architectures and their potential constraints.
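The distinction between the two data-splitting strategies, which drives the inflated intra-trial results, can be sketched schematically. The window/trial structure below is hypothetical, not the authors' code:

```python
import random

def split_windows(windows, strategy, test_trials=None, test_frac=0.2, seed=0):
    """Split EEG classification windows into train/test sets.

    windows: list of dicts, each with a 'trial' key.
    'inter': held-out test trials contribute no windows to training.
    'intra': windows are split at random, so train and test share trials,
             which risks optimistic (inflated) accuracy estimates.
    """
    rng = random.Random(seed)
    if strategy == "inter":
        test = [w for w in windows if w["trial"] in test_trials]
        train = [w for w in windows if w["trial"] not in test_trials]
    else:
        shuffled = windows[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * test_frac)
        test, train = shuffled[:cut], shuffled[cut:]
    return train, test

# 4 hypothetical trials, 10 classification windows each:
windows = [{"trial": t, "window": i} for t in range(4) for i in range(10)]
train, test = split_windows(windows, "inter", test_trials={3})
# With inter-trial splitting, no test-set trial appears in training.
assert {w["trial"] for w in train}.isdisjoint({w["trial"] for w in test})
```

Under the intra-trial split, neighbouring (and therefore correlated) windows from the same trial can land on both sides of the split, which is the leakage mechanism behind the inflated intra-trial scores reported above.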
Affiliation(s)
- M Asjid Tanveer
- Department of Automatic Control, Lund University, Lund, Sweden
- Martin A Skoglund
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Bo Bernhardsson
- Department of Automatic Control, Lund University, Lund, Sweden
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden

4
Zaar J, Simonsen LB, Laugesen S. A spectro-temporal modulation test for predicting speech reception in hearing-impaired listeners with hearing aids. Hear Res 2024; 443:108949. [PMID: 38281473; DOI: 10.1016/j.heares.2024.108949]
Abstract
Spectro-temporal modulation (STM) detection sensitivity has been shown to be associated with speech-in-noise reception in hearing-impaired (HI) individuals. Building on previous research, a recent study [Zaar, Simonsen, Dau, and Laugesen (2023). Hear Res 427:108650] introduced an STM test paradigm with audibility compensation, employing STM stimulus variants using noise and complex tones as carrier signals. That study demonstrated that the test was suitable for the target population of elderly individuals with moderate-to-severe hearing loss and showed promising predictions of speech-reception thresholds (SRTs) measured in a realistic setup with spatially distributed speech and noise maskers and linear audibility compensation. The present study further investigated the suggested STM test with respect to (i) test-retest variability for the most promising STM stimulus variants, (ii) its predictive power for realistic speech-in-noise reception with non-linear hearing-aid amplification, (iii) its connection to the effects of directionality and noise reduction (DIR+NR) hearing-aid processing, and (iv) its relation to DIR+NR preference. Thirty elderly HI participants were tested in a combined laboratory and field study, collecting STM thresholds with a complex-tone-based and a noise-based STM stimulus design; SRTs with spatially distributed speech and noise maskers using hearing aids with non-linear amplification and two different levels of DIR+NR; and subjective reports and preference ratings obtained in two field periods with the two DIR+NR hearing-aid settings.

The results indicate that the noise-carrier-based STM test variant (i) showed optimal test-retest properties, (ii) yielded a highly significant correlation with SRTs (R² = 0.61) that exceeded and complemented the predictive power of the audiogram, (iii) yielded a significant correlation (R² = 0.51) with the DIR+NR-induced SRT benefit, and (iv) did not correlate significantly with subjective preference for DIR+NR settings in the field. Overall, the suggested STM test represents a valuable tool for diagnosing the speech-reception problems that remain once hearing-aid amplification has been provided, and the resulting need for and benefit from DIR+NR hearing-aid processing.
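The reported R² values are squared correlations between individual STM thresholds and SRTs. As a minimal sketch of that computation (toy numbers below, not the study's data):

```python
def r_squared(x, y):
    """Squared Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov * cov / (vx * vy)

# Hypothetical per-participant values (dB); the study had N = 30.
stm_thresholds = [-12.0, -10.5, -9.0, -7.5, -6.0]
srts           = [-6.0, -5.2, -4.1, -3.9, -2.5]
print(round(r_squared(stm_thresholds, srts), 2))
```

An R² of 0.61 means that roughly 61% of the between-participant variance in aided SRTs is accounted for by a straight-line fit on the STM threshold, which is the sense in which the test "exceeds and complements" the audiogram.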
Affiliation(s)
- Johannes Zaar
- Eriksholm Research Centre, DK-3070 Snekkersten, Denmark; Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark
- Lisbeth Birkelund Simonsen
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, DK-2800 Kgs. Lyngby, Denmark; Interacoustics Research Unit, DK-2800 Kgs. Lyngby, Denmark
- Søren Laugesen
- Interacoustics Research Unit, DK-2800 Kgs. Lyngby, Denmark

5
Christensen JH, Whiston H, Lough M, Gil-Carvajal JC, Rumley J, Saunders GH. Evaluating Real-World Benefits of Hearing Aids With Deep Neural Network-Based Noise Reduction: An Ecological Momentary Assessment Study. Am J Audiol 2024; 33:1-12. [PMID: 38354098; DOI: 10.1044/2023_aja-23-00149]
Abstract
PURPOSE Noise reduction technologies in hearing aids provide benefits under controlled conditions, but differences in their real-life effectiveness have not been established. We propose that a deep neural network (DNN)-based noise reduction system trained on naturalistic sound environments provides different real-life benefits than traditional systems.

METHOD Real-life listening experiences, collected with Ecological Momentary Assessments (EMAs), are compared between participants who used two premium hearing-aid models. One model (HA1) used traditional noise reduction; the other (HA2) used DNN-based noise reduction. Participants reported listening experiences several times a day while ambient SPL, SNR, and hearing-aid volume adjustments were recorded. Forty experienced hearing-aid users completed a total of 3,614 EMAs and recorded 6,812 hr of sound data across two 14-day wear periods.

RESULTS Linear mixed-effects analyses document that participants' assessments of ambient noisiness were positively associated with SPL and negatively associated with SNR, but were not otherwise affected by hearing-aid model. Likewise, mean satisfaction with the two models did not differ. However, individual satisfaction ratings for HA1 depended on ambient SNR, which was not the case for HA2.

CONCLUSIONS Hearing aids with DNN-based noise reduction yielded consistent sound satisfaction regardless of the level of background noise, unlike hearing aids implementing noise reduction based on traditional statistical models. While the two hearing-aid models also differed on other parameters (e.g., shape), these differences are unlikely to explain the difference in how background noise affects sound satisfaction with the aids.

SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25114526
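The key finding, that satisfaction with HA1 but not HA2 depended on ambient SNR, amounts to comparing per-model slopes of satisfaction against SNR. A minimal sketch with hypothetical numbers (the study itself used linear mixed-effects models over all participants):

```python
def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical EMAs: (ambient SNR in dB, satisfaction rating) per model.
ha1 = [(0, 3), (5, 5), (10, 7), (15, 8)]   # traditional NR: SNR-dependent
ha2 = [(0, 6), (5, 6), (10, 7), (15, 6)]   # DNN-based NR: roughly flat

slope_ha1 = slope(*zip(*ha1))
slope_ha2 = slope(*zip(*ha2))
# A near-zero slope means satisfaction is independent of background noise.
print(slope_ha1, slope_ha2)
```

In the mixed-effects framing, this corresponds to a significant SNR-by-model interaction on satisfaction; the toy slopes just make the contrast visible.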
Affiliation(s)
- Helen Whiston
- Manchester Centre for Audiology and Deafness, School of Health Sciences, The University of Manchester, United Kingdom
- Melanie Lough
- Manchester Centre for Audiology and Deafness, School of Health Sciences, The University of Manchester, United Kingdom
- Gabrielle H Saunders
- Manchester Centre for Audiology and Deafness, School of Health Sciences, The University of Manchester, United Kingdom

6
Bachmann FL, Kulasingham JP, Eskelund K, Enqvist M, Alickovic E, Innes-Brown H. Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field. Trends Hear 2024; 28:23312165241246596. [PMID: 38738341; DOI: 10.1177/23312165241246596]
Abstract
The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have recently been detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal-hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrate that subcortical responses to continuous speech can be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data were sufficient for the TRFs of all participants to show clear wave V peaks for both earphone and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent between the earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks than earphone TRFs (12 minutes), possibly due to room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
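A TRF is typically estimated as a regularized linear mapping from time-lagged stimulus features to the EEG. A minimal ridge-regression sketch on simulated data (the study additionally passes the stimulus through rectification or an auditory nerve model before this step; all signals below are synthetic):

```python
import numpy as np

def estimate_trf(stimulus, eeg, n_lags, alpha=1.0):
    """Ridge-regression TRF: solve (X'X + aI) w = X'y, where the columns
    of X are time-lagged copies of the stimulus feature."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[: n - lag]
    XtX = X.T @ X + alpha * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ eeg)

rng = np.random.default_rng(0)
fs = 100                                   # hypothetical sampling rate, Hz
stim = rng.standard_normal(fs * 60)        # 1 minute of stimulus feature
true_trf = np.array([0.0, 0.5, 1.0, 0.5, 0.0, -0.3, 0.0, 0.0])
eeg = np.convolve(stim, true_trf)[: len(stim)] + 0.1 * rng.standard_normal(len(stim))

trf = estimate_trf(stim, eeg, n_lags=8)
# The estimate recovers the simulated response, with its peak at lag 2.
print(int(np.argmax(trf)))
```

In the real analysis, the wave V latency of the estimated TRF plays the role the peak lag plays in this toy example, and the "amount of data" question is how much of `stim`/`eeg` is needed before that peak is clearly above the noise floor.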
Affiliation(s)
- Joshua P Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark

7
Diehl PU, Singer Y, Zilly H, Schönfeld U, Meyer-Rachner P, Berry M, Sprekeler H, Sprengel E, Pudszuhn A, Hofmann VM. Restoring speech intelligibility for hearing aid users with deep learning. Sci Rep 2023; 13:2719. [PMID: 36792797; PMCID: PMC9932078; DOI: 10.1038/s41598-023-29871-8]
Abstract
Almost half a billion people worldwide suffer from disabling hearing loss. While hearing aids can partially compensate, a large proportion of users struggle to understand speech in situations with background noise. Here, we present a deep learning-based algorithm that selectively suppresses noise while maintaining speech signals. The algorithm restores speech intelligibility for hearing aid users to the level of control subjects with normal hearing. It consists of a deep network trained on a large custom database of noisy speech signals and further optimized by a neural architecture search, using a novel deep learning-based metric for speech intelligibility. The network achieves state-of-the-art denoising on a range of human-graded assessments, generalizes across different noise categories, and, in contrast to classic beamforming approaches, operates on a single microphone. The system runs in real time on a laptop, suggesting that large-scale deployment on hearing aid chips could be achieved within a few years. Deep learning-based denoising therefore holds the potential to improve the quality of life of millions of hearing-impaired people soon.
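Single-microphone DNN denoising of the kind described here typically works by predicting a time-frequency gain and applying it to the noisy spectrogram. A toy sketch with an oracle ideal-ratio mask standing in for the network's prediction (hypothetical magnitudes; the paper's actual architecture and training metric are not reproduced):

```python
import numpy as np

def apply_mask(noisy_mag, mask):
    """Apply a time-frequency gain mask (values in [0, 1]) to a noisy
    magnitude spectrogram; a DNN would predict `mask` from `noisy_mag`."""
    return noisy_mag * np.clip(mask, 0.0, 1.0)

def ideal_ratio_mask(speech_mag, noise_mag):
    """Oracle IRM: a common training target for denoising networks."""
    return speech_mag / (speech_mag + noise_mag + 1e-12)

# Hypothetical |STFT| frames (rows: time, columns: frequency bins).
speech = np.array([[4.0, 0.1], [3.0, 0.2]])
noise = np.array([[1.0, 2.0], [0.5, 3.0]])

mask = ideal_ratio_mask(speech, noise)
denoised = apply_mask(speech + noise, mask)
# Speech-dominated bins are kept; noise-dominated bins are attenuated.
print(np.round(denoised, 2))
```

With the oracle mask and additive magnitudes, the output equals the clean speech exactly; a trained network only approximates this, and the paper's contribution lies in how that approximation is learned and evaluated.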
Affiliation(s)
- Peter Udo Diehl
- Audatic, Friedrichstr. 210, 10117 Berlin, Germany
- Department of Otorhinolaryngology, Head and Neck Surgery, Charité-Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Campus Benjamin Franklin, Berlin, Germany
- Yosef Singer
- Audatic, Friedrichstr. 210, 10117 Berlin, Germany
- Hannes Zilly
- Audatic, Friedrichstr. 210, 10117 Berlin, Germany
- Uwe Schönfeld
- Department of Otorhinolaryngology, Head and Neck Surgery, Charité-Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Campus Benjamin Franklin, Berlin, Germany
- Mark Berry
- Audatic, Friedrichstr. 210, 10117 Berlin, Germany
- Henning Sprekeler
- Department for Electrical Engineering and Computer Science, Technische Universität Berlin, Berlin, Germany
- Bernstein Center for Computational Neuroscience Berlin, Philippstr. 13, 10115 Berlin, Germany
- Exzellenzcluster Science of Intelligence, Technische Universität Berlin, Marchstr. 23, 10587 Berlin, Germany
- Elias Sprengel
- Audatic, Friedrichstr. 210, 10117 Berlin, Germany
- Annett Pudszuhn
- Department of Otorhinolaryngology, Head and Neck Surgery, Charité-Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Campus Benjamin Franklin, Berlin, Germany
- Veit M. Hofmann
- Department of Otorhinolaryngology, Head and Neck Surgery, Charité-Universitätsmedizin Berlin, Freie Universität Berlin, Humboldt-Universität zu Berlin, and Berlin Institute of Health, Campus Benjamin Franklin, Berlin, Germany

8
Green T, Hilkhuysen G, Huckvale M, Rosen S, Brookes M, Moore A, Naylor P, Lightburn L, Xue W. Speech recognition with a hearing-aid processing scheme combining beamforming with mask-informed speech enhancement. Trends Hear 2022; 26:23312165211068629. [PMID: 34985356; PMCID: PMC8744079; DOI: 10.1177/23312165211068629]
Abstract
A signal processing approach combining beamforming with mask-informed speech enhancement was assessed by measuring sentence recognition in listeners with mild-to-moderate hearing impairment in adverse listening conditions that simulated the output of behind-the-ear hearing aids in a noisy classroom. Two types of beamforming were compared: binaural, with the two microphones of each aid treated as a single array, and bilateral, where independent left and right beamformers were derived. Binaural beamforming produces a narrower beam, maximising improvement in signal-to-noise ratio (SNR), but eliminates the spatial diversity that is preserved in bilateral beamforming. Each beamformer type was optimised for the true target position and implemented with and without additional speech enhancement in which spectral features extracted from the beamformer output were passed to a deep neural network trained to identify time-frequency regions dominated by target speech. Additional conditions comprising binaural beamforming combined with speech enhancement implemented using Wiener filtering or modulation-domain Kalman filtering were tested in normally-hearing (NH) listeners. Both beamformer types gave substantial improvements relative to no processing, with significantly greater benefit for binaural beamforming. Performance with additional mask-informed enhancement was poorer than with beamforming alone, for both beamformer types and both listener groups. In NH listeners the addition of mask-informed enhancement produced significantly poorer performance than both other forms of enhancement, neither of which differed from the beamformer alone. In summary, the additional improvement in SNR provided by binaural beamforming appeared to outweigh loss of spatial information, while speech understanding was not further improved by the mask-informed enhancement method implemented here.
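The SNR benefit of beamforming rests on coherent addition of the target across microphones while uncorrelated noise averages down. A minimal delay-and-sum sketch with two simulated microphones (hypothetical geometry and noise levels; the study's binaural and bilateral beamformers are considerably more sophisticated):

```python
import numpy as np

def delay_and_sum(mic_signals, delays_samples):
    """Align each microphone by its integer steering delay, then average:
    the target adds coherently while independent noise averages down."""
    n = min(len(s) - d for s, d in zip(mic_signals, delays_samples))
    aligned = [np.asarray(s)[d : d + n] for s, d in zip(mic_signals, delays_samples)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(1)
n_samp, tau = 10_000, 3                    # tau: inter-mic target delay, samples
target = rng.standard_normal(n_samp)
mic1 = target + 0.5 * rng.standard_normal(n_samp)
mic2 = np.concatenate([np.zeros(tau), target])[:n_samp] + 0.5 * rng.standard_normal(n_samp)

out = delay_and_sum([mic1, mic2], [0, tau])  # steer toward the known target
noise_in = np.var(mic1 - target)
noise_out = np.var(out - target[: len(out)])
# Two microphones roughly halve the residual noise power (~3 dB SNR gain).
print(noise_out < noise_in)
```

Treating both hearing aids as one four-microphone binaural array narrows the beam further and increases this gain, at the cost of the spatial cues that the bilateral configuration preserves, which is the trade-off the study evaluates.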
Affiliation(s)
- Tim Green
- Department of Speech, Hearing and Phonetic Sciences, UCL, London, UK
- Gaston Hilkhuysen
- Department of Speech, Hearing and Phonetic Sciences, UCL, London, UK
- Mark Huckvale
- Department of Speech, Hearing and Phonetic Sciences, UCL, London, UK
- Stuart Rosen
- Department of Speech, Hearing and Phonetic Sciences, UCL, London, UK
- Mike Brookes
- Department of Electrical and Electronic Engineering, Imperial College, London, UK
- Alastair Moore
- Department of Electrical and Electronic Engineering, Imperial College, London, UK
- Patrick Naylor
- Department of Electrical and Electronic Engineering, Imperial College, London, UK
- Leo Lightburn
- Department of Electrical and Electronic Engineering, Imperial College, London, UK
- Wei Xue
- Department of Electrical and Electronic Engineering, Imperial College, London, UK