1.
Dapper K, Wolpert SM, Schirmer J, Fink S, Gaudrain E, Başkent D, Singer W, Verhulst S, Braun C, Dalhoff E, Rüttiger L, Munk MHJ, Knipper M. Age-dependent deficits in speech recognition in quiet and noise are reflected in MGB activity and cochlear onset coding. Neuroimage 2025; 305:120958. [PMID: 39622462 DOI: 10.1016/j.neuroimage.2024.120958]
Abstract
The slowing and reduction of auditory responses in the brain are recognized accompaniments of elevated pure-tone thresholds, impaired speech recognition, and aging. However, it remains controversial whether central slowing is primarily linked to brain processes such as atrophy, or is also associated with slowed temporal neural processing from the periphery. Here we analyzed electroencephalogram (EEG) responses that most likely reflect medial geniculate body (MGB) activity during passive listening to phonemes in 80 subjects aged 18 to 76 years, in whom the peripheral auditory responses had been analyzed in detail (Schirmer et al., 2024). We observed that passive listening to vowels and phonemes, specifically designed to rely either on the temporal fine structure (TFS) for frequencies below the phase-locking limit (<1500 Hz) or on the temporal envelope (TENV) for frequencies above it, entrained lower or higher neural EEG responses. While previous views predict that speech content, particularly in noise, is encoded through TENV, here a decrease with age in EEG amplitude in response to phonemes relying on TENV coding could also be linked to poorer speech-recognition thresholds in quiet. In addition, increased phoneme-evoked EEG delay correlated with elevated extended high-frequency (EHF) thresholds for phoneme changes that relied on TFS and TENV coding. This may suggest a role of pure-tone threshold averages (PTA) in the EHF range for TENV and TFS beyond sound localization that is reflected in likely MGB delays. When speech-recognition thresholds were normalized for pure-tone thresholds, however, the EEG amplitude effects were no longer significant and thereby became independent of age. Under these conditions, poor speech recognition in quiet was accompanied by delayed EEG responses to phonemes that relied on TFS coding, while poor speech recognition in ipsilateral noise showed a trend toward shortened EEG delays for phonemes that relied on TENV coding. Based on previous analyses in these same subjects, elevated thresholds in extended high-frequency regions were linked to cochlear synaptopathy and auditory brainstem delays. Also, independent of hearing loss, groups performing poorly in quiet or in ipsilateral noise during TFS or TENV coding could be linked to lower or better outer hair cell performance and to delayed or steeper auditory nerve responses at stimulus onset. The amplitude and latency of MGB responses to phonemes requiring TFS or TENV coding, dependent on or independent of hearing loss, may thus be a new predictor of poor speech recognition in quiet and ipsilateral noise that links deficits in synchronicity at stimulus onset to neocortical activity. Amplitudes and delays of speech EEG responses to syllables should be reconsidered for future hearing-aid studies.
Affiliation(s)
- Konrad Dapper
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany; Department of Biology, Technical University Darmstadt, 64287 Darmstadt, Germany
- Stephan M Wolpert
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Jakob Schirmer
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Stefan Fink
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Etienne Gaudrain
  - Lyon Neuroscience Research Center, Université Claude Bernard Lyon 1, CNRS UMR5292, INSERM U1028, Centre Hospitalier Le Vinatier - Bâtiment 462 - Neurocampus, 95 boulevard Pinel, Lyon, France
- Deniz Başkent
  - Department of Otorhinolaryngology, University Medical Center Groningen (UMCG), Hanzeplein 1, BB21, Groningen 9700RB, the Netherlands
- Wibke Singer
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Sarah Verhulst
  - Department of Information Technology, Ghent University, Zwijnaarde 9052, Belgium
- Christoph Braun
  - MEG-Center, University of Tübingen, Tübingen 72076, Germany; HIH, Hertie Institute for Clinical Brain Research, Tübingen 72076, Germany; CIMeC, Center for Mind and Brain Research, University of Trento, Rovereto 38068, Italy
- Ernst Dalhoff
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Lukas Rüttiger
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
- Matthias H J Munk
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany; Department of Biology, Technical University Darmstadt, 64287 Darmstadt, Germany
- Marlies Knipper
  - Department of Otolaryngology, Head and Neck, University of Tübingen, Tübingen 72076, Germany
2.
Dieudonné B, Decruy L, Vanthornhout J. Neural tracking of the speech envelope predicts binaural unmasking. Eur J Neurosci 2025; 61:e16638. [PMID: 39653384 DOI: 10.1111/ejn.16638]
Abstract
Binaural unmasking is the remarkable phenomenon whereby it is substantially easier to detect a signal in noise when the interaural parameters of the signal differ from those of the noise - a useful mechanism in so-called cocktail-party scenarios. In this study, we investigated the effect of binaural unmasking on neural tracking of the speech envelope. We measured EEG in 8 participants who listened to speech in noise at a fixed signal-to-noise ratio, in two conditions: one in which speech and noise had the same interaural phase difference (both speech and noise having an opposite waveform across ears, SπNπ), and one in which the interaural phase difference of the speech differed from that of the noise (only the speech having an opposite waveform across ears, SπN). We measured a clear benefit of binaural unmasking in behavioural speech-understanding scores, accompanied by increased neural tracking of the speech envelope. Moreover, analysis of the temporal response functions revealed that binaural unmasking also resulted in decreased peak latencies and increased peak amplitudes. Our results are consistent with previous research using auditory evoked potentials and steady-state responses to quantify binaural unmasking at cortical levels. Moreover, they confirm that neural tracking of speech is associated with speech understanding, even when the acoustic signal-to-noise ratio is kept constant. From a clinical perspective, these results offer the potential for objective evaluation of binaural speech-understanding mechanisms, and for objective detection of pathologies sensitive to binaural processing, such as asymmetric hearing loss, auditory neuropathy and age-related deficits.
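Neural tracking of the speech envelope, as in this study, is typically quantified by estimating a temporal response function (TRF) that maps the stimulus envelope to the EEG. The following is only a rough single-channel illustration, not the authors' pipeline: the simulated envelope, 5-sample delay, and ridge parameter are hypothetical, and the TRF is estimated by ridge regression on a matrix of lagged stimulus copies.

```python
import numpy as np

def lagged(x, n_lags):
    """Design matrix whose k-th column is x delayed by k samples."""
    X = np.zeros((len(x), n_lags))
    for k in range(n_lags):
        X[k:, k] = x[:len(x) - k]
    return X

def fit_trf(stimulus, response, n_lags=16, ridge=1.0):
    """Ridge-regression estimate of a temporal response function."""
    X = lagged(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_lags), X.T @ response)

rng = np.random.default_rng(0)
env = rng.standard_normal(2000)                        # toy stimulus "envelope"
delayed = np.concatenate([np.zeros(5), env[:-5]])      # 5-sample neural delay
eeg = 0.8 * delayed + 0.3 * rng.standard_normal(2000)  # toy single-channel EEG
w = fit_trf(env, eeg)
peak_lag = int(np.argmax(np.abs(w)))                   # recovered response latency
tracking = np.corrcoef(lagged(env, 16) @ w, eeg)[0, 1]
```

The recovered peak lag corresponds to the simulated delay, and the correlation between predicted and observed signals plays the role of the "neural tracking" score; in this framework, binaural unmasking would appear as a shift in peak latency and a change in tracking strength.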
Affiliation(s)
- Benjamin Dieudonné
  - Experimental Otorhinolaryngology, Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium
- Lien Decruy
  - Experimental Otorhinolaryngology, Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium
- Jonas Vanthornhout
  - Experimental Otorhinolaryngology, Department of Neurosciences, KU Leuven - University of Leuven, Leuven, Belgium
3.
Wang S, Wong LLN, Chen Y. Development of the Mandarin Reading Span Test and confirmation of its relationship with speech perception in noise. Int J Audiol 2024; 63:1009-1018. [PMID: 38270384 DOI: 10.1080/14992027.2024.2305685]
Abstract
OBJECTIVE This study aimed to develop a dual-task Mandarin Reading Span Test (RST) to assess verbal working memory as it relates to speech perception in noise. DESIGN The test material was developed taking into account psycholinguistic factors (i.e. sentence structure, number of syllables, word familiarity, and sentence plausibility) to achieve good test reliability and face validity. The relationship between the 28-sentence Mandarin RST and speech perception in noise was confirmed using three speech-perception-in-noise measures containing varying levels of contextual and linguistic information. STUDY SAMPLE The study comprised 42 young adults with normal hearing and 56 older adults who were hearing aid users with moderate to severe hearing loss. RESULTS In older hearing aid users, the 28-sentence RST showed significant correlations with speech reception thresholds as measured by three Mandarin sentence-in-noise tests (rs or r = -.681 to -.419), but not with the 2-digit-sequence Digit-in-Noise Test. CONCLUSION The newly developed dual-task Mandarin RST, constructed with careful psycholinguistic consideration, demonstrates a significant relationship with sentence perception in noise. This suggests that the Mandarin RST could serve as a measure of verbal working memory.
Affiliation(s)
- Shangqiguo Wang
  - Unit of Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Lena L N Wong
  - Unit of Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Yuan Chen
  - Department of Special Education and Counselling, Integrated Center for Wellbeing (I-WELL), The Education University of Hong Kong, Taipo, New Territories, China
4.
Astefanei O, Cozma S, Martu C, Serban R, Butnaru C, Moraru P, Musat G, Radulescu L. Measuring Speech Intelligibility with Romanian Synthetic Unpredictable Sentences in Normal Hearing. Audiol Res 2024; 14:1028-1044. [PMID: 39727608 DOI: 10.3390/audiolres14060085]
Abstract
BACKGROUND/OBJECTIVES Understanding speech in background noise is a challenging task for listeners with normal hearing and even more so for individuals with hearing impairments. The primary objective of this study was to develop Romanian speech material in noise to assess speech perception in diverse auditory populations, including individuals with normal hearing and those with various types of hearing loss. The goal was to create a versatile tool that can be used in different configurations and expanded for future studies examining auditory performance across various populations and rehabilitation methods. METHODS This study outlines the development of Romanian speech material for speech-in-noise testing, initially presented to normal-hearing listeners to establish baseline data. The material consisted of unpredictable sentences, each with a fixed syntactic structure, generated using speech synthesis from all Romanian phonemes. A total of 50 words were selected and organized into 15 lists, each containing 10 sentences with five words per sentence. Two evaluation methods were applied in two sessions to 20 normal-hearing volunteers. The first was an adaptive speech-in-noise recognition test designed to assess the speech recognition threshold (SRT) by adjusting the signal-to-noise ratio (SNR) based on individual performance. The intelligibility of the lists was further assessed at the sentence level to evaluate the training effect. The second method was used to obtain normative data for the SRT, defined as the SNR at which a subject correctly recognizes 50% of the speech material, and for the slope, the steepness of the psychometric function derived from recognition scores measured at three fixed SNRs (-10 dB, -7 dB, and -4 dB) during the measurement phase. RESULTS The adaptive method showed that the training effect was established after two lists and remained consistent across both sessions. During the measurement phase, the fixed-SNR method yielded a mean SRT50 of -7.38 dB with a slope of 11.39%. These results provide reliable and comparable data, supporting the validity of the material for both general population testing and future clinical applications. CONCLUSIONS This study demonstrates that the newly developed Romanian speech material is effective for evaluating speech recognition abilities in noise. The training phase successfully mitigated initial unfamiliarity with the material, ensuring that the results reflect realistic auditory performance. The obtained SRT and slope values provide valuable normative data for future auditory assessments. Owing to its flexible design, the material can be extended to accommodate various auditory rehabilitation methods and diverse populations in future studies.
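The SRT50 and slope reported above are parameters of a logistic psychometric function fitted to recognition scores at the three fixed SNRs. The sketch below illustrates such a fit with hypothetical scores and a simple grid-search least-squares fit; the authors' actual fitting procedure is not specified here.

```python
import numpy as np

def logistic(snr, srt, slope):
    """Psychometric function: proportion correct vs. SNR.
    `slope` is the steepness at the midpoint, in proportion correct per dB."""
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt)))

def fit_srt(snrs, scores):
    """Grid-search least-squares fit returning (SRT50 in dB SNR, slope)."""
    best, best_err = (0.0, 0.0), np.inf
    for srt in np.arange(-15.0, 0.0, 0.05):
        for slope in np.arange(0.02, 0.30, 0.005):
            err = np.sum((logistic(snrs, srt, slope) - scores) ** 2)
            if err < best_err:
                best, best_err = (srt, slope), err
    return best

# hypothetical recognition scores at the three fixed SNRs used in the study
snrs = np.array([-10.0, -7.0, -4.0])
scores = np.array([0.20, 0.55, 0.85])
srt, slope = fit_srt(snrs, scores)   # SRT50 near -7.4 dB, slope near 13%/dB
```

Multiplying the fitted slope parameter by 100 gives the slope in percent per dB, the unit in which the 11.39% figure above is expressed.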
Affiliation(s)
- Oana Astefanei
  - Doctoral School, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
  - ENT Clinic Department, Clinical Rehabilitation Hospital, 700661 Iasi, Romania
- Sebastian Cozma
  - Doctoral School, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
  - ENT Clinic Department, Clinical Rehabilitation Hospital, 700661 Iasi, Romania
  - Department of Otorhinolaryngology, Faculty of Medicine, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
- Cristian Martu
  - ENT Clinic Department, Clinical Rehabilitation Hospital, 700661 Iasi, Romania
  - Department of Otorhinolaryngology, Faculty of Medicine, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
- Roxana Serban
  - ENT Clinic Department, Clinical Rehabilitation Hospital, 700661 Iasi, Romania
  - Department of Otorhinolaryngology, Faculty of Medicine, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
- Corina Butnaru
  - ENT Clinic Department, Clinical Rehabilitation Hospital, 700661 Iasi, Romania
  - Department of Otorhinolaryngology, Faculty of Medicine, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
- Petronela Moraru
  - Doctoral School, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
  - Department of Otorhinolaryngology, Faculty of Medicine, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
- Gabriela Musat
  - Department of Otorhinolaryngology, "Carol Davila" University of Medicine and Pharmacy, 050474 Bucharest, Romania
- Luminita Radulescu
  - Doctoral School, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
  - ENT Clinic Department, Clinical Rehabilitation Hospital, 700661 Iasi, Romania
  - Department of Otorhinolaryngology, Faculty of Medicine, Grigore T Popa University of Medicine and Pharmacy, 700115 Iasi, Romania
5.
Fatehifar M, Schlittenlacher J, Almufarrij I, Wong D, Cootes T, Munro KJ. Applications of automatic speech recognition and text-to-speech technologies for hearing assessment: a scoping review. Int J Audiol 2024:1-12. [PMID: 39530742 DOI: 10.1080/14992027.2024.2422390]
Abstract
OBJECTIVE To explore applications of automatic speech recognition and text-to-speech technologies in hearing assessment and in evaluations of hearing aids. DESIGN The review protocol was registered in the INPLASY database, and the review was performed following the PRISMA scoping review guidelines. A search of ten databases was conducted in January 2023 and updated in June 2024. STUDY SAMPLE Studies that used automatic speech recognition or text-to-speech to assess measures of hearing ability (e.g. speech reception threshold) or to configure hearing aids were retrieved. Of the 2942 records found, 28 met the inclusion criteria. RESULTS The results indicated that text-to-speech could effectively replace recorded stimuli in speech intelligibility tests, requiring less effort from experimenters without negatively impacting outcomes (n = 5). Automatic speech recognition captured verbal responses accurately, allowing reliable speech reception threshold measurements without human supervision (n = 7). Moreover, automatic speech recognition was employed to simulate participants' hearing, with high correlations between simulated and empirical data (n = 14). Finally, automatic speech recognition was used to optimise hearing aid configurations, leading to higher speech intelligibility for wearers compared to the original configuration (n = 3). CONCLUSIONS Automatic speech recognition and text-to-speech systems have the potential to enhance the accessibility and efficiency of hearing assessments, offering unsupervised testing options and facilitating hearing aid personalisation.
Affiliation(s)
- Mohsen Fatehifar
  - Manchester Centre for Audiology and Deafness (ManCAD), School of Health Sciences, University of Manchester, Manchester, UK
- Josef Schlittenlacher
  - Department of Speech, Hearing and Phonetic Sciences, University College London, London, UK
- Ibrahim Almufarrij
  - Manchester Centre for Audiology and Deafness (ManCAD), School of Health Sciences, University of Manchester, Manchester, UK
  - Department of Rehabilitation Sciences, College of Applied Medical Sciences, King Saud University, Riyadh, Saudi Arabia
- David Wong
  - Leeds Institute of Health Sciences, University of Leeds, Leeds, UK
- Tim Cootes
  - Centre for Imaging Sciences, University of Manchester, Manchester, UK
- Kevin J Munro
  - Manchester Centre for Audiology and Deafness (ManCAD), School of Health Sciences, University of Manchester, Manchester, UK
  - Manchester Academic Health Science Centre, Manchester University Hospitals NHS Foundation Trust, Manchester, UK
6.
Winkler A, Warkentin L, Denk F, Husstedt H, Sankowsky-Rothe T, Blau M, Holube I. Reference speech-recognition curves for a German monosyllabic test in noise: effects of loudspeaker configuration and room acoustics. Int J Audiol 2024:1-10. [PMID: 39508524 DOI: 10.1080/14992027.2024.2401519]
Abstract
OBJECTIVE Measurement of reference speech-recognition curves for a specific speech test in typical clinical testing environments and for different loudspeaker configurations. DESIGN Speech-recognition scores were measured at four signal-to-noise ratios for five loudspeaker configurations in two anechoic rooms and in four audiometric test rooms with low reverberation times. STUDY SAMPLE 240 young adults (aged 18-25 years) without hearing impairment took part in the measurements. RESULTS Reference speech-recognition curves for speech and noise from the front (S0N0) were similar across rooms. Compared to S0N0, lower speech-recognition thresholds (SRTs) were observed for all other loudspeaker configurations, in which speech and noise were spatially separated. This spatial release from masking was significantly reduced in the audiometric test rooms compared to the anechoic rooms. A binaural speech-intelligibility model was used to verify the influence of room acoustic properties and loudspeaker configuration on the SRT. CONCLUSIONS Speech-recognition curves for spatially separated loudspeaker configurations depend on the room acoustic properties, even in audiometric test rooms with low reverberation times. This makes it more difficult to compare clinical measurements with reference speech-recognition curves, or even with data measured in a different test room. It is thus recommended to document the loudspeaker configuration and test room for each clinical measurement.
Affiliation(s)
- Alexandra Winkler
  - Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
  - Cluster of Excellence Hearing4All, Oldenburg, Germany
- Florian Denk
  - German Institute of Hearing Aids, Lübeck, Germany
- Tobias Sankowsky-Rothe
  - Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Matthias Blau
  - Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
  - Cluster of Excellence Hearing4All, Oldenburg, Germany
- Inga Holube
  - Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
  - Cluster of Excellence Hearing4All, Oldenburg, Germany
7.
Zaar J, Simonsen LB, Sanchez-Lopez R, Laugesen S. The Audible Contrast Threshold (ACT) test: A clinical spectro-temporal modulation detection test. Hear Res 2024; 453:109103. [PMID: 39243488 DOI: 10.1016/j.heares.2024.109103]
Abstract
Over the last decade, multiple studies have shown that hearing-impaired listeners' speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick and simple, clinically viable STM test, named the Audible Contrast Threshold (ACT™) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented bilaterally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient's threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data with a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, this best paradigm, with 1-second noise "waves" (windowed noise), was further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the "normalized Contrast Level" (dB nCL) scale was defined, where 0 ± 4 dB nCL corresponds to normal performance and elevated dB nCL values indicate the degree of audible contrast loss. Overall, the results of the present study suggest that the ACT test may be considered a reliable, quick, and simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same setup, adding only a few minutes to the procedure.
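A Hughson-Westlake-style track with a three-out-of-five criterion, as used by the ACT, can be illustrated schematically. The starting level, step sizes, and deterministic toy listener below are assumptions for illustration only; the actual test additionally refines the tracked threshold by fitting a logistic function to the collected responses.

```python
def hughson_westlake(detects, start=40.0, down=10.0, up=5.0, max_trials=80):
    """Schematic Hughson-Westlake-style track: descend by `down` after each
    detection, ascend by `up` after each miss; the threshold is the first
    level detected on three ascending presentations (three-out-of-five)."""
    level, prev_heard = start, None
    asc = {}  # level -> (hits, tries) counted on ascending presentations only
    for _ in range(max_trials):
        heard = detects(level)
        if prev_heard is False:            # this presentation follows a miss
            hits, tries = asc.get(level, (0, 0))
            hits, tries = hits + int(heard), tries + 1
            asc[level] = (hits, tries)
            if hits >= 3 and tries <= 5:   # three-out-of-five criterion met
                return level
        prev_heard = heard
        level = level - down if heard else level + up
    return None                            # no threshold within trial budget

# deterministic toy listener with a true detection threshold of 30 dB
threshold = hughson_westlake(lambda level: level >= 30.0)
```

With this deterministic listener the track converges on 30 dB, the lowest presented level the listener always detects on ascending runs; a real patient's probabilistic responses are what the subsequent logistic-function post-processing is meant to smooth out.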
Affiliation(s)
- Johannes Zaar
  - Eriksholm Research Centre, Rørtangvej 20, 3070 Snekkersten, Denmark; Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
- Lisbeth Birkelund Simonsen
  - Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark; Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
- Raul Sanchez-Lopez
  - Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark; Institute of Globally Distributed Open Research and Education (IGDORE)
- Søren Laugesen
  - Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
8.
De Poortere N, Keshishzadeh S, Keppler H, Dhooge I, Verhulst S. Intrasubject variability in potential early markers of sensorineural hearing damage. J Acoust Soc Am 2024; 156:3480-3495. [PMID: 39565141 DOI: 10.1121/10.0034423]
Abstract
The quest for noninvasive early markers of sensorineural hearing loss (SNHL) has yielded diverse measures of interest. However, comprehensive studies evaluating the test-retest reliability of multiple measures and stimuli within a single study are scarce, and a standardized clinical protocol for robust early markers of SNHL remains elusive. To address these gaps, this study explores the intrasubject variability of various potential electroencephalography (EEG) biomarkers for cochlear synaptopathy (CS) and other SNHL markers in the same individuals. Fifteen normal-hearing young adults underwent repeated measures of (extended high-frequency) pure-tone audiometry, speech-in-noise intelligibility, distortion-product otoacoustic emissions (DPOAEs), and auditory evoked potentials, comprising envelope following responses (EFRs) and auditory brainstem responses (ABRs). Results confirm high reliability for pure-tone audiometry, whereas the matrix sentence test exhibited a significant learning effect. The reliability of DPOAEs varied across three evaluation methods, each employing distinct SNR-based criteria for DPOAE data points. EFRs exhibited superior test-retest reliability compared to ABR amplitudes. Our findings emphasize the need for careful interpretation of presumed noninvasive SNHL measures. While tonal audiometry's robustness was corroborated, we observed a confounding learning effect in longitudinal speech audiometry. The variability in DPOAEs highlights the importance of consistent ear probe replacement and meticulous measurement technique, indicating that DPOAE test-retest reliability is significantly compromised under less-than-ideal conditions. As potential EEG biomarkers of CS, EFRs are preferred over ABR amplitudes based on the current study results.
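Test-retest reliability of the kind examined here is commonly quantified with an intraclass correlation coefficient (ICC). The abstract does not specify the authors' statistics, so the following is a generic illustration of a one-way random-effects ICC on simulated repeated measurements; the subject count, trait variance, and measurement noise are hypothetical.

```python
import numpy as np

def icc_1(scores):
    """One-way random-effects intraclass correlation, ICC(1,1).
    `scores`: n_subjects x n_sessions array of repeated measurements."""
    n, k = scores.shape
    subject_means = scores.mean(axis=1)
    grand_mean = scores.mean()
    # between-subject and within-subject mean squares
    msb = k * np.sum((subject_means - grand_mean) ** 2) / (n - 1)
    msw = np.sum((scores - subject_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
true_trait = rng.normal(0.0, 1.0, size=15)              # stable subject trait
test_retest = true_trait[:, None] + rng.normal(0.0, 0.3, size=(15, 2))
icc = icc_1(test_retest)   # high: most variance is between subjects
```

In this framework, a measure like the EFR with superior test-retest reliability would show an ICC closer to 1 than a noisier measure like the ABR amplitude, because less of its total variance comes from session-to-session fluctuations.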
Affiliation(s)
- Nele De Poortere
  - Department of Rehabilitation Sciences-Audiology, Ghent University, Ghent, Belgium
- Sarineh Keshishzadeh
  - Department of Information Technology-Hearing Technology @ WAVES, Ghent University, Ghent, Belgium
- Hannah Keppler
  - Department of Rehabilitation Sciences-Audiology, Ghent University, Ghent, Belgium
  - Department of Head and Skin, Ghent University, Ghent, Belgium
- Ingeborg Dhooge
  - Department of Head and Skin, Ghent University, Ghent, Belgium
  - Department of Ear, Nose and Throat, Ghent University Hospital, Ghent, Belgium
- Sarah Verhulst
  - Department of Information Technology-Hearing Technology @ WAVES, Ghent University, Ghent, Belgium
9.
Stronks HC, Tops AL, Quach KW, Briaire JJ, Frijns JHM. Listening Effort Measured With Pupillometry in Cochlear Implant Users Depends on Sound Level, But Not on the Signal to Noise Ratio When Using the Matrix Test. Ear Hear 2024; 45:1461-1473. [PMID: 38886888 PMCID: PMC11486951 DOI: 10.1097/aud.0000000000001529]
Abstract
OBJECTIVES We investigated whether listening effort depends on task difficulty for cochlear implant (CI) users when using the Matrix speech-in-noise test. To this end, we measured peak pupil dilation (PPD) at a wide range of signal-to-noise ratios (SNRs) by systematically changing the noise level at a constant speech level, and vice versa. DESIGN A group of mostly elderly CI users performed the Dutch/Flemish Matrix test in quiet and in multitalker babble at different SNRs. SNRs were set relative to the speech-recognition threshold (SRT), namely at the SRT and at 5 and 10 dB above it (0, +5, and +10 dB re SRT). The latter two conditions were obtained either by varying the speech level (at a fixed noise level of 60 dBA) or by varying the noise level (at a fixed speech level). We compared these PPDs with those of a group of typical-hearing (TH) listeners. In addition, listening effort was assessed with subjective ratings on a Likert scale. RESULTS PPD for the CI group did not depend significantly on SNR, whereas SNR significantly affected PPDs for TH listeners. Subjective effort ratings depended significantly on SNR for both groups. For CI users, PPDs were significantly larger and effort was rated higher when the speech level was varied and the noise level fixed. By contrast, for TH listeners, effort ratings were significantly higher and performance scores lower when the noise level was varied and the speech level fixed. CONCLUSIONS The lack of a significant effect of varying SNR on PPD suggests that the Matrix test may not be a feasible speech test for measuring listening effort with pupillometric measures in CI users. A rating test appeared more promising in this population, corroborating earlier reports that subjective measures may reflect different dimensions of listening effort than pupil dilation. Establishing the SNR by varying the speech or the noise level can have subtle but significant effects on measures of listening effort, and these effects can differ between TH listeners and CI users.
Affiliation(s)
- Hendrik Christiaan Stronks
  - Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
  - Leiden Institute for Brain and Cognition, Leiden, the Netherlands
- Annemijn Laura Tops
  - Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Kwong Wing Quach
  - Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Jeroen Johannes Briaire
  - Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
- Johan Hubertus Maria Frijns
  - Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, the Netherlands
  - Leiden Institute for Brain and Cognition, Leiden, the Netherlands
  - Department of Bioelectronics, Delft University of Technology, Delft, the Netherlands
10.
Giovanelli E, Valzolgher C, Gessa E, Rosi T, Visentin C, Prodi N, Pavani F. Metacognition for hearing in noise: a comparison between younger and older adults. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2024; 31:869-890. [PMID: 37971362 DOI: 10.1080/13825585.2023.2281691]
Abstract
Metacognition entails knowledge of one's own cognitive skills, perceived self-efficacy and locus of control when performing a task, and performance monitoring. Age-related changes in metacognition have been observed in metamemory, whereas their occurrence for hearing remained unknown. We tested 30 older and 30 younger adults with typical hearing to assess whether age reduces metacognition for hearing sentences in noise. Metacognitive monitoring was overall comparable between older and younger adults; in fact, the older group achieved better monitoring for words in the second part of the phrase. Additionally, only older adults showed a correlation between performance and perceived confidence. No age differences were found for locus of control, knowledge, or self-efficacy. This suggests intact metacognitive skills for hearing in noise in older adults, alongside a somewhat paradoxical overconfidence in younger adults. These findings support exploiting metacognition for older adults dealing with noisy environments, since metacognition is central to implementing self-regulation strategies.
Affiliation(s)
- Elena Giovanelli
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Chiara Valzolgher
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Elena Gessa
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Tommaso Rosi
- Department of Physics, University of Trento, Trento, Italy
- Chiara Visentin
- Acoustics Research Group, Department of Engineering, University of Ferrara, Ferrara, Italy
- Nicola Prodi
- Acoustics Research Group, Department of Engineering, University of Ferrara, Ferrara, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" - CIRCLeS, Trento, Italy
11
Valzolgher C, Federici A, Giovanelli E, Gessa E, Bottari D, Pavani F. Continuous tracking of effort and confidence while listening to speech-in-noise in young and older adults. Conscious Cogn 2024; 124:103747. [PMID: 39213729 DOI: 10.1016/j.concog.2024.103747] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2024] [Revised: 08/21/2024] [Accepted: 08/23/2024] [Indexed: 09/04/2024]
Abstract
Reporting discomfort when noise affects the listening experience suggests that listeners may be aware, at least to some extent, of adverse environmental conditions and their impact on listening. This involves monitoring internal states (effort and confidence). Here we quantified continuous self-report indices that track one's own internal states and investigated age-related differences in this ability. We instructed two groups of young and older adults to continuously report their confidence and effort while listening to stories in fluctuating noise. Using cross-correlation analyses between the time series of the fluctuating noise and those of perceived effort or confidence, we showed that (1) participants adjusted their assessment of effort and confidence based on variations in the noise, with a 4 s lag; and (2) there were no differences between the groups. These findings suggest that this method could be extended to other domains, broaden the definition of metacognition, and highlight the value of this ability for older adults.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Elena Giovanelli
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Elena Gessa
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" - CIRCLeS, Trento, Italy
12
Saak S, Kothe A, Buhl M, Kollmeier B. Comparison of user interfaces for measuring the matrix sentence test on a smartphone. Int J Audiol 2024:1-13. [PMID: 39126397 DOI: 10.1080/14992027.2024.2385551] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2024] [Revised: 07/19/2024] [Accepted: 07/24/2024] [Indexed: 08/12/2024]
Abstract
OBJECTIVE Smartphone-based self-testing could facilitate large-scale data collection and remote diagnostics. For this purpose, the matrix sentence test (MST) is an ideal candidate due to its repeatability and accuracy. In clinical practice, the MST requires professional audiological equipment and supervision, which is infeasible for smartphone-based self-testing. Therefore, it is crucial to investigate the feasibility of self-administering the MST on smartphones, including the development of an appropriate user interface for the small screen size. DESIGN We compared the traditional closed matrix user interface (10 × 5 matrix) to three alternative, newly developed interfaces (slide, type, wheel) regarding speech recognition threshold (SRT) consistency, user preference, and completion time. STUDY SAMPLE We included 15 younger normal-hearing and 14 older hearing-impaired participants in our study. RESULTS The slide interface is most suitable for mobile implementation, providing consistent and fast SRTs and enabling all participants to perform the tasks effectively. While the traditional matrix interface works well for most participants, some experienced difficulties due to its small size on the screen. CONCLUSIONS We propose the newly introduced slide interface as a plausible alternative for smartphone screens. It may be particularly attractive for elderly patients, who may face greater challenges with dexterity and vision than the participants tested here.
Affiliation(s)
- Samira Saak
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Angelika Kothe
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Mareike Buhl
- Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l'Audition, Paris, France
- Birger Kollmeier
- Department of Medical Physics and Acoustics, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
13
Colak H, Sendesen E, Turkyilmaz MD. Subcortical auditory system in tinnitus with normal hearing: insights from electrophysiological perspective. Eur Arch Otorhinolaryngol 2024; 281:4133-4142. [PMID: 38555317 PMCID: PMC11266230 DOI: 10.1007/s00405-024-08583-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2023] [Accepted: 02/26/2024] [Indexed: 04/02/2024]
Abstract
PURPOSE The mechanism of tinnitus remains poorly understood; however, studies have underscored the significance of the subcortical auditory system in tinnitus perception. In this study, our aim was to investigate the subcortical auditory system using electrophysiological measurements in individuals with tinnitus and normal hearing. Additionally, we aimed to assess speech-in-noise (SiN) perception to determine whether individuals with tinnitus exhibit SiN deficits despite having normal-hearing thresholds. METHODS A total of 42 normal-hearing participants, comprising 22 individuals with chronic subjective tinnitus and 20 controls, took part in the study. We recorded auditory brainstem responses (ABR) and speech-evoked frequency-following responses (sFFR) from the participants. SiN perception was also assessed using the Matrix test. RESULTS Our results revealed a significant prolongation of the O peak, which encodes sound offset in the sFFR, for the tinnitus group (p < 0.01). Greater non-stimulus-evoked activity was also found in individuals with tinnitus (p < 0.01). In the ABR, the tinnitus group showed reduced wave I amplitude and prolonged absolute wave I, III, and V latencies (p ≤ 0.02). Our findings suggested that individuals with tinnitus had poorer SiN perception compared to controls (p < 0.05). CONCLUSION The deficit in encoding sound offset may indicate an impaired inhibitory mechanism in tinnitus. The greater non-stimulus-evoked activity observed in the tinnitus group suggests increased neural noise at the subcortical level. Additionally, individuals with tinnitus may experience speech-in-noise deficits despite having a normal audiogram. Taken together, these findings suggest that the lack of inhibition and increased neural noise may be associated with tinnitus perception.
Affiliation(s)
- Hasan Colak
- Biosciences Institute, Newcastle University, Newcastle Upon Tyne, UK
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey
14
Hey M, Kogel K, Dambon J, Mewes A, Jürgens T, Hocke T. Factors to Describe the Outcome Characteristics of a CI Recipient. J Clin Med 2024; 13:4436. [PMID: 39124703 PMCID: PMC11313646 DOI: 10.3390/jcm13154436] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2024] [Revised: 07/15/2024] [Accepted: 07/18/2024] [Indexed: 08/12/2024] Open
Abstract
Background: In cochlear implant (CI) treatment, there is large variability in outcome. The aim of our study was to identify the independent audiometric measures that are most directly relevant for describing this variability in the outcome characteristics of CI recipients. An extended audiometric test battery was used with selected adult patients in order to characterize the full range of CI outcomes. Methods: CI users were recruited for this study on the basis of their postoperative results and divided into three groups: low (1st quartile), moderate (middle decile), and high hearing performance (4th quartile). Speech recognition was measured using (i) monosyllabic words in quiet (40-80 dB SPL), (ii) the speech reception threshold (SRT) for numbers, and (iii) the German matrix test in noise. In order to reconstruct demanding everyday listening situations in the clinic, the temporal characteristics of the background noise and the spatial arrangements of the signal sources were varied for the tests in noise. In addition, a survey was conducted using the Speech, Spatial, and Qualities (SSQ) questionnaire and the Listening Effort (LE) questionnaire. Results: Fifteen subjects per group were examined (total N = 45); the groups did not differ significantly in terms of age, time after CI surgery, or CI use behavior. The groups differed mainly in the results of speech audiometry. For speech recognition, significant differences were found between the three groups for the monosyllabic tests in quiet and for the sentences in stationary (S0°N0°) and fluctuating (S0°NCI) noise. Word comprehension and sentence comprehension in quiet were both strongly correlated with the SRT in noise. This observation was also confirmed by a factor analysis. No significant differences were found between the three groups for the SSQ and LE questionnaire results.
The results of the factor analysis indicate that speech recognition in noise provides information highly comparable to information from speech intelligibility in quiet. Conclusions: The factor analysis highlighted three components describing the postoperative outcome of CI patients. These were (i) the audiometrically measured supra-threshold speech recognition and (ii) near-threshold audibility, as well as (iii) the subjective assessment of the relationship to real life as determined by the questionnaires. These parameters appear well suited to setting up a framework for a test battery to assess CI outcomes.
Affiliation(s)
- Matthias Hey
- ENT Clinic, UKSH Kiel, 24105 Kiel, Germany
- Kevyn Kogel
- ENT Clinic, UKSH Kiel, 24105 Kiel, Germany
- Jan Dambon
- ENT Clinic, UKSH Kiel, 24105 Kiel, Germany
- Alexander Mewes
- ENT Clinic, UKSH Kiel, 24105 Kiel, Germany
- Tim Jürgens
- Institute of Acoustics, University of Applied Sciences Lübeck, 23562 Lübeck, Germany
15
Buth S, Baljić I, Mewes A, Hey M. [Speech discrimination with separated signal sources and sound localization with speech stimuli: Learning effects and reproducibility]. HNO 2024; 72:504-514. [PMID: 38536465 PMCID: PMC11192817 DOI: 10.1007/s00106-024-01426-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/12/2023] [Indexed: 06/22/2024]
Abstract
BACKGROUND Binaural hearing enables better speech comprehension in noisy environments and is necessary for acoustic spatial orientation. This study investigates speech discrimination in noise with separated signal sources and measures sound localization. The aim was to study the characteristics and reproducibility of two selected measurement techniques that seem suitable for describing the aforementioned aspects of binaural hearing. MATERIALS AND METHODS Speech reception thresholds (SRT) in noise and test-retest reliability were collected from 55 normal-hearing adults for a spatial setup of loudspeakers at angles of ±45° and ±90° using the Oldenburg sentence test. Sound localization was investigated in a semicircle and a full-circle setup (7 and 12 equidistant loudspeakers, respectively). RESULTS SRTs (S-45N45: -14.1 dB SNR; S45N-45: -16.4 dB SNR; S0N90: -13.1 dB SNR; S0N-90: -13.4 dB SNR) and test-retest reliability (4 to 6 dB SNR) were collected for speech intelligibility in noise with separated signals. The procedural learning effect for this setup could only be mitigated with 120 training sentences. Significantly smaller SRT values, indicating better speech discrimination, were found for the test situation of the right ear compared to the left. RMS values were obtained for sound localization in the semicircle (1.9°) as well as in the full-circle setup (11.1°). Better results were obtained in the retest of the full-circle setup. CONCLUSION When using the Oldenburg sentence test in noise with spatially separated signals, a training session of 120 sentences is mandatory to minimize the procedural learning effect. Ear-specific SRT values for speech discrimination in noise with separated signal sources are required, probably because of the right-ear advantage. Training is recommended for sound localization in the full-circle setup.
Affiliation(s)
- Svenja Buth
- Medizinische Fakultät, Christian-Albrechts-Universität zu Kiel, Kiel, Germany
- HNO-Klinik, Audiologie, Campus Kiel, Universitätsklinikum Schleswig-Holstein, Arnold-Heller-Str. 3, Haus B1, 24105, Kiel, Germany
- Izet Baljić
- Klinik für Hals-, Nasen-, Ohrenheilkunde, Audiologisches Zentrum, Helios Klinikum Erfurt, Erfurt, Germany
- Alexander Mewes
- Klinik für Hals-, Nasen-, Ohrenheilkunde, Kopf- und Halschirurgie, Audiologie, UKSH, Kiel, Germany
- Matthias Hey
- Klinik für Hals-, Nasen-, Ohrenheilkunde, Kopf- und Halschirurgie, Audiologie, UKSH, Kiel, Germany
16
Schirmer J, Wolpert S, Dapper K, Rühle M, Wertz J, Wouters M, Eldh T, Bader K, Singer W, Gaudrain E, Başkent D, Verhulst S, Braun C, Rüttiger L, Munk MHJ, Dalhoff E, Knipper M. Neural Adaptation at Stimulus Onset and Speed of Neural Processing as Critical Contributors to Speech Comprehension Independent of Hearing Threshold or Age. J Clin Med 2024; 13:2725. [PMID: 38731254 PMCID: PMC11084258 DOI: 10.3390/jcm13092725] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2024] [Revised: 04/24/2024] [Accepted: 04/26/2024] [Indexed: 05/13/2024] Open
Abstract
Background: It is assumed that speech comprehension deficits in background noise are caused by age-related or acquired hearing loss. Methods: We examined young, middle-aged, and older individuals with and without hearing threshold loss using pure-tone (PT) audiometry, short-pulsed distortion-product otoacoustic emissions (pDPOAEs), auditory brainstem responses (ABRs), auditory steady-state responses (ASSRs), speech comprehension (OLSA), and syllable discrimination in quiet and noise. Results: A marked decline of hearing sensitivity in extended high-frequency regions and its influence on low-frequency-induced ABRs was striking. When testing for differences in OLSA thresholds normalized for PT thresholds (PTTs), marked differences in speech comprehension ability exist not only in noise but also in quiet, and they exist throughout the whole age range investigated. Listeners with poor speech comprehension in quiet exhibited relatively lower pDPOAEs and, thus, cochlear amplifier performance independent of PTT, smaller and delayed ABRs, and lower performance in vowel-phoneme discrimination below phase-locking limits (/o/-/u/). When the OLSA was tested in noise, listeners with poor speech comprehension independent of PTT had larger pDPOAEs and, thus, cochlear amplifier performance, larger ASSR amplitudes, and higher uncomfortable loudness levels, all linked with lower performance in vowel-phoneme discrimination above the phase-locking limit (/i/-/y/). Conclusions: This study indicates that listening in noise in humans has a sizable disadvantage in envelope coding when basilar-membrane compression is compromised. Clearly, and in contrast to previous assumptions, both good and poor speech comprehension can exist independently of differences in PTTs and age, a phenomenon that urgently requires improved techniques to diagnose sound processing at stimulus onset in the clinical routine.
Affiliation(s)
- Jakob Schirmer
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Stephan Wolpert
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Konrad Dapper
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Department of Biology, Technical University Darmstadt, 64287 Darmstadt, Germany
- Moritz Rühle
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Jakob Wertz
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Marjoleen Wouters
- Department of Information Technology, Ghent University, Technologiepark 126, 9052 Zwijnaarde, Belgium
- Therese Eldh
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Katharina Bader
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Wibke Singer
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Etienne Gaudrain
- Lyon Neuroscience Research Center, Centre National de la Recherche Scientifique UMR5292, Inserm U1028, Université Lyon 1, Centre Hospitalier Le Vinatier-Bâtiment 462–Neurocampus, 95 Boulevard Pinel, 69675 Bron CEDEX, France
- Department of Otorhinolaryngology, University Medical Center Groningen (UMCG), Hanzeplein 1, BB21, 9700 RB Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology, University Medical Center Groningen (UMCG), Hanzeplein 1, BB21, 9700 RB Groningen, The Netherlands
- Sarah Verhulst
- Department of Information Technology, Ghent University, Technologiepark 126, 9052 Zwijnaarde, Belgium
- Christoph Braun
- Magnetoencephalography-Centre and Hertie Institute for Clinical Brain Research, University of Tübingen, Otfried-Müller-Straße 27, 72076 Tübingen, Germany
- Center for Mind and Brain Research, University of Trento, Palazzo Fedrigotti-corso Bettini 31, 38068 Rovereto, Italy
- Lukas Rüttiger
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Matthias H. J. Munk
- Department of Biology, Technical University Darmstadt, 64287 Darmstadt, Germany
- Department of Psychiatry & Psychotherapy, University of Tübingen, Calwerstraße 14, 72076 Tübingen, Germany
- Ernst Dalhoff
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
- Marlies Knipper
- Department of Otolaryngology, Head and Neck Surgery, University of Tübingen, Elfriede-Aulhorn-Str. 5, 72076 Tübingen, Germany
17
O'Brien K, Hackenberg B, Döge J, Bohnert A, Rader T, Lackner KJ, Beutel ME, Münzel T, Wild PS, Chalabi J, Schuster AK, Schmidtmann I, Matthias C, Bahr-Hamm K. Age standardization and time-of-day performance for the Oldenburg Sentence Test (OLSA): results from the population-based Gutenberg Health Study. Eur Arch Otorhinolaryngol 2024; 281:2341-2351. [PMID: 38110748 PMCID: PMC11023958 DOI: 10.1007/s00405-023-08358-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2022] [Accepted: 11/15/2023] [Indexed: 12/20/2023]
Abstract
PURPOSE The Oldenburg Sentence Test (OLSA) is a German matrix test designed to determine speech recognition thresholds (SRT). It is widely used for hearing-aid and cochlear implant fitting, but an age-adjusted standard is still lacking. In addition, knowing that the ability to concentrate is an important factor in OLSA performance, we hypothesized that OLSA performance would depend on the time of day at which it was administered. The aim of this study was to propose an age standardization for the OLSA and to determine its diurnal performance. METHODS The Gutenberg Health Study is an ongoing population-based study, designed as a single-centre observational, prospective cohort study. Participants were interviewed about common otologic symptoms and tested with pure-tone audiometry and the OLSA. Two groups, subjects with and without hearing loss, were established. The OLSA was performed in two runs, and the SRT was evaluated for each participant. Results were characterized by age in 5-year cohorts, gender, and SRT. A time stamp with an hourly interval was also implemented. RESULTS The mean OLSA SRT was -6.9 ± 1.0 dB (group 1, male) and -7.1 ± 0.8 dB (group 1, female), showing an inverse relationship with age in the whole cohort, whereas a linear increase was observed in those without hearing loss. OLSA SRT values increased more in males than in females with increasing age. No statistically significant effect of time of day on performance was found. CONCLUSIONS A study with 2900 evaluable Oldenburg Sentence Tests is a novelty and is representative of the population of Mainz and its surroundings. We propose an age- and gender-standardized scale for the evaluation of the OLSA. Indeed, with an intergroup standard deviation of about 1.5 dB compared to the age dependence of 0.7 dB/10 years, this age normalization should be considered clinically relevant.
Affiliation(s)
- Karoline O'Brien
- Department of Otorhinolaryngology, University Medical Center Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Berit Hackenberg
- Department of Otorhinolaryngology, University Medical Center Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Julia Döge
- Department of Otorhinolaryngology, University Medical Center Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Andrea Bohnert
- Department of Otorhinolaryngology, University Medical Center Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Tobias Rader
- Division of Audiology, Department of Otolaryngology, University Hospital, Ludwig-Maximillian-University, Munich, Germany
- Karl J Lackner
- Institute for Clinical Chemistry and Laboratory Medicine, University Medical Center Mainz, Mainz, Germany
- Manfred E Beutel
- Department of Psychosomatic Medicine and Psychotherapy, University Medical Center Mainz, Mainz, Germany
- Thomas Münzel
- Department of Cardiology-Cardiology I, University Medical Center Mainz, Mainz, Germany
- German Center for Cardiovascular Research (DZHK), Partner Site Rhine-Main, Mainz, Germany
- Philipp S Wild
- Preventive Cardiology and Preventive Medicine, Department of Cardiology, University Medical Center Mainz, Mainz, Germany
- Center for Thrombosis and Hemostasis (CTH), University Medical Center Mainz, Mainz, Germany
- German Center for Cardiovascular Research (DZHK), Partner Site Rhine-Main, Mainz, Germany
- Julian Chalabi
- Preventive Cardiology and Preventive Medicine, Department of Cardiology, University Medical Center Mainz, Mainz, Germany
- Irene Schmidtmann
- Institute of Medical Biostatistics, Epidemiology and Informatics, University Medical Center Mainz, Mainz, Germany
- Christoph Matthias
- Department of Otorhinolaryngology, University Medical Center Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
- Katharina Bahr-Hamm
- Department of Otorhinolaryngology, University Medical Center Mainz, Langenbeckstrasse 1, 55131, Mainz, Germany
18
Choi HJ, Kyong JS, Lee JH, Han SH, Shim HJ. The Impact of Spectral and Temporal Degradation on Vocoded Speech Recognition in Early-Blind Individuals. eNeuro 2024; 11:ENEURO.0528-23.2024. [PMID: 38811162 PMCID: PMC11137809 DOI: 10.1523/eneuro.0528-23.2024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2023] [Revised: 04/11/2024] [Accepted: 05/01/2024] [Indexed: 05/31/2024] Open
Abstract
This study compared the impact of spectral and temporal degradation on vocoded speech recognition between early-blind and sighted subjects. The participants included 25 early-blind subjects (30.32 ± 4.88 years; male:female, 14:11) and 25 age- and sex-matched sighted subjects. Tests included monosyllable recognition in noise at various signal-to-noise ratios (-18 to -4 dB), matrix sentence-in-noise recognition, and vocoded speech recognition with different numbers of channels (4, 8, 16, and 32) and temporal envelope cutoff frequencies (50 vs 500 Hz). Cortical evoked potentials (N2 and P3b) were measured in response to spectrally and temporally degraded stimuli. The early-blind subjects displayed better monosyllable and sentence recognition than sighted subjects (all p < 0.01). In the vocoded speech recognition test, a three-way repeated-measures analysis of variance (two groups × four channels × two cutoff frequencies) revealed significant main effects of group, channel, and cutoff frequency (all p < 0.001). Early-blind subjects showed increased sensitivity to spectral degradation for speech recognition, evident in the significant interaction between group and channel (p = 0.007). N2 responses in early-blind subjects exhibited shorter latency and greater amplitude in the 8-channel condition (p = 0.022 and 0.034, respectively) and shorter latency in the 16-channel condition (p = 0.049) compared with sighted subjects. In conclusion, early-blind subjects demonstrated speech recognition advantages over sighted subjects, even in the presence of spectral and temporal degradation. Spectral degradation had a greater impact on speech recognition in early-blind subjects, while the effect of temporal degradation was similar in both groups.
Affiliation(s)
- Hyo Jung Choi
- Department of Otorhinolaryngology-Head and Neck Surgery, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Eulji Tinnitus and Hearing Research Institute, Nowon Eulji Medical Center, Seoul 01830, Republic of Korea
- Jeong-Sug Kyong
- Sensory Organ Institute, Medical Research Institute, Seoul National University, Seoul 03080, Republic of Korea
- Department of Radiology, Konkuk University Medical Center, Seoul 05030, Republic of Korea
- Jae Hee Lee
- Department of Audiology and Speech-Language Pathology, Hallym University of Graduate Studies, Seoul 06197, Republic of Korea
- Seung Ho Han
- Department of Physiology and Biophysics, School of Medicine, Eulji University, Daejeon 34824, Republic of Korea
- Hyun Joon Shim
- Department of Otorhinolaryngology-Head and Neck Surgery, Nowon Eulji Medical Center, Eulji University School of Medicine, Seoul 01830, Republic of Korea
- Eulji Tinnitus and Hearing Research Institute, Nowon Eulji Medical Center, Seoul 01830, Republic of Korea
19
Yao D, Zhao J, Wang L, Shang Z, Gu J, Wang Y, Jia M, Li J. Effects of spatial configuration and fundamental frequency on speech intelligibility in multiple-talker conditions in the ipsilateral horizontal plane and median plane. J Acoust Soc Am 2024; 155:2934-2947. [PMID: 38717201 DOI: 10.1121/10.0025857] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/19/2024] [Accepted: 04/11/2024] [Indexed: 09/20/2024]
Abstract
Spatial separation and fundamental frequency (F0) separation are effective cues for improving the intelligibility of target speech in multi-talker scenarios. Previous studies predominantly focused on spatial configurations within the frontal hemifield, overlooking the ipsilateral side and the entire median plane, where localization confusion often occurs. This study investigated the impact of spatial and F0 separation on intelligibility under these underexplored spatial configurations. Speech reception thresholds were measured through three experiments for scenarios involving two to four talkers, either in the ipsilateral horizontal plane or in the entire median plane, utilizing monotonized speech with varying F0s as stimuli. The results revealed that spatial separation in symmetrical positions (front-back symmetry in the ipsilateral horizontal plane or front-back, up-down symmetry in the median plane) contributes positively to intelligibility. Both target direction and relative target-masker separation influence the masking release attributed to spatial separation. As the number of talkers exceeds two, the masking release from spatial separation diminishes. Nevertheless, F0 separation remains a remarkably effective cue and can even facilitate spatial separation in improving intelligibility. Further analysis indicated that current intelligibility models encounter difficulties in accurately predicting intelligibility in the scenarios explored in this study.
Affiliation(s)
- Dingding Yao
- Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Jiale Zhao
- Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Linyi Wang
- Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Zengqiang Shang
- Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Jianjun Gu
- Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Yunan Wang
- Department of Electronic and Information Engineering, Beihang University, Beijing 100191, China
- Maoshen Jia
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Junfeng Li
- Key Laboratory of Speech Acoustics and Content Understanding, Institute of Acoustics, Chinese Academy of Sciences, Beijing 100190, China
- University of Chinese Academy of Sciences, Beijing 100049, China
20
Flament J, De Seta D, Russo FY, Bestel J, Sterkers O, Ferrary E, Nguyen Y, Mosnier I, Torres R. Predicting Matrix Test Effectiveness for Evaluating Auditory Performance in Noise Using Pure-Tone Audiometry and Speech Recognition in Quiet in Cochlear Implant Recipients. Audiol Neurootol 2024; 29:408-417. [PMID: 38527427 DOI: 10.1159/000535622] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Accepted: 11/29/2023] [Indexed: 03/27/2024] Open
Abstract
INTRODUCTION Auditory performance in noise of cochlear implant recipients can be assessed with the adaptive Matrix test (MT); however, when the speech-to-noise ratio (SNR) exceeds 15 dB, the background noise has hardly any negative impact on speech recognition. Here, we aimed to evaluate the predictive power of aided pure-tone audiometry and speech recognition in quiet and to establish cut-off values for both tests that indicate whether auditory performance in noise can be assessed using the Matrix sentence test in a diffuse noise environment. METHODS We assessed the power of pure-tone audiometry and speech recognition in quiet to predict the response to the MT. Ninety-eight cochlear implant recipients were assessed using different sound processors from Advanced Bionics (n = 56) and Cochlear™ (n = 42). Auditory tests were performed at least 1 year after cochlear implantation or after upgrading the sound processor, to ensure the best benefit of the implant. Auditory assessment of the implanted ear in free-field conditions included: pure-tone average (PTA), speech discrimination score (SDS) in quiet at 65 dB, and speech recognition threshold (SRT) in noise, i.e., the SNR at which the patient correctly recognizes 50% of the words using the MT in a diffuse sound field. RESULTS The SRT in noise was determined in 60 patients (61%) and undetermined in 38 (39%) using the MT. When cut-off values of PTA <36 dB and SDS >41% were used separately, each predicted a positive response to the MT in 83% of recipients; using both cut-off values together, the predictive value reached 92%. DISCUSSION As pure-tone audiometry is standardized universally, whereas speech recognition in quiet can vary depending on the language used, we propose that the MT be performed in recipients with PTA <36 dB; in recipients with PTA >36 dB, a list of Matrix sentences at a fixed SNR should be presented to determine the percentage of words understood. This approach should enable clinicians to obtain information about auditory performance in noise whenever possible.
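The proposed two-step decision rule can be sketched as a small helper (the function name and return strings are illustrative assumptions; the cut-offs and predictive values are those reported above):

```python
def noise_test_choice(pta_db: float, sds_percent: float) -> str:
    """Sketch of the proposed clinical decision rule (hypothetical helper).

    Aided PTA < 36 dB predicts that the adaptive Matrix test (MT) in noise
    can be completed; SDS > 41% in quiet reinforces that prediction.
    Otherwise, fixed-SNR Matrix sentence lists are suggested instead.
    """
    if pta_db < 36 and sds_percent > 41:
        return "adaptive MT (both cut-offs met, ~92% predictive value)"
    if pta_db < 36:
        return "adaptive MT (PTA cut-off met, ~83% predictive value)"
    return "fixed-SNR Matrix sentence lists (score percent correct)"
```

For example, a recipient with an aided PTA of 30 dB and an SDS of 60% would be routed to the adaptive MT, while a recipient with a PTA of 45 dB would instead receive fixed-SNR lists.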
Affiliation(s)
- Jonathan Flament
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Centre Audition LEA Audika, Paris, France
- Daniele De Seta
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Unit of Otolaryngology, San Giovanni-Addolorata Hospital, Rome, Italy
- Technologies et thérapie génique pour la surdité, Institut de l'Audition, Université Paris Cité/Inserm/Institut Pasteur, Paris, France
- Francesca Yoshie Russo
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Olivier Sterkers
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Technologies et thérapie génique pour la surdité, Institut de l'Audition, Université Paris Cité/Inserm/Institut Pasteur, Paris, France
- Evelyne Ferrary
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Technologies et thérapie génique pour la surdité, Institut de l'Audition, Université Paris Cité/Inserm/Institut Pasteur, Paris, France
- Yann Nguyen
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Technologies et thérapie génique pour la surdité, Institut de l'Audition, Université Paris Cité/Inserm/Institut Pasteur, Paris, France
- Isabelle Mosnier
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Technologies et thérapie génique pour la surdité, Institut de l'Audition, Université Paris Cité/Inserm/Institut Pasteur, Paris, France
- Renato Torres
- Unité Fonctionnelle Implants Auditifs, Service Oto-Rhino-Laryngologie, GHU Pitié-Salpêtrière, AP-HP/Sorbonne Université, Paris, France
- Technologies et thérapie génique pour la surdité, Institut de l'Audition, Université Paris Cité/Inserm/Institut Pasteur, Paris, France
- Departamento de Ciencias Fisiológicas, Facultad de Medicina, Universidad Nacional de San Agustín de Arequipa, Arequipa, Peru
21
Wohlbauer DM, Lai WK, Dillier N. InterlACE Sound Coding for Unilateral and Bilateral Cochlear Implants. IEEE Trans Biomed Eng 2024; 71:904-915. [PMID: 37796675 DOI: 10.1109/tbme.2023.3322348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/07/2023]
Abstract
OBJECTIVE Cochlear implant signal processing strategies define the rules of how acoustic signals are converted into electrical stimulation patterns. Technological and anatomical limitations, however, impose constraints on signal transmission and on the accurate excitation of the auditory nerve. Acoustic signals are degraded throughout cochlear implant processing, and electrical signal interactions at the electrode-neuron interface constrain spectral and temporal precision. In this work, we propose a novel InterlACE signal processing strategy to counteract these limitations. METHODS By replacing the maxima selection of the Advanced Combination Encoder strategy with a method that defines spatially and temporally alternating channels, InterlACE can compensate for signal content discarded by the conventional processing. The strategy can be extended bilaterally by introducing synchronized timing and channel selection. InterlACE was explored unilaterally and bilaterally by assessing speech intelligibility and spectral resolution. Five experienced bilaterally implanted cochlear implant recipients participated in the Oldenburg Sentence Recognition Test in background noise and the spectral ripple discrimination task. RESULTS The introduced alternating channel selection methodology shows promising outcomes for speech intelligibility but did not indicate improved spectral ripple discrimination. CONCLUSION InterlACE processing positively affects speech intelligibility, increases available unilateral and bilateral signal content, and may potentially counteract signal interactions at the electrode-neuron interface. SIGNIFICANCE This work shows how cochlear implant channel selection can be modified and extended bilaterally. The clinical impact of the modifications needs to be explored with a larger sample size.
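The core idea described here, spatially and temporally alternating channel selection, can be illustrated with a toy sketch. The abstract does not specify the actual InterlACE rules, so the even/odd interleaving scheme below is purely an assumption for illustration:

```python
def alternating_channels(n_channels: int, n_select: int, frame: int) -> list[int]:
    """Toy sketch of alternating channel selection (illustrative assumption
    only; the actual InterlACE rules differ): even analysis frames stimulate
    one interleaved comb of electrodes, odd frames the complementary comb,
    so spectral content discarded on one frame can still be transmitted on
    the next."""
    offset = frame % 2            # alternate the comb every frame
    comb = list(range(offset, n_channels, 2))
    return comb[:n_select]        # cap at the per-frame stimulation budget
```

Over two consecutive frames the two combs together cover the whole electrode array, which is the sense in which temporal alternation can recover content that a single-frame maxima selection would discard.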
22
Le Rhun L, Llorach G, Delmas T, Suied C, Arnal LH, Lazard DS. A standardised test to evaluate audio-visual speech intelligibility in French. Heliyon 2024; 10:e24750. [PMID: 38312568 PMCID: PMC10835303 DOI: 10.1016/j.heliyon.2024.e24750] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2023] [Revised: 12/07/2023] [Accepted: 01/12/2024] [Indexed: 02/06/2024] Open
Abstract
Objective Lipreading plays a major role in communication for the hearing impaired, but a standardised French tool for assessing it was lacking. Our aim was to create and validate an audio-visual (AV) version of the French Matrix Sentence Test (FrMST). Design Video recordings were created by dubbing the existing audio files. Sample Thirty-five young, normal-hearing participants were tested in auditory and visual modalities alone (Ao, Vo) and in AV conditions, in quiet, in noise, and in open- and closed-set response formats. Results Lipreading ability (Vo) ranged from 1% to 77% word comprehension. The absolute AV benefit was 9.25 dB SPL in quiet and 4.6 dB SNR in noise. The response format did not influence the results in the AV noise condition, except during the training phase. Lipreading ability and AV benefit were significantly correlated. Conclusions The French video material achieved AV benefits similar to those described in the literature for AV MSTs in other languages. For clinical purposes, we suggest targeting SRT80 to avoid ceiling effects, and performing two training lists in the AV condition in noise, followed by one AV list in noise, one Ao list in noise and one Vo list, in a randomised order, in open- or closed-set format.
Affiliation(s)
- Loïc Le Rhun
- Institut Pasteur, Université Paris Cité, Inserm UA06, Institut de l’Audition, Paris, France
- Gerard Llorach
- Auditory Signal Processing, Dept. of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Tanguy Delmas
- Institut Pasteur, Université Paris Cité, Inserm UA06, Institut de l’Audition, Paris, France
- ECLEAR, Audition Lefeuvre – Audition Marc Boulet, Athis-Mons, France
- Clara Suied
- Institut de Recherche Biomédicale des Armées, Département Neurosciences et Sciences Cognitives, Brétigny-sur-Orge, France
- Luc H. Arnal
- Institut Pasteur, Université Paris Cité, Inserm UA06, Institut de l’Audition, Paris, France
- Diane S. Lazard
- Institut Pasteur, Université Paris Cité, Inserm UA06, Institut de l’Audition, Paris, France
- Princess Grace Hospital, ENT & Maxillo-facial Surgery Department, Monaco
- Institut Arthur Vernes, ENT Surgery Department, Paris, France
23
Çolak H, Aydemir BE, Sakarya MD, Çakmak E, Alniaçik A, Türkyilmaz MD. Subcortical Auditory Processing and Speech Perception in Noise Among Individuals With and Without Extended High-Frequency Hearing Loss. J Speech Lang Hear Res 2024; 67:221-231. [PMID: 37956878 DOI: 10.1044/2023_jslhr-23-00023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/15/2023]
Abstract
PURPOSE The significance of extended high-frequency (EHF) hearing (> 8 kHz) is not yet well understood. In this study, we aimed to understand the relationship between EHF hearing loss (EHFHL) and speech perception in noise (SPIN) and the associated physiological signatures using the speech-evoked frequency-following response (sFFR). METHOD Sixteen young adults with EHFHL and 16 age- and sex-matched individuals with normal hearing participated in the study. SPIN performance in right speech-right noise, left speech-left noise, and binaural listening conditions was evaluated using the Turkish Matrix Test. Additionally, subcortical auditory processing was assessed by recording sFFRs elicited by 40-ms /da/ stimuli. RESULTS Individuals with EHFHL demonstrated poorer SPIN performance in all listening conditions (p < .01). Longer latencies were observed in the V (onset) and O (offset) peaks in these individuals (p ≤ .01). However, only the V/A peak amplitude was found to be significantly reduced in individuals with EHFHL (p < .01). CONCLUSIONS Our findings highlight the importance of EHF hearing and suggest that EHF hearing should be considered among the key elements in SPIN. Individuals with EHFHL show a tendency toward weaker subcortical auditory processing, which likely contributes to their poorer SPIN performance. Thus, routine assessment of EHF hearing should be implemented in clinical settings, alongside the evaluation of standard audiometric frequencies (0.25-8 kHz).
Affiliation(s)
- Hasan Çolak
- Department of Audiology, Baskent University, Ankara, Turkey
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Eda Çakmak
- Department of Audiology, Baskent University, Ankara, Turkey
24
Holube I, Taesler S, Ibelings S, Hansen M, Ooster J. Automated Measurement of Speech Recognition, Reaction Time, and Speech Rate and Their Relation to Self-Reported Listening Effort for Normal-Hearing and Hearing-Impaired Listeners Using Various Maskers. Trends Hear 2024; 28:23312165241276435. [PMID: 39311635 PMCID: PMC11421406 DOI: 10.1177/23312165241276435] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2023] [Revised: 07/11/2024] [Accepted: 08/03/2024] [Indexed: 09/26/2024] Open
Abstract
In speech audiometry, the speech-recognition threshold (SRT) is usually established by adjusting the signal-to-noise ratio (SNR) until 50% of the words or sentences are repeated correctly. However, these conditions are rarely encountered in everyday situations. Therefore, for a group of 15 young participants with normal hearing and a group of 12 older participants with hearing impairment, speech-recognition scores were determined at SRT and at four higher SNRs using several stationary and fluctuating maskers. Participants' verbal responses were recorded, and participants were asked to self-report their listening effort on a categorical scale (self-reported listening effort, SR-LE). The responses were analyzed using an Automatic Speech Recognizer (ASR) and compared to the results of a human examiner. An intraclass correlation coefficient of r = .993 for the agreement between their corresponding speech-recognition scores was observed. As expected, speech-recognition scores increased with increasing SNR and decreased with increasing SR-LE. However, differences between speech-recognition scores for fluctuating and stationary maskers were observed as a function of SNR, but not as a function of SR-LE. The verbal response time (VRT) and the response speech rate (RSR) of the listeners' responses were measured using an ASR. The participants with hearing impairment showed significantly lower RSRs and higher VRTs compared to the participants with normal hearing. These differences may be attributed to differences in age, hearing, or both. With increasing SR-LE, VRT increased and RSR decreased. The results show the possibility of deriving a behavioral measure, VRT, measured directly from participants' verbal responses during speech audiometry, as a proxy for SR-LE.
Affiliation(s)
- Inga Holube
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Oldenburg, Germany
- Stefan Taesler
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Saskia Ibelings
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Oldenburg, Germany
- Martin Hansen
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Jasper Ooster
- Cluster of Excellence Hearing4all, Oldenburg, Germany
- Communication Acoustics, Carl von Ossietzky University, Oldenburg, Germany
25
Araiza-Illan G, Meyer L, Truong KP, Başkent D. Automated Speech Audiometry: Can It Work Using Open-Source Pre-Trained Kaldi-NL Automatic Speech Recognition? Trends Hear 2024; 28:23312165241229057. [PMID: 38483979 PMCID: PMC10943752 DOI: 10.1177/23312165241229057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2023] [Revised: 01/05/2024] [Accepted: 01/11/2024] [Indexed: 03/18/2024] Open
Abstract
A practical speech audiometry tool is the digits-in-noise (DIN) test for hearing screening of populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., a clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test presents 24 digit triplets in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses without a human supervisor, using the open-source automatic speech recognition toolkit Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used for evaluating the transcript of responses decoded by Kaldi-NL. Study 1 evaluated Kaldi-NL performance through its word error rate (WER): the percentage of summed digit decoding errors in the transcript relative to the total number of digits present in the spoken responses. The average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with decoding errors in three triplets per participant on average. Study 2 analyzed the effect that triplets with decoding errors from Kaldi-NL had on the DIN test output (SRT), using bootstrapping simulations. Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.
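The WER criterion described here, digit decoding errors relative to the total number of digits spoken, can be sketched with a per-triplet edit distance (a minimal illustration, not the authors' implementation):

```python
def digit_error_rate(spoken, decoded):
    """Percentage of digit decoding errors (substitutions, deletions,
    insertions) relative to the total number of digits spoken, computed
    per triplet via Levenshtein distance. Minimal sketch only."""
    total = sum(len(triplet) for triplet in spoken)
    errors = 0
    for ref, hyp in zip(spoken, decoded):
        # classic dynamic-programming edit distance between digit sequences
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,        # deletion
                              d[i][j - 1] + 1,        # insertion
                              d[i - 1][j - 1] + cost) # substitution
        errors += d[len(ref)][len(hyp)]
    return 100.0 * errors / total
```

For instance, one substituted digit out of six spoken digits yields a WER of about 16.7%.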
Affiliation(s)
- Gloria Araiza-Illan
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Luke Meyer
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Khiet P. Truong
- Human Media Interaction, University of Twente, Enschede, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
26
Hu H, Hochmuth S, Man CK, Warzybok A, Kollmeier B, Wong LLN. Development and evaluation of the Cantonese matrix sentence test. Int J Audiol 2024; 63:8-20. [PMID: 36441177 DOI: 10.1080/14992027.2022.2142683] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Accepted: 10/21/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVE To develop the Cantonese matrix (YUEmatrix) test according to the international standard procedure and to examine possibly different outcomes in another tonal language. DESIGN A 50-word Cantonese base matrix was established. Word-specific speech recognition functions, speech recognition thresholds (SRT), and slopes were obtained. The speech material was homogenised in intelligibility by applying level corrections of up to ±3 dB. Subsequently, the YUEmatrix test was evaluated in five aspects: training effect, test-list equivalence, test-retest reliability, establishment of reference data for normal-hearing Cantonese speakers, and comparison with the Cantonese Hearing-In-Noise Test. STUDY SAMPLE Overall, 64 normal-hearing native Cantonese-speaking listeners. RESULTS SRT measurements with adaptive procedures resulted in a reference SRT of -9.7 ± 0.7 dB SNR for the open-set and -11.1 ± 1.2 dB SNR for the closed-set response format. Fixed-SNR measurements suggested a test-specific speech intelligibility function slope of 15.5 ± 0.7%/dB. Seventeen 10-sentence base test lists were confirmed to be equivalent with respect to speech intelligibility. No training effect was observed after two measurements with 20-sentence lists. CONCLUSIONS The YUEmatrix yields results comparable to matrix tests in other languages, including Mandarin. Level adjustments to homogenise sentences appear to be less effective for tonal languages than for most other languages developed so far.
Affiliation(s)
- Hongmei Hu
- Department of Medical Physics and Acoustics, Medizinische Physik and Cluster of Excellence "Hearing4all", Universität Oldenburg, Oldenburg, Germany
- Sabine Hochmuth
- Department of Otolaryngology, Head and Neck Surgery, Universität Oldenburg, Oldenburg, Germany
- Chi Kwong Man
- Faculty of Education, University of Hong Kong, Hong Kong, China
- Anna Warzybok
- Department of Medical Physics and Acoustics, Medizinische Physik and Cluster of Excellence "Hearing4all", Universität Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Department of Medical Physics and Acoustics, Medizinische Physik and Cluster of Excellence "Hearing4all", Universität Oldenburg, Oldenburg, Germany
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Lena L N Wong
- Faculty of Education, University of Hong Kong, Hong Kong, China
27
Xu C, Hülsmeier D, Buhl M, Kollmeier B. How Does Inattention Influence the Robustness and Efficiency of Adaptive Procedures in the Context of Psychoacoustic Assessments via Smartphone? Trends Hear 2024; 28:23312165241288051. [PMID: 39558583 PMCID: PMC11574912 DOI: 10.1177/23312165241288051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2024] Open
Abstract
Inattention plays a critical role in the accuracy of threshold measurements, e.g., when using mobile devices. To describe the influence of distraction, long- and short-term inattention models based on either a stationary or a non-stationary psychometric function were developed and used to generate three simulated listeners: fully, moderately, and non-concentrated. Six established adaptive procedures were assessed via Monte-Carlo simulations in combination with the inattention models and compared with a newly proposed method: the graded response bracketing procedure (GRaBr). Robustness was examined via the bias and root mean square error between the "true" and estimated thresholds, while efficiency was evaluated using rates of convergence and a normalized efficiency index. The findings show that inattention has a detrimental impact on adaptive procedure performance, especially for the short-term inattentive listener, and that several model-based procedures relying on a consistent response behavior of the listener are prone to errors owing to inattention. The model-free procedure GRaBr, on the other hand, is considerably robust and efficient in spite of the (assumed) inattention. As a result, adaptive techniques that our simulations show to have the desired properties of high robustness and efficiency, such as GRaBr, appear to be advantageous for mobile devices or for laboratory tests with untrained subjects.
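The evaluation logic described here, simulated listeners whose psychometric functions include inattention, run through an adaptive procedure many times to obtain the bias and RMSE of the threshold estimates, can be sketched as follows. A simple 1-up/1-down staircase stands in for the six procedures studied, and inattention is modelled as a stationary lapse rate; all parameter values are illustrative:

```python
import math
import random

def p_correct(snr, srt, slope=0.15, lapse=0.0):
    """Logistic psychometric function with a lapse rate modelling
    inattention: with probability `lapse` the listener responds
    incorrectly regardless of SNR (illustrative stationary model)."""
    p = 1.0 / (1.0 + math.exp(-4.0 * slope * (snr - srt)))
    return (1.0 - lapse) * p

def staircase_bias_rmse(true_srt=-8.0, lapse=0.0, runs=300, trials=80,
                        step=2.0, seed=0):
    """Monte-Carlo evaluation of a 1-up/1-down staircase: returns
    (bias, rmse) of the threshold estimates in dB."""
    rng = random.Random(seed)
    errors = []
    for _ in range(runs):
        snr, track = 0.0, []
        for _ in range(trials):
            correct = rng.random() < p_correct(snr, true_srt, lapse=lapse)
            snr += -step if correct else step   # down if correct, up if not
            track.append(snr)
        # estimate the threshold as the mean SNR over the second half
        estimate = sum(track[trials // 2:]) / (trials - trials // 2)
        errors.append(estimate - true_srt)
    bias = sum(errors) / len(errors)
    rmse = math.sqrt(sum(e * e for e in errors) / len(errors))
    return bias, rmse
```

With lapses, the SNR at which the listener scores 50% correct moves upward (performance is capped below 100%), so a lapsing listener inflates both the bias and the RMSE of the estimate, which is the kind of degradation the study quantifies across procedures.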
Affiliation(s)
- Chen Xu
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- David Hülsmeier
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Mareike Buhl
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Medizinische Physik and Cluster of Excellence Hearing4all, Universität Oldenburg, Oldenburg, Germany
28
Ibelings S, Brand T, Ruigendijk E, Holube I. Development of a Phrase-Based Speech-Recognition Test Using Synthetic Speech. Trends Hear 2024; 28:23312165241261490. [PMID: 39051703 PMCID: PMC11273571 DOI: 10.1177/23312165241261490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Revised: 05/16/2024] [Accepted: 05/27/2024] [Indexed: 07/27/2024] Open
Abstract
Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was the development of a novel speech-recognition test that combines concepts of different speech-recognition tests to reduce training effects and allows for a large set of speech material. The new test presents four different words per trial in a meaningful construct with a fixed structure, so-called phrases. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations and eliminating duplications of (sub-)phrases, a total of 772 phrases remained. Subsequently, the phrases were synthesized using a text-to-speech system. The synthesis significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, speech-recognition scores measured with 31 normal-hearing participants at fixed signal-to-noise ratios (SNR) revealed speech-recognition thresholds (SRT) varying by up to 4 dB across phrases. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The psychometric function's slope of 15 percentage points per dB is also comparable and enables efficient use in audiology. In summary, the principle of creating speech material in a modular system has many potential applications.
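The relation between fixed-SNR scores, the SRT, and the slope can be made concrete with a logistic psychometric function parameterised by its gradient at the midpoint. The grid-search fit below is a sketch with illustrative numbers, not the study's actual fitting procedure:

```python
import math

def psychometric(snr, srt, slope):
    """Proportion of words correct; `slope` is the gradient of the
    function at the SRT, in proportion correct per dB (0.15 corresponds
    to 15 percentage points per dB)."""
    return 1.0 / (1.0 + math.exp(-4.0 * slope * (snr - srt)))

def fit_srt(snrs, scores, slope=0.15, lo=-20.0, hi=0.0, grid=0.01):
    """Grid-search the SRT minimising squared error between a fixed-slope
    logistic and proportion-correct scores measured at fixed SNRs."""
    best_srt, best_err = lo, float("inf")
    steps = int(round((hi - lo) / grid))
    for k in range(steps + 1):
        srt = lo + k * grid
        err = sum((psychometric(x, srt, slope) - y) ** 2
                  for x, y in zip(snrs, scores))
        if err < best_err:
            best_srt, best_err = srt, err
    return best_srt
```

At the fitted SRT the function passes through 50% correct by construction, so per-phrase SRTs (as in the reported up-to-4-dB spread) can be obtained by fitting each phrase's fixed-SNR scores separately.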
Affiliation(s)
- Saskia Ibelings
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Thomas Brand
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Esther Ruigendijk
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Department of Dutch, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Inga Holube
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
29
Abstract
BACKGROUND One of the main treatment goals in cochlear implant (CI) patients is to improve speech perception. One of the target parameters is speech intelligibility in quiet. However, treatment results show a high variability, which has not been sufficiently explained so far. The aim of this noninterventional retrospective study was to elucidate this variability using a selected population of patients in whom etiology was not expected to have a negative impact on postoperative speech intelligibility. MATERIALS AND METHODS Audiometric findings of the CI follow-up of 28 adult patients after 6 months of CI experience were evaluated. These were related to the preoperative audiometric examination and evaluated with respect to a recently published predictive model for the postoperative monosyllabic score. RESULTS Inclusion of postoperative categorical loudness scaling and hearing loss for Freiburg numbers in the model explained 55% of the variability in fitting outcomes with respect to monosyllabic word recognition. CONCLUSION The results of this study suggest that much of the variability in fitting outcomes can be captured by systematic postoperative audiometric checks. Immediate conclusions for CI system fitting adjustments may be drawn from these results. However, the extent to which these are accepted by individual patients and thus lead to an improvement in outcome must be the subject of further studies, preferably prospective ones.
Affiliation(s)
- Oliver C Dziemba
- Klinik und Poliklinik für Hals‑, Nasen‑, Ohrenkrankheiten, Kopf- und Halschirurgie, Universitätsmedizin Greifswald, Ferdinand-Sauerbruch-Straße, 17475, Greifswald, Germany.
- Thomas Hocke
- Cochlear Deutschland GmbH & Co. KG, Hannover, Germany
30
Weißgerber T, Stöver T, Baumann U. Speech perception in modulated noise assessed in bimodal CI users. HNO 2024; 72:10-16. [PMID: 37552279 PMCID: PMC10799124 DOI: 10.1007/s00106-023-01321-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/16/2023] [Indexed: 08/09/2023]
Abstract
BACKGROUND Although good speech perception in quiet is achievable with cochlear implants (CIs), speech perception in noise is severely impaired compared to normal hearing (NH). In the case of a bimodal CI fitting with a hearing aid (HA) in the opposite ear, the amount of residual acoustic hearing influences speech perception in noise. OBJECTIVE The aim of this work was to investigate speech perception in noise in a group of bimodal CI users and compare the results to age-matched HA users and people without subjective hearing loss, as well as with a young NH group. MATERIALS AND METHODS Study participants comprised 19 bimodal CI users, 39 HA users, and 40 subjectively NH subjects in the age group 60-90 years and 14 young NH subjects. Speech reception thresholds (SRTs) in noise were adaptively measured using the Oldenburg Sentence Test for the two spatial test conditions S0N0 (speech and noise from the front) and multisource-noise field (MSNF; speech from the front, four spatially distributed noise sources) in continuous noise of the Oldenburg Sentence Test (Ol-noise) and amplitude-modulated Fastl noise (Fastl-noise). RESULTS With increasing hearing loss, the median SRT worsened significantly in all conditions. In test condition S0N0, the SRT of the CI group was 5.6 dB worse in Ol-noise than in the young NH group (mean age 26.4 years) and 22.5 dB worse in Fastl-noise; in MSNF, the differences were 6.6 dB (Ol-noise) and 17.3 dB (Fastl-noise), respectively. In the young NH group, median SRT in condition S0N0 improved by 11 dB due to gap listening; in the older NH group, SRTs improved by only 3.1 dB. In the HA and bimodal CI groups there was no gap listening effect and SRTs in Fastl-noise were even worse than in Ol-noise. CONCLUSION With increasing hearing loss, speech perception in modulated noise is even more impaired than in continuous noise.
Affiliation(s)
- Tobias Weißgerber
- Audiological Acoustics, Department of Otolaryngology, University Hospital Frankfurt, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt am Main, Germany.
- Timo Stöver
- Department of Otorhinolaryngology, University Hospital Frankfurt, Goethe University Frankfurt, Frankfurt am Main, Germany
- Uwe Baumann
- Audiological Acoustics, Department of Otolaryngology, University Hospital Frankfurt, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt am Main, Germany
31
Blümer M, Heeren J, Mirkovic B, Latzel M, Gordon C, Crowhen D, Meis M, Wagener K, Schulte M. The Impact of Hearing Aids on Listening Effort and Listening-Related Fatigue - Investigations in a Virtual Realistic Listening Environment. Trends Hear 2024; 28:23312165241265199. [PMID: 39095047 PMCID: PMC11378347 DOI: 10.1177/23312165241265199]
Abstract
Participation in complex listening situations such as group conversations in noisy environments places high demands on the auditory system and on cognitive processing. Reports from hearing-impaired people indicate that strenuous listening situations occurring throughout the day lead to feelings of fatigue at the end of the day. The aim of the present study was to develop a suitable test sequence to evoke and measure listening effort (LE) and listening-related fatigue (LRF), and to evaluate the influence of hearing aid use on both dimensions in mild to moderately hearing-impaired participants. The chosen approach reconstructs a representative acoustic day (Time Compressed Acoustic Day [TCAD]) by means of an eight-part hearing-test sequence with a total duration of approximately 2½ h. For this purpose, the hearing-test sequence combined four different listening tasks with five different acoustic scenarios and was presented to the 20 test subjects using virtual acoustics in an open-field measurement, in aided and unaided conditions. Besides subjective ratings of LE and LRF, behavioral measures (response accuracy, reaction times) and an attention test (d2-R) were performed prior to and after the TCAD. Furthermore, stress hormones were evaluated by taking salivary samples. Subjective ratings of LRF increased throughout the test sequence, an effect that was stronger when testing unaided. In three of the eight listening tests, the aided condition led to significantly faster reaction times and higher response accuracies than the unaided condition. In the d2-R test, an interaction in processing speed between time (pre- vs. post-TCAD) and provision (unaided vs. aided) was found, suggesting an influence of hearing aid provision on LRF. A comparison of the averaged subjective ratings at the beginning and end of the TCAD shows a significant increase in LRF for both conditions. At the end of the TCAD, subjective fatigue was significantly lower when wearing hearing aids. The analysis of stress hormones did not reveal significant effects.
Affiliation(s)
- M Blümer
- Department of Otorhinolaryngology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
- J Heeren
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- B Mirkovic
- Department of Psychology, University of Oldenburg School VI-Medicine and Health Sciences, Oldenburg, Germany
- M Latzel
- Sonova Holding AG, Stäfa, Switzerland
- C Gordon
- Sonova New Zealand, Auckland, New Zealand
- D Crowhen
- Sonova New Zealand, Auckland, New Zealand
- M Meis
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- K Wagener
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- M Schulte
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
32
De Poortere N, Verhulst S, Degeest S, Keshishzadeh S, Dhooge I, Keppler H. Evaluation of Lifetime Noise Exposure History Reporting. J Speech Lang Hear Res 2023; 66:5129-5151. [PMID: 37988687 DOI: 10.1044/2023_jslhr-23-00266]
Abstract
PURPOSE The purpose of this study is to critically evaluate lifetime noise exposure history (LNEH) reporting. First, two different approaches to evaluate the cumulative LNEH were compared. Second, individual LNEH was associated with the subjects' hearing status. Third, loudness estimates of exposure activities, by means of Jokitulppo- and Ferguson-based exposure levels, were compared with dosimeter sound-level measurements. METHOD One hundred one young adults completed the questionnaires, and a subgroup of 30 subjects underwent audiological assessment. Pure-tone audiometry, speech-in-noise intelligibility, distortion product otoacoustic emissions, auditory brainstem responses, and envelope following responses were included. Fifteen out of the 30 subjects took part in a noisy activity while wearing a dosimeter. RESULTS First, results demonstrate that the structured questionnaire yielded a greater amount of information pertaining to the diverse activities, surpassing the insights obtained from an open-ended questionnaire. Second, no significant correlations between audiological assessment and LNEH were found. Lastly, the results indicate that Ferguson-based exposure levels offer a more precise estimation of the actual exposure levels, in contrast to Jokitulppo-based estimates. CONCLUSIONS We propose several recommendations for determining the LNEH. First, it is vital to define accurate loudness categories and corresponding allocated levels, with a preference for the loudness levels proposed by Ferguson et al. (2019), as identified in this study. Second, a structured questionnaire regarding LNEH is recommended, discouraging open-ended questioning. Third, it is essential to include a separate category exclusively addressing work-related activities, encompassing various activities for more accurate surveying.
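Cumulative lifetime noise exposure metrics of the kind compared above boil down to energy-summing the sound level assigned to each activity over its duration. A minimal sketch of that arithmetic, with hypothetical category levels and hours (not the Jokitulppo- or Ferguson-based values used in the study):

```python
import math

def cumulative_exposure_level(activities):
    """Equivalent continuous level over all listed activities.

    `activities` is a list of (level_dBA, hours) pairs; levels are
    energy-summed as 10*log10(sum(t_i * 10^(L_i/10)) / sum(t_i)).
    """
    total_hours = sum(hours for _, hours in activities)
    energy = sum(hours * 10 ** (level / 10) for level, hours in activities)
    return 10 * math.log10(energy / total_hours)

# Hypothetical loudness-category levels (dB(A)) and yearly hours:
yearly = [(85.0, 100.0), (95.0, 10.0), (70.0, 1000.0)]
leq = cumulative_exposure_level(yearly)  # about 78 dB(A) for these values
```

Note how the short, loud activity (10 h at 95 dB(A)) contributes as much energy as 100 h at 85 dB(A), which is why the choice of levels allocated to loudness categories matters so much for LNEH estimates.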
Affiliation(s)
- Nele De Poortere
- Department of Rehabilitation Sciences-Audiology, Ghent University, Belgium
- Sarah Verhulst
- Department of Information Technology-Hearing Technology at WAVES, Ghent University, Belgium
- Sofie Degeest
- Department of Rehabilitation Sciences-Audiology, Ghent University, Belgium
- Sarineh Keshishzadeh
- Department of Information Technology-Hearing Technology at WAVES, Ghent University, Belgium
- Ingeborg Dhooge
- Department of Ear, Nose and Throat, Ghent University Hospital, Belgium
- Department of Head and Skin, Ghent University, Belgium
- Hannah Keppler
- Department of Rehabilitation Sciences-Audiology, Ghent University, Belgium
- Department of Head and Skin, Ghent University, Belgium
33
Koprowska A, Marozeau J, Dau T, Serman M. The effect of phoneme-based auditory training on speech intelligibility in hearing-aid users. Int J Audiol 2023; 62:1048-1058. [PMID: 36301675 DOI: 10.1080/14992027.2022.2135032]
Abstract
OBJECTIVE Hearing loss commonly causes difficulties in understanding speech in the presence of background noise. The benefits of hearing aids in terms of speech intelligibility in challenging listening scenarios remain limited. The present study investigated whether phoneme-in-noise discrimination training improves phoneme identification and sentence intelligibility in noise in hearing-aid users. DESIGN Two groups of participants received either a two-week training program or a control intervention. Three phoneme categories were trained: onset consonants (C1), vowels (V), and post-vowel consonants (C2) in C1-V-C2-/i/ logatomes from the Danish nonsense word corpus (DANOK). A phoneme identification test and the hearing in noise test (HINT) were administered before and after the respective interventions and, for the training group only, after three months. STUDY SAMPLE Twenty individuals aged 63-79 years with mild-to-moderate sensorineural hearing loss and at least one year of experience using hearing aids. RESULTS The training provided an improvement in phoneme identification scores for vowels and post-vowel consonants, which was retained over three months. No significant performance improvement in HINT was found. CONCLUSION The study demonstrates that the training induced a robust refinement of auditory perception at the phoneme level but provides no evidence for generalisation to an untrained sentence intelligibility task.
Affiliation(s)
- Aleksandra Koprowska
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Copenhagen Hearing and Balance Center, Rigshospitalet, Copenhagen, Denmark
- Jeremy Marozeau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Copenhagen Hearing and Balance Center, Rigshospitalet, Copenhagen, Denmark
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Copenhagen Hearing and Balance Center, Rigshospitalet, Copenhagen, Denmark
34
Dziemba OC, Merz S, Hocke T. [Evaluative audiometry after cochlear implant provision. German Version]. HNO 2023; 71:669-677. [PMID: 37450021 PMCID: PMC10520209 DOI: 10.1007/s00106-023-01316-8]
Abstract
BACKGROUND One of the main treatment goals in cochlear implant (CI) patients is to improve speech perception. One of the target parameters is speech intelligibility in quiet. However, treatment results show a high variability, which has not been sufficiently explained so far. The aim of this noninterventional retrospective study was to elucidate this variability using a selected population of patients in whom etiology was not expected to have a negative impact on postoperative speech intelligibility. MATERIALS AND METHODS Audiometric findings of the CI follow-up of 28 adult patients after 6 months of CI experience were evaluated. These were related to the preoperative audiometric examination and evaluated with respect to a recently published predictive model for the postoperative monosyllabic score. RESULTS Inclusion of postoperative categorical loudness scaling and hearing loss for Freiburg numbers in the model explained 55% of the variability in fitting outcomes with respect to monosyllabic comprehension. CONCLUSION The results of this study suggest that much of the cause of variability in fitting outcomes can be captured by systematic postoperative audiometric checks. Immediate conclusions for CI system adjustments may be drawn from these results. However, the extent to which these are accepted by individual patients and thus lead to an improvement in outcome must be subject to further study, preferably prospective.
Affiliation(s)
- Oliver C Dziemba
- Klinik und Poliklinik für Hals‑, Nasen‑, Ohrenkrankheiten, Kopf- und Halschirurgie, Universitätsmedizin Greifswald, Ferdinand-Sauerbruch-Str., 17475, Greifswald, Germany
- Thomas Hocke
- Cochlear Deutschland GmbH & Co. KG, Hannover, Germany
35
Hey M, Mewes A, Hocke T. Speech comprehension in noise-considerations for ecologically valid assessment of communication skills ability with cochlear implants. HNO 2023; 71:26-34. [PMID: 36480047 PMCID: PMC10409840 DOI: 10.1007/s00106-022-01232-3]
Abstract
BACKGROUND Nowadays, cochlear implant (CI) patients mostly show good to very good speech comprehension in quiet, but there are known problems with communication in everyday noisy situations. There is thus a need for ecologically valid measurements of speech comprehension in real-life listening situations for hearing-impaired patients. The additional methodological effort must be balanced with clinical human and spatial resources. This study investigates possible simplifications of a complex measurement setup. METHODS The study included 20 adults from long-term follow-up after CI fitting with postlingual onset of hearing impairment. The complexity of the investigated listening situations was influenced by changing the spatiality of the noise sources and the temporal characteristics of the noise. To compare different measurement setups, speech reception thresholds (SRT) were measured unilaterally with different CI processors and settings. Ten normal-hearing subjects served as reference. RESULTS In a complex listening situation with four loudspeakers, differences in SRT from CI subjects to the control group of up to 8 dB were found. For CI subjects, this SRT correlated with the situation with frontal speech signal and fluctuating interference signal from the side with R2 = 0.69. For conditions with stationary interfering signals, R2 values <0.2 were found. CONCLUSION There is no universal solution for all audiometric questions with respect to the spatiality and temporal characteristics of noise sources. In the investigated context, simplification of the complex spatial audiometric setting while using fluctuating competing signals was possible.
Affiliation(s)
- Matthias Hey
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105, Kiel, Germany
- Alexander Mewes
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105, Kiel, Germany
36
Villard S, Perrachione TK, Lim SJ, Alam A, Kidd G. Energetic and informational masking place dissociable demands on listening effort: Evidence from simultaneous electroencephalography and pupillometry. J Acoust Soc Am 2023; 154:1152-1167. [PMID: 37610284 PMCID: PMC10449482 DOI: 10.1121/10.0020539]
Abstract
The task of processing speech masked by concurrent speech/noise can pose a substantial challenge to listeners. However, performance on such tasks may not directly reflect the amount of listening effort they elicit. Changes in pupil size and neural oscillatory power in the alpha range (8-12 Hz) are prominent neurophysiological signals known to reflect listening effort; however, measurements obtained through these two approaches are rarely correlated, suggesting that they may respond differently depending on the specific cognitive demands (and, by extension, the specific type of effort) elicited by specific tasks. This study aimed to compare changes in pupil size and alpha power elicited by different types of auditory maskers (highly confusable intelligible speech maskers, speech-envelope-modulated speech-shaped noise, and unmodulated speech-shaped noise maskers) in young, normal-hearing listeners. Within each condition, the target-to-masker ratio was set at the participant's individually estimated 75% correct point on the psychometric function. The speech masking condition elicited a significantly greater increase in pupil size than either of the noise masking conditions, whereas the unmodulated noise masking condition elicited a significantly greater increase in alpha oscillatory power than the speech masking condition, suggesting that the effort needed to solve these respective tasks may have different neural origins.
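The alpha-band (8-12 Hz) oscillatory power used above as an effort marker is conventionally obtained by integrating the EEG power spectrum over that band. A minimal, hedged sketch on a synthetic signal (not the study's analysis pipeline; all parameter values are illustrative):

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Mean periodogram power of `x` within the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

fs = 256.0
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
# Synthetic "EEG": a 10 Hz alpha oscillation plus broadband noise
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.2 * rng.standard_normal(t.size)
alpha = band_power(eeg, fs, 8.0, 12.0)  # dominated by the 10 Hz component
beta = band_power(eeg, fs, 16.0, 20.0)  # noise floor only
```

In practice a Welch estimate over artifact-cleaned epochs would replace the raw periodogram, but the band-integration step is the same.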
Affiliation(s)
- Sarah Villard
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Ayesha Alam
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
37
Herbert N, Keller M, Derleth P, Kühnel V, Strelcyk O. Optimised adaptive procedures and analysis methods for conducting speech-in-noise tests. Int J Audiol 2023; 62:776-786. [PMID: 35791080 DOI: 10.1080/14992027.2022.2087112]
Abstract
OBJECTIVE Speech-in-noise testing is a valuable part of audiological test batteries. Test standardisation using precise methods is desirable for ease of administration. This study investigated the accuracy and reliability of different Bayesian and non-Bayesian adaptive procedures and analysis methods for conducting speech-in-noise testing. DESIGN Matrix sentence tests using different numbers of sentences (10, 20, 30 and 50) and target intelligibilities (50 and 75%) were simulated for modelled listeners with various characteristics. The accuracy and reliability of seven different measurement procedures and three different data analysis methods were assessed. RESULTS The estimation of 50% intelligibility was accurate and showed excellent reliability across the majority of methods tested, even with relatively few stimuli. Estimating 75% intelligibility resulted in decreased accuracy. For this target, more stimuli were required for sufficient accuracy and selected Bayesian procedures surpassed the performance of others. Some Bayesian procedures were also superior in the estimation of psychometric function width. CONCLUSIONS A single standardised procedure could improve the consistency of the matrix sentence test across a range of target intelligibilities. Candidate adaptive procedures and analysis methods are discussed. These could also be applicable for other speech materials. Further testing with human participants is required.
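Adaptive procedures of the kind evaluated in such simulations place trials around a target intelligibility on the listener's psychometric function. A simple non-Bayesian example, assuming a 1-up/1-down staircase converging on the 50%-correct SNR of a simulated logistic listener (illustrative parameters, not the procedures tested in the paper):

```python
import math
import random

def p_correct(snr_db, srt_db, width_db=1.0):
    """Logistic psychometric function: 50% correct when snr_db == srt_db."""
    return 1.0 / (1.0 + math.exp(-(snr_db - srt_db) / width_db))

def staircase_srt(srt_true=-7.0, trials=50, start=0.0, step=2.0, seed=7):
    """1-up/1-down staircase on a simulated listener: the SNR is lowered
    after each correct response and raised after each error, so the track
    oscillates around the 50%-correct point."""
    rng = random.Random(seed)
    snr, history = start, []
    for _ in range(trials):
        correct = rng.random() < p_correct(snr, srt_true)
        history.append(snr)
        snr += -step if correct else step
    return sum(history[-20:]) / 20.0  # average over the converged tail

estimate = staircase_srt()  # lands near the simulated SRT of -7 dB SNR
```

Targeting 75% correct instead requires an asymmetric rule (e.g. 3-down/1-up) or a Bayesian placement rule, which is where the procedures compared in the study diverge in accuracy.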
Affiliation(s)
- Peter Derleth
- Research & Development, Sonova AG, Stäfa, Switzerland
- Volker Kühnel
- Research & Development, Sonova AG, Stäfa, Switzerland
38
Weißgerber T, Stöver T, Baumann U. [Speech perception in modulated noise assessed in bimodal CI users-German version]. HNO 2023:10.1007/s00106-023-01323-9. [PMID: 37395783 PMCID: PMC10403406 DOI: 10.1007/s00106-023-01323-9]
Abstract
BACKGROUND Although good speech perception in quiet is achievable with cochlear implants (CIs), speech perception in noise is severely impaired compared to normal hearing (NH). In the case of a bimodal CI fitting with a hearing aid (HA) in the opposite ear, the amount of residual acoustic hearing influences speech perception in noise. OBJECTIVE The aim of this work was to investigate speech perception in noise in a group of bimodal CI users and compare the results to age-matched HA users and people without subjective hearing loss, as well as to a young NH group. MATERIALS AND METHODS Study participants comprised 19 bimodal CI users, 39 HA users, and 40 subjectively NH subjects in the age group 60-90 years, as well as 14 young NH subjects. Speech reception thresholds (SRTs) in noise were adaptively measured using the Oldenburg Sentence Test for the two spatial test conditions S0N0 (speech and noise from the front) and multisource-noise field (MSNF; speech from the front, four spatially distributed noise sources), in continuous noise of the Oldenburg Sentence Test (Ol-noise) and amplitude-modulated Fastl noise (Fastl-noise). RESULTS With increasing hearing loss, the median SRT worsened significantly in all conditions. In test condition S0N0, the SRT of the CI group was 5.6 dB worse in Ol-noise than in the young NH group (mean age 26.4 years) and 22.5 dB worse in Fastl-noise; in MSNF, the differences were 6.6 dB (Ol-noise) and 17.3 dB (Fastl-noise), respectively. In the young NH group, median SRT in condition S0N0 improved by 11 dB due to gap listening; in the older NH group, SRTs improved by only 3.1 dB. In the HA and bimodal CI groups there was no gap listening effect, and SRTs in Fastl-noise were even worse than in Ol-noise. CONCLUSION With increasing hearing loss, speech perception in modulated noise is even more impaired than in continuous noise.
Affiliation(s)
- Tobias Weißgerber
- Audiological Acoustics, Department of Otolaryngology, University Hospital Frankfurt, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt am Main, Germany
- Timo Stöver
- Department of Otorhinolaryngology, University Hospital Frankfurt, Goethe University Frankfurt, Frankfurt am Main, Germany
- Uwe Baumann
- Audiological Acoustics, Department of Otolaryngology, University Hospital Frankfurt, Goethe University Frankfurt, Theodor-Stern-Kai 7, 60590, Frankfurt am Main, Germany
39
Sulas E, Hasan PY, Zhang Y, Patou F. Streamlining experiment design in cognitive hearing science using OpenSesame. Behav Res Methods 2023; 55:1965-1979. [PMID: 35794416 PMCID: PMC10250502 DOI: 10.3758/s13428-022-01886-5]
Abstract
Auditory science increasingly builds on concepts and testing paradigms that originated in behavioral psychology and cognitive neuroscience, an evolution that has given rise to the discipline now known as cognitive hearing science. Experimental cognitive hearing science paradigms call for hybrid cognitive and psychobehavioral tests such as those relating the attentional system, working memory, and executive functioning to low-level auditory acuity or speech intelligibility. Building complex multi-stimulus experiments can rapidly become time-consuming and error-prone. Platform-based experiment design can help streamline the implementation of cognitive hearing science experimental paradigms, promote the standardization of experiment design practices, and ensure reliability and control. Here, we introduce a set of features for the open-source Python-based OpenSesame platform that allows the rapid implementation of custom behavioral and cognitive hearing science tests, including complex multichannel audio stimuli, while interfacing with various synchronous inputs/outputs. Our integration includes advanced audio playback capabilities with multiple loudspeakers, an adaptive procedure, compatibility with standard I/Os, and their synchronization through an implementation of the Lab Streaming Layer protocol. We exemplify the capabilities of this extended OpenSesame platform with an implementation of the three-alternative forced-choice amplitude modulation detection test and discuss reliability and performance. The new features are available free of charge from GitHub: https://github.com/elus-om/BRM_OMEXP.
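The three-alternative forced-choice amplitude modulation detection test mentioned above presents one sinusoidally amplitude-modulated interval among unmodulated standards. A stimulus-generation sketch in plain NumPy (this does not use OpenSesame's playback API; parameter values are illustrative):

```python
import numpy as np

def am_tone(fc=1000.0, fm=8.0, m=0.5, dur=0.5, fs=44100):
    """Sinusoidally amplitude-modulated tone:
    (1 + m*sin(2*pi*fm*t)) * sin(2*pi*fc*t), modulation depth 0 <= m <= 1."""
    t = np.arange(int(dur * fs)) / fs
    return (1.0 + m * np.sin(2.0 * np.pi * fm * t)) * np.sin(2.0 * np.pi * fc * t)

# One 3-AFC trial: the modulated target hidden among two unmodulated standards;
# interval order would be randomized and the listener asked to pick the odd one.
intervals = [am_tone(m=0.5), am_tone(m=0.0), am_tone(m=0.0)]
```

An adaptive procedure would then shrink the depth m toward the listener's modulation detection threshold across trials.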
40
Müller-Deile J, Neben N, Dillier N, Büchner A, Mewes A, Junge F, Lai W, Schuessler M, Hey M. Comparisons of electrophysiological and psychophysical fitting methods for cochlear implants. Int J Audiol 2023; 62:118-128. [PMID: 34964676 DOI: 10.1080/14992027.2021.2015543]
Abstract
OBJECTIVE This study compared two different versions of an electrophysiology-based software-guided cochlear implant fitting method with a procedure employing standard clinical software. The two versions used electrically evoked compound action potential (ECAP) thresholds for either five or all twenty-two electrodes to determine sound processor stimulation level profiles. Objective and subjective performance results were compared between software-guided and clinical fittings. DESIGN Prospective, double-blind, single-subject repeated measures with permuted ABCA sequences. STUDY SAMPLE 48 postlingually deafened adults with ≤15 years of severe-to-profound deafness who were newly unilaterally implanted with a Nucleus device. RESULTS Speech recognition in noise and quiet was not significantly different between software-guided and standard methods, but there was a visit/learning effect. However, the 5-electrode method gave scores on the SSQ speech subscale 0.5 points lower than the standard method. Clinicians judged usability for all methods as acceptable, as did subjects for comfort. Analysis of stimulation levels and ECAP thresholds suggested that the 5-electrode method could be refined. CONCLUSIONS Speech recognition was not inferior using either version of the electrophysiology-based software-guided fitting method compared with the standard method. Subject-reported speech perception was slightly inferior with the five-electrode method. Software-guided methods saved about 10 min of clinician's time versus standard fittings.
Affiliation(s)
- Joachim Müller-Deile
- Audiology Consultant, Kiel-Holtenau, Germany
- Department of Otorhinolaryngology, Head and Neck Surgery, Christian-Albrechts-University of Kiel, Kiel, Germany
- Nicole Neben
- Cochlear Deutschland GmbH & Co. KG, Karl-Wiechert-Allee 76A, Hannover, Germany
- Norbert Dillier
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital and University of Zurich, Zurich, Switzerland
- Andreas Büchner
- German Hearing Centre at Hannover Medical School, Hannover, Germany
- Alexander Mewes
- Department of Otorhinolaryngology, Head and Neck Surgery, Christian-Albrechts-University of Kiel, Kiel, Germany
- Friederike Junge
- Cochlear Deutschland GmbH & Co. KG, Karl-Wiechert-Allee 76A, Hannover, Germany
- Waikong Lai
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital and University of Zurich, Zurich, Switzerland
- Next Sense Cochlear Implant Program, Australian Hearing Hub, Macquarie University, Sydney, Australia
- Mark Schuessler
- German Hearing Centre at Hannover Medical School, Hannover, Germany
- Matthias Hey
- Department of Otorhinolaryngology, Head and Neck Surgery, Christian-Albrechts-University of Kiel, Kiel, Germany
41
Mönnich AL, Strieth S, Bohnert A, Ernst BP, Rader T. The German hearing in noise test with a female talker: development and comparison with German male speech test. Eur Arch Otorhinolaryngol 2023; 280:3157-3169. [PMID: 36635424 DOI: 10.1007/s00405-023-07820-5]
Abstract
PURPOSE The aim of the study was to develop the German Hearing in Noise Test (HINT) with a female speaker, following the recommendation of the International Collegium of Rehabilitative Audiology (ICRA) to use a female speaker when creating new multilingual speech tests, to determine norms, and to compare these norms with those of German male-speaker speech tests: the male-speaker HINT and the Oldenburg Sentence Test (OLSA). METHODS The female-speaker HINT consists of the same speech material as the male-speaker HINT. After recording the speech material, 10 normal-hearing subjects were included to determine the performance-intensity function (PI function). 24 subjects took part in the measurements to determine the norms and compare them with the norms of the male HINT and the OLSA. Comparable adaptive, open-set methods under headphones (HINT) and in the sound field (OLSA) were used. RESULTS Acoustic-phonetic analysis demonstrated significant differences in mean fundamental frequency, its range, and mean speaking rate between the two HINT speakers. The norms calculated for three of the four tested conditions of the female-speaker HINT are not significantly different from the norms with a male speaker. No significant effect of the speaker's gender on the first HINT measurement and no significant correlation between the threshold results of the HINT and the OLSA were found. CONCLUSIONS The norms for the German HINT with a female speaker are comparable to the norms of the HINT with a male speaker. The speech intelligibility score of the HINT does not depend on the speaker's gender despite significant differences in acoustic-phonetic parameters between the female and male speakers' voices. Instead, speech intelligibility must be seen as a function of the speech material used.
Affiliation(s)
- Anna-Lena Mönnich
- Department of Otorhinolaryngology, University Medical Center Mainz, Mainz, Germany
- Sebastian Strieth
- Department of Otorhinolaryngology, University Medical Center Bonn (UKB), Bonn, Germany
- Andrea Bohnert
- Department of Otorhinolaryngology, University Medical Center Mainz, Mainz, Germany
- Tobias Rader
- Division of Audiology, Department of Otorhinolaryngology, University Hospital LMU Munich, Munich, Germany
- Abteilung Audiologie, LMU Klinikum, Klinik für Hals-Nasen-Ohrenheilkunde, Marchioninistr. 15, 81377, Munich, Germany
42
Dziemba OC, Oberhoffner T, Müller A. [OLSA level control in monaural speech audiometry in background noise for the evaluation of the CI fitting result]. HNO 2023; 71:100-105. [PMID: 36469098 PMCID: PMC9894967 DOI: 10.1007/s00106-022-01251-0]
Abstract
SCIENTIFIC BACKGROUND Speech audiometry under the influence of background noise is a fundamental part of evaluating the outcome of hearing care. So far, there are no recommendations for selecting a suitable method for adaptive speech audiometry in background noise in cochlear implant (CI) care, i.e., whether to adaptively vary the level of the speech signal (S) with constant noise (N) or to adaptively vary the level of N with constant S. OBJECTIVES Do the measurement results of the monaural speech recognition threshold in noise (SRT) with the Oldenburg Sentence Test (OLSA) depend on the choice of level control? MATERIAL AND METHODS A total of 50 measurement series with the OLSA in noise and the Freiburg speech intelligibility test in quiet (FBE) in middle-aged CI patients from clinical routine were evaluated. RESULTS There is no significant difference in the measurement results between the two level controls as long as the SRT is below 5 dB SNR. Below 55% monosyllabic intelligibility in quiet, the SRT in noise becomes greater than 5 dB SNR. CONCLUSION From a clinical, audiological, and methodological point of view, it is advisable to carry out the adaptive monaural speech intelligibility measurement with a constant speech signal at 65 dB SPL.
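The two level-control variants compared here differ only in which level moves: with the same response sequence and step size, adapting the speech level against constant noise and adapting the noise level against constant speech converge to the same SNR, just at different absolute levels. A toy sketch (illustrative levels and responses, not the study's data):

```python
def track_srt(fixed_db, start_db, adapt_speech, responses, step=2.0):
    """Adaptive level track: after a correct response the task is made harder,
    after an error easier, by moving whichever level is adaptive.
    Returns the final SNR (speech level minus noise level) in dB."""
    level = start_db
    for correct in responses:
        if adapt_speech:                    # harder = lower speech level
            level += -step if correct else step
        else:                               # harder = higher noise level
            level += step if correct else -step
    return (level - fixed_db) if adapt_speech else (fixed_db - level)

responses = [True, True, False, True, False]
var_speech = track_srt(65.0, 65.0, True, responses)   # constant noise at 65 dB
var_noise = track_srt(65.0, 65.0, False, responses)   # constant speech at 65 dB
assert var_speech == var_noise == -2.0  # same SNR, different absolute levels
```

The clinical difference arises from the absolute levels: with constant speech the noise level keeps rising for poor performers, whereas constant speech at a fixed presentation level keeps the target within the processor's comfortable range, which is why the abstract favors that variant.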
Affiliation(s)
- Oliver C Dziemba
- Klinik und Poliklinik für Hals-, Nasen-, Ohrenkrankheiten, Kopf- und Halschirurgie, Universitätsmedizin Greifswald, Ferdinand-Sauerbruch-Straße, 17489, Greifswald, Deutschland.
- Tobias Oberhoffner
- Klinik und Poliklinik für Hals-Nasen-Ohrenheilkunde, Kopf- und Halschirurgie "Otto Körner", Universitätsmedizin Rostock, Rostock, Deutschland
- Alexander Müller
- Klinik für Hals-Nasen-Ohrenheilkunde, Kopf- und Halschirurgie, Plastische Operationen, Vivantes Hörzentrum Berlin (HZB), Vivantes Klinikum im Friedrichshain, Berlin, Deutschland
43
Effects of number of maxima and electrical dynamic range on speech-in-noise perception with an “n-of-m” cochlear-implant strategy. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104169] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022]
44
Jürgens T, Wesarg T, Oetting D, Jung L, Williges B. Spatial speech-in-noise performance in simulated single-sided deaf and bimodal cochlear implant users in comparison with real patients. Int J Audiol 2023; 62:30-43. [PMID: 34962428 DOI: 10.1080/14992027.2021.2015633] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2021] [Revised: 11/30/2021] [Accepted: 12/03/2021] [Indexed: 01/05/2023]
Abstract
OBJECTIVE Speech reception thresholds (SRTs) in spatial scenarios were measured in simulated cochlear implant (CI) listeners with either contralateral normal hearing or aided hearing impairment (bimodal), and compared with the SRTs of real patients measured using the exact same paradigm, to assess the goodness of the simulation. DESIGN CI listening was simulated using a vocoder incorporating actual CI signal processing and physiological details of electric stimulation on one side. Unprocessed signals or a simulation of aided moderate or profound hearing impairment was used contralaterally. Three spatial speech-in-noise scenarios were tested using virtual acoustics to assess spatial release from masking (SRM) and combined benefit. STUDY SAMPLE Eleven normal-hearing listeners participated in the experiment. RESULTS For contralateral normal and aided moderately impaired hearing, bilaterally assessed SRTs were not statistically different from the unilateral SRTs of the better ear, indicating "better-ear listening". A combined benefit was found only for contralateral profoundly impaired hearing. As in patients, SRM was highest for contralateral normal hearing and decreased systematically with more severe simulated impairment. Comparison with actual patients showed good reproduction of SRTs, SRM, and better-ear listening. CONCLUSIONS The simulations reproduced better-ear listening as in patients and suggest that a combined benefit in spatial scenes predominantly occurs when both ears show poor speech-in-noise performance.
Affiliation(s)
- Tim Jürgens
- Institute of Acoustics, University of Applied Sciences Lübeck, Lübeck, Germany
- Medical Physics and Cluster of Excellence "Hearing4all", Carl-von-Ossietzky University, Oldenburg, Germany
- Thomas Wesarg
- Faculty of Medicine, Department of Otorhinolaryngology - Head and Neck Surgery, Medical Center, University of Freiburg, Freiburg, Germany
- Lorenz Jung
- Faculty of Medicine, Department of Otorhinolaryngology - Head and Neck Surgery, Medical Center, University of Freiburg, Freiburg, Germany
- Ben Williges
- Medical Physics and Cluster of Excellence "Hearing4all", Carl-von-Ossietzky University, Oldenburg, Germany
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
45
Rennies J, Warzybok A, Kollmeier B, Brand T. Spatio-temporal Integration of Speech Reflections in Hearing-Impaired Listeners. Trends Hear 2022; 26:23312165221143901. [PMID: 36537084 PMCID: PMC9772954 DOI: 10.1177/23312165221143901] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022] Open
Abstract
Speech recognition in rooms requires the temporal integration of reflections that arrive with a certain delay after the direct sound. It is commonly assumed that there is a temporal window of about 50-100 ms during which reflections can be integrated with the direct sound, while later reflections are detrimental to speech intelligibility. This concept was challenged in a recent study by employing binaural room impulse responses (RIRs) with systematically varied interaural phase differences (IPDs) and amplitude of the direct sound and a variable number of reflections delayed by up to 200 ms. When amplitude or IPD favored late RIR components, normal-hearing (NH) listeners appeared to be capable of focusing on these components rather than on the preceding direct sound, which contrasted with the common concept of considering early RIR components as useful and late components as detrimental. The present study investigated speech intelligibility in the same conditions in hearing-impaired (HI) listeners. The data indicate that HI listeners were generally less able to "ignore" the direct sound than NH listeners when the most useful information was confined to late RIR components. Some HI listeners showed a remarkable inability to integrate across multiple reflections and to optimally "shift" their temporal integration window, which was quite dissimilar to NH listeners. This effect was most pronounced in conditions requiring spatial and temporal integration and could provide new challenges for individual prediction models of binaural speech intelligibility.
Affiliation(s)
- Jan Rennies
- Fraunhofer Institute for Digital Media Technology IDMT, Project Group Hearing, Speech and Audio Technology, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Oldenburg, Germany
- Jan Rennies, Fraunhofer Institute for Digital Media Technology IDMT, Department for Hearing, Speech and Audio Technology, Marie-Curie-Str. 2, Oldenburg, Niedersachsen 26129, Germany.
- Anna Warzybok
- Medical Physics Group, Department für Medizinische Physik und Akustik, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Oldenburg, Germany
- Birger Kollmeier
- Fraunhofer Institute for Digital Media Technology IDMT, Project Group Hearing, Speech and Audio Technology, Oldenburg, Germany
- Medical Physics Group, Department für Medizinische Physik und Akustik, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Oldenburg, Germany
- Thomas Brand
- Medical Physics Group, Department für Medizinische Physik und Akustik, Oldenburg, Germany
- Cluster of Excellence Hearing4all, Oldenburg, Germany
46
Bayesian Adaptive Estimation with Theoretical Bound: An Exploration-Exploitation Approach. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1143056. [PMID: 36544859 PMCID: PMC9763008 DOI: 10.1155/2022/1143056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/11/2022] [Revised: 11/12/2022] [Accepted: 11/25/2022] [Indexed: 12/14/2022]
Abstract
This paper investigates the theoretical bound for reducing parameter uncertainty in Bayesian adaptive estimation of psychometric functions and proposes an exploration-exploitation (E-E) approach to improve the computational efficiency of parameter estimation. As the experiment proceeds, the uncertainty of the parameters decreases dramatically and the gap between the maximal mutual information and the theoretical bound narrows, so the advantage of the classical Bayesian adaptive estimation algorithm diminishes. The proposed approach trades off exploration (parameter posterior uncertainty) against exploitation (parameter mean estimation). The experimental results show that the E-E approach estimates the parameters of psychometric functions with the same convergence and reduces the computation time by more than 34.27%, compared with the classical Bayesian adaptive estimation.
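As background for the E-E trade-off, the following is a minimal grid-based sketch of the classical Bayesian adaptive procedure that the paper modifies, restricted to a one-dimensional threshold posterior; the logistic observer, grid, and function names are illustrative assumptions, not the paper's implementation. Each trial places the stimulus where the expected posterior entropy is lowest (equivalently, where mutual information is highest), then updates the posterior with Bayes' rule.

```python
import math
import random

def pf(x, threshold, slope=1.0):
    """Logistic psychometric function."""
    return 1.0 / (1.0 + math.exp(-slope * (x - threshold)))

def posterior_update(grid, post, x, correct):
    """Bayes' rule over the threshold grid for one observed response."""
    like = [pf(x, t) if correct else 1.0 - pf(x, t) for t in grid]
    new = [l * p for l, p in zip(like, post)]
    z = sum(new)
    return [v / z for v in new]

def entropy(post):
    return -sum(p * math.log(p) for p in post if p > 0)

def expected_entropy(grid, post, x):
    """One-step lookahead: posterior entropy averaged over both responses."""
    p_corr = sum(p * pf(x, t) for t, p in zip(grid, post))
    h = 0.0
    for resp, p_resp in ((True, p_corr), (False, 1.0 - p_corr)):
        if p_resp > 0:
            h += p_resp * entropy(posterior_update(grid, post, x, resp))
    return h

def estimate_threshold(true_threshold=1.5, trials=40, seed=3):
    rng = random.Random(seed)
    grid = [i * 0.25 for i in range(-20, 21)]   # candidate thresholds
    post = [1.0 / len(grid)] * len(grid)        # uniform prior
    for _ in range(trials):
        # exploration step: most informative stimulus placement
        x = min(grid, key=lambda c: expected_entropy(grid, post, c))
        correct = rng.random() < pf(x, true_threshold)
        post = posterior_update(grid, post, x, correct)
    return sum(t * p for t, p in zip(grid, post))  # posterior mean

print(estimate_threshold())
```

The E-E approach described in the abstract keeps this update rule but, as the posterior tightens and the information gain approaches the theoretical bound, increasingly mixes in exploitation, i.e., placement based on the current parameter mean estimate, to save computation.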
47
Hey M, Mewes A, Hocke T. [Speech comprehension in noise: considerations for the ecologically valid assessment of communication abilities with cochlear implants. German version]. HNO 2022; 70:861-869. [PMID: 36301326 PMCID: PMC9691490 DOI: 10.1007/s00106-022-01234-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/01/2022] [Indexed: 11/25/2022]
Abstract
BACKGROUND Nowadays, cochlear implant (CI) patients mostly show good to very good speech comprehension in quiet, but communication in noisy everyday situations is a known problem. There is thus a need for ecologically valid measurements of speech comprehension in real-life listening situations for hearing-impaired patients. The additional methodological effort must be balanced against clinical staffing and space resources. This study investigates possible simplifications of a complex measurement setup. METHODS The study included 20 adults with postlingual onset of hearing impairment from long-term follow-up after CI fitting. The complexity of the investigated listening situations was varied by changing the spatial arrangement of the noise sources and the temporal characteristics of the noise. To compare different measurement setups, speech reception thresholds (SRT) were measured unilaterally with different CI processors and settings. Ten normal-hearing subjects served as a reference. RESULTS In a complex listening situation with four loudspeakers, differences in SRT between CI subjects and the control group of up to 8 dB were found. For CI subjects, this SRT correlated with the condition with a frontal speech signal and a fluctuating interfering signal from the side (R² = 0.69). For conditions with stationary interfering signals, R² values < 0.2 were found. CONCLUSION There is no universal solution for all audiometric questions with respect to the spatial arrangement and temporal characteristics of noise sources. In the investigated context, simplifying the complex spatial audiometric setup while using fluctuating competing signals was possible.
Affiliation(s)
- Matthias Hey
- Klinik für Hals-Nasen-Ohren-Heilkunde, Kopf- und Halschirurgie; Audiologie, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105, Kiel, Deutschland.
- Alexander Mewes
- Klinik für Hals-Nasen-Ohren-Heilkunde, Kopf- und Halschirurgie; Audiologie, UKSH, Campus Kiel, Arnold-Heller-Straße 14, 24105, Kiel, Deutschland
48
Ozmeral EJ, Higgins NC. Defining functional spatial boundaries using a spatial release from masking task. JASA EXPRESS LETTERS 2022; 2:124402. [PMID: 36586966 PMCID: PMC9720634 DOI: 10.1121/10.0015356] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 11/11/2022] [Indexed: 06/17/2023]
Abstract
The classic spatial release from masking (SRM) task measures speech recognition thresholds for discrete separation angles between a target and a masker. Alternatively, this study used a modified SRM task that adaptively measured the spatial-separation angle needed between a continuous male target stream (speech with digits) and two female masker streams to achieve a specific SRM. On average, 20 young normal-hearing listeners needed less spatial separation for 6 dB of release than for 9 dB of release, and the presence of background babble reduced across-listener variability on the paradigm. Future work is needed to better understand the psychometric properties of this adaptive procedure.
Affiliation(s)
- Erol J Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
- Nathan C Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida 33620, USA
49
Krueger M, Schulte M, Brand T. Assessing and Modeling Spatial Release From Listening Effort in Listeners With Normal Hearing: Reference Ranges and Effects of Noise Direction and Age. Trends Hear 2022; 26:23312165221129407. [PMID: 36285532 PMCID: PMC9618758 DOI: 10.1177/23312165221129407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023] Open
Abstract
Listening to speech in noisy environments is challenging and effortful. Factors like the signal-to-noise ratio (SNR), the spatial separation between target speech and noise interferer(s), and possibly also the listener's age might influence perceived listening effort (LE). This study measured and modeled the effect of the spatial separation of target speech and interfering stationary speech-shaped noise on the perceived LE and its relation to the age of the listeners. Reference ranges for the relationship between subjectively perceived LE and SNR for different noise azimuths were established. For this purpose, 70 listeners with normal hearing, drawn from three age groups, rated the perceived LE using the Adaptive Categorical Listening Effort Scaling method (ACALES, Krueger et al., 2017a) with speech from the front and noise from 0°, 90°, 135°, or 180° azimuth. Based on these data, the spatial release from listening effort (SRLE) was calculated. The noise azimuth had a strong effect on SRLE, with the highest release for 135°. The binaural speech intelligibility model (BSIM2020, Hauth et al., 2020) predicted SRLE very well at negative SNRs, but overestimated it at positive SNRs. No significant effect of age was found on the respective subjective ratings. Therefore, the reference ranges were determined independently of age. These reference ranges can be used for the classification of LE measurements. However, when the increase of the perceived LE with SNR was analyzed, a significant age difference was found between the listeners of the youngest and oldest groups when considering the upper range of the LE function.
Affiliation(s)
- Melanie Krueger
- Hörzentrum Oldenburg gGmbH, Oldenburg, Germany
- Melanie Krueger, Hörzentrum Oldenburg gGmbH, Marie-Curie-Straße 2, D-26129 Oldenburg, Germany.
- Thomas Brand
- Medizinische Physik, Department für Medizinische Physik und Akustik, Fakultät VI, Carl-von-Ossietzky Universität Oldenburg, Oldenburg, Germany
50
Ibelings S, Brand T, Holube I. Speech Recognition and Listening Effort of Meaningful Sentences Using Synthetic Speech. Trends Hear 2022; 26:23312165221130656. [PMID: 36203405 PMCID: PMC9549212 DOI: 10.1177/23312165221130656] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
Abstract
Speech-recognition tests are an important component of audiology. However, the development of such tests can be time consuming. The aim of this study was to investigate whether a Text-To-Speech (TTS) system can reduce the cost of development, and whether comparable results can be achieved in terms of speech recognition and listening effort. For this, the everyday sentences of the German Göttingen sentence test were synthesized for both a female and a male speaker using a TTS system. In a preliminary study, this system was rated as good, but worse than the natural reference. Due to the Covid-19 pandemic, the measurements took place online. Each set of speech material was presented at three fixed signal-to-noise ratios. The participants' responses were recorded and analyzed offline. Compared to the natural speech, the adjusted psychometric functions for the synthetic speech, independent of the speaker, resulted in an improvement of the speech-recognition threshold (SRT) by approximately 1.2 dB. The slopes, which were independent of the speaker, were about 15 percentage points per dB. The time periods between the end of the stimulus presentation and the beginning of the verbal response (verbal response time) were comparable for all speakers, suggesting no difference in listening effort. The SRT values obtained in the online measurement for the natural speech were comparable to published data. In summary, the time and effort for the development of speech-recognition tests may be significantly reduced by using a TTS system. This finding provides the opportunity to develop new speech tests with a large amount of speech material.
Affiliation(s)
- Saskia Ibelings
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Saskia Ibelings, Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Ofener Str. 16/19, D-26121 Oldenburg, Germany.
- Thomas Brand
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Inga Holube
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany