1
Zaar J, Simonsen LB, Sanchez-Lopez R, Laugesen S. The Audible Contrast Threshold (ACT) test: A clinical spectro-temporal modulation detection test. Hear Res 2024; 453:109103. [PMID: 39243488] [DOI: 10.1016/j.heares.2024.109103]
Abstract
Over the last decade, multiple studies have shown that hearing-impaired listeners' speech-in-noise reception ability, measured with audibility compensation, is closely associated with performance in spectro-temporal modulation (STM) detection tests. STM tests thus have the potential to provide highly relevant beyond-the-audiogram information in the clinic, but the available STM tests have not been optimized for clinical use in terms of test duration, required equipment, and procedural standardization. The present study introduces a quick-and-simple clinically viable STM test, named the Audible Contrast Threshold (ACT™) test. First, an experimenter-controlled STM measurement paradigm was developed, in which the patient is presented bilaterally with a continuous audibility-corrected noise via headphones and asked to press a pushbutton whenever they hear an STM target sound in the noise. The patient's threshold is established using a Hughson-Westlake tracking procedure with a three-out-of-five criterion and then refined by post-processing the collected data using a logistic function. Different stimulation paradigms were tested in 28 hearing-impaired participants and compared to data previously measured in the same participants with an established STM test paradigm. The best stimulation paradigm showed excellent test-retest reliability and good agreement with the established laboratory version. Second, the best stimulation paradigm with 1-second noise "waves" (windowed noise) was chosen, further optimized with respect to step size and logistic-function fitting, and tested in a population of 25 young normal-hearing participants using various types of transducers to obtain normative data. Based on these normative data, the "normalized Contrast Level" (in dB nCL) scale was defined, where 0 ± 4 dB nCL corresponds to normal performance and elevated dB nCL values indicate the degree of audible contrast loss. Overall, the results of the present study suggest that the ACT test may be considered a reliable, quick-and-simple (and thus clinically viable) test of STM sensitivity. The ACT can be measured directly after the audiogram using the same set up, adding only a few minutes to the process.
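For readers who want to see the shape of the post-processing step described above, the following is a minimal sketch (not the authors' implementation): yes/no responses collected at the contrast levels visited by a Hughson-Westlake-style track are refitted with a logistic function, and a refined threshold is read off at the 50% point. The track values, criterion, and starting guesses are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical contrast levels (dB) visited by the up-down track and the pushbutton outcomes
levels = np.array([0, 4, 8, 12, 8, 12, 8, 12, 16, 12, 16, 12], dtype=float)
detected = np.array([0, 0, 0, 1, 0, 1, 0, 1, 1, 1, 1, 0], dtype=float)

def logistic(x, x0, k):
    """Detection probability as a function of modulation contrast (dB)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Least-squares refit of threshold (x0) and slope (k) to the trial-by-trial data
(x0, k), _ = curve_fit(logistic, levels, detected, p0=[np.median(levels), 0.5])
print(f"Refined threshold (50% detection): {x0:.1f} dB, slope parameter: {k:.2f}/dB")
```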
Affiliation(s)
- Johannes Zaar
- Eriksholm Research Centre, Rørtangvej 20, 3070 Snekkersten, Denmark; Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
| | - Lisbeth Birkelund Simonsen
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark; Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark.
| | - Raul Sanchez-Lopez
- Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark; Institute of Globally Distributed Open Research and Education (IGDORE)
| | - Søren Laugesen
- Interacoustics Research Unit, Ørsteds Plads, Building 352, 2800 Kgs. Lyngby, Denmark
2
Buth S, Baljić I, Mewes A, Hey M. [Speech discrimination with separated signal sources and sound localization with speech stimuli: Learning effects and reproducibility]. HNO 2024; 72:504-514. [PMID: 38536465] [PMCID: PMC11192817] [DOI: 10.1007/s00106-024-01426-x]
Abstract
BACKGROUND Binaural hearing enables better speech comprehension in noisy environments and is necessary for acoustic spatial orientation. This study investigates speech discrimination in noise with spatially separated signal sources and measures sound localization. The aim was to characterize the properties and reproducibility of two measurement techniques that appear suitable for describing these aspects of binaural hearing. MATERIALS AND METHODS Speech reception thresholds (SRT) in noise and test-retest reliability were obtained from 55 normal-hearing adults for a spatial loudspeaker setup with angles of ±45° and ±90° using the Oldenburg sentence test. Sound localization was investigated in a semicircle and a full-circle setup (7 and 12 equidistant loudspeakers, respectively). RESULTS SRTs (S-45N45: -14.1 dB SNR; S45N-45: -16.4 dB SNR; S0N90: -13.1 dB SNR; S0N-90: -13.4 dB SNR) and test-retest reliability (4 to 6 dB SNR) were obtained for speech intelligibility in noise with separated signals. The procedural learning effect for this setup could only be mitigated with 120 training sentences. Significantly smaller SRT values, reflecting better speech discrimination, were found for the test situation of the right ear compared with the left ear. RMS localization errors were 1.9° in the semicircle setup and 11.1° in the full-circle setup, with better results obtained in the retest of the full-circle setup. CONCLUSION When using the Oldenburg sentence test in noise with spatially separated signals, a training session of 120 sentences is mandatory to minimize the procedural learning effect. Ear-specific SRT values for speech discrimination in noise with separated signal sources are required, probably because of the right-ear advantage. Training is recommended for sound localization in the full-circle setup.
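As a point of reference for the RMS localization values reported above, the error metric is simply the root-mean-square difference between presented and perceived azimuths; a small sketch with invented response data:

```python
import numpy as np

# Hypothetical presented loudspeaker azimuths and a listener's responses (degrees)
presented = np.array([-90, -60, -30, 0, 30, 60, 90], dtype=float)
responses = np.array([-85, -63, -28, 2, 30, 55, 95], dtype=float)

rms_error = np.sqrt(np.mean((responses - presented) ** 2))
print(f"RMS localization error: {rms_error:.1f} deg")
```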
Affiliation(s)
- Svenja Buth
- Medizinische Fakultät, Christian-Albrechts-Universität zu Kiel, Kiel, Deutschland.
- HNO-Klinik, Audiologie, Campus Kiel, Universitätsklinikum Schleswig-Holstein, Arnold-Heller-Str. 3, Haus B1, 24105, Kiel, Deutschland.
| | - Izet Baljić
- Klinik für Hals‑, Nasen‑, Ohrenheilkunde, Audiologisches Zentrum, Helios Klinikum Erfurt, Erfurt, Deutschland
| | - Alexander Mewes
- Klinik für Hals‑, Nasen‑, Ohrenheilkunde, Kopf- und Halschirurgie, Audiologie, UKSH, Kiel, Deutschland
| | - Matthias Hey
- Klinik für Hals‑, Nasen‑, Ohrenheilkunde, Kopf- und Halschirurgie, Audiologie, UKSH, Kiel, Deutschland
3
Lie S, Zekveld AA, Smits C, Kramer SE, Versfeld NJ. Learning effects in speech-in-noise tasks: Effect of masker modulation and masking release. J Acoust Soc Am 2024; 156:341-349. [PMID: 38990038] [DOI: 10.1121/10.0026519]
Abstract
Previous research has shown that learning effects are present for speech intelligibility in temporally modulated (TM) noise, but not in stationary noise. The present study aimed to gain more insight into the factors that might affect the time course (the number of trials required to reach stable performance) and size [the improvement in the speech reception threshold (SRT)] of the learning effect. Two hypotheses were addressed: (1) learning effects are present in both TM and spectrally modulated (SM) noise and (2) the time course and size of the learning effect depend on the amount of masking release caused by either TM or SM noise. Eighteen normal-hearing adults (23-62 years) participated in SRT measurements, in which they listened to sentences in six masker conditions, including stationary, TM, and SM noise conditions. The results showed learning effects in all TM and SM noise conditions, but not for the stationary noise condition. The learning effect was related to the size of masking release: a larger masking release was accompanied by an increased time course of the learning effect and a larger learning effect. The results also indicate that speech is processed differently in SM noise than in TM noise.
Affiliation(s)
- Sisi Lie
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
| | - Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
| | - Cas Smits
- Amsterdam UMC, University of Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Meibergdreef, Amsterdam Public Health research institute, Amsterdam, The Netherlands
| | - Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
| | - Niek J Versfeld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, De Boelelaan, Amsterdam Public Health research institute, Amsterdam, The Netherlands
4
Han JS, Lim JH, Kim Y, Aliyeva A, Seo JH, Lee J, Park SN. Hearing Rehabilitation With a Chat-Based Mobile Auditory Training Program in Experienced Hearing Aid Users: Prospective Randomized Controlled Study. JMIR Mhealth Uhealth 2024; 12:e50292. [PMID: 38329324] [PMCID: PMC10867308] [DOI: 10.2196/50292]
Abstract
Background Hearing rehabilitation with auditory training (AT) is necessary to improve speech perception ability in patients with hearing loss. However, face-to-face AT has not been widely implemented due to its high cost and personnel requirements. Therefore, there is a need for the development of a patient-friendly, mobile-based AT program. Objective In this study, we evaluated the effectiveness of hearing rehabilitation with our chat-based mobile AT (CMAT) program for speech perception performance among experienced hearing aid (HA) users. Methods A total of 42 adult patients with hearing loss who had worn bilateral HAs for more than 3 months were enrolled and randomly allocated to the AT or control group. In the AT group, CMAT was performed for 30 minutes a day for 2 months, while no intervention was provided in the control group. During the study, 2 patients from the AT group and 1 patient from the control group dropped out. At 0-, 1- and 2-month visits, results of hearing tests and speech perception tests, compliance, and questionnaires were prospectively collected and compared in the 2 groups. Results The AT group (n=19) showed better improvement in word and sentence perception tests compared to the control group (n=20; P=.04 and P=.03, respectively), while no significant difference was observed in phoneme and consonant perception tests (both P>.05). All participants were able to use CMAT without any difficulties, and 85% (17/20) of the AT group completed required training sessions. There were no changes in time or completion rate between the first and the second month of AT. No significant difference was observed between the 2 groups in questionnaire surveys. Conclusions After using the CMAT program, word and sentence perception performance was significantly improved in experienced HA users. In addition, CMAT showed high compliance and adherence over the 2-month study period. Further investigations are needed to validate long-term efficacy in a larger population. TRIAL REGISTRATION Clinical Research Information Service (CRiS) KCT0006509; https://cris.nih.go.kr/cris/search/detailSearch.do?seq=22110&search_page=L.
Affiliation(s)
- Jae Sang Han
- Department of Otorhinolaryngology–Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Ji Hyung Lim
- Department of Otorhinolaryngology–Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Yeonji Kim
- Department of Otorhinolaryngology–Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Aynur Aliyeva
- Department of Otorhinolaryngology–Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
- Department of Pediatric Otolaryngology, Cincinnati Children’s Hospital, CincinnatiOH, United States
| | - Jae-Hyun Seo
- Department of Otorhinolaryngology–Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
| | - Jaehyuk Lee
- Nara Information Co, Ltd, Seoul, Republic of Korea
| | - Shi Nae Park
- Department of Otorhinolaryngology–Head and Neck Surgery, Seoul St. Mary’s Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea
5
Ibelings S, Brand T, Ruigendijk E, Holube I. Development of a Phrase-Based Speech-Recognition Test Using Synthetic Speech. Trends Hear 2024; 28:23312165241261490. [PMID: 39051703] [PMCID: PMC11273571] [DOI: 10.1177/23312165241261490]
Abstract
Speech-recognition tests are widely used in both clinical and research audiology. The purpose of this study was the development of a novel speech-recognition test that combines concepts of different speech-recognition tests to reduce training effects and allows for a large set of speech material. The new test consists of four different words per trial in a meaningful construct with a fixed structure, the so-called phrases. Various free databases were used to select the words and to determine their frequency. Highly frequent nouns were grouped into thematic categories and combined with related adjectives and infinitives. After discarding inappropriate and unnatural combinations, and eliminating duplications of (sub-)phrases, a total number of 772 phrases remained. Subsequently, the phrases were synthesized using a text-to-speech system. The synthesis significantly reduces the effort compared to recordings with a real speaker. After excluding outliers, measured speech-recognition scores for the phrases with 31 normal-hearing participants at fixed signal-to-noise ratios (SNR) revealed speech-recognition thresholds (SRT) for each phrase varying up to 4 dB. The median SRT was -9.1 dB SNR and thus comparable to existing sentence tests. The psychometric function's slope of 15 percentage points per dB is also comparable and enables efficient use in audiology. Summarizing, the principle of creating speech material in a modular system has many potential applications.
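A toy sketch of the modular construction principle described above: words from thematic lists are combined in a fixed four-word structure and exact duplicates are discarded. The slot template and the word lists are invented for illustration and are not the study's material.

```python
from itertools import product

# Invented word lists for a fixed four-slot structure (illustrative only)
numerals   = ["zwei", "drei", "vier"]
adjectives = ["alte", "neue"]
nouns      = ["Teller", "Gläser"]
verbs      = ["kaufen", "spülen"]

phrases = set()
for n, a, o, v in product(numerals, adjectives, nouns, verbs):
    phrases.add(f"{n} {a} {o} {v}")   # set membership removes exact duplicates

# In the study, unnatural combinations and duplicated (sub-)phrases were additionally removed
print(len(phrases), "candidate phrases, e.g.", sorted(phrases)[0])
```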
Affiliation(s)
- Saskia Ibelings
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
| | - Thomas Brand
- Medizinische Physik, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
| | - Esther Ruigendijk
- Cluster of Excellence Hearing4All, Oldenburg, Germany
- Department of Dutch, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Inga Holube
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
- Cluster of Excellence Hearing4All, Oldenburg, Germany
6
Ibelings S, Brand T, Holube I. Speech Recognition and Listening Effort of Meaningful Sentences Using Synthetic Speech. Trends Hear 2022; 26:23312165221130656. [PMID: 36203405] [PMCID: PMC9549212] [DOI: 10.1177/23312165221130656]
Abstract
Speech-recognition tests are an important component of audiology. However, the development of such tests can be time consuming. The aim of this study was to investigate whether a Text-To-Speech (TTS) system can reduce the cost of development, and whether comparable results can be achieved in terms of speech recognition and listening effort. For this, the everyday sentences of the German Göttingen sentence test were synthesized for both a female and a male speaker using a TTS system. In a preliminary study, this system was rated as good, but worse than the natural reference. Due to the Covid-19 pandemic, the measurements took place online. Each set of speech material was presented at three fixed signal-to-noise ratios. The participants' responses were recorded and analyzed offline. Compared to the natural speech, the adjusted psychometric functions for the synthetic speech, independent of the speaker, resulted in an improvement of the speech-recognition threshold (SRT) by approximately 1.2 dB. The slopes, which were independent of the speaker, were about 15 percentage points per dB. The time periods between the end of the stimulus presentation and the beginning of the verbal response (verbal response time) were comparable for all speakers, suggesting no difference in listening effort. The SRT values obtained in the online measurement for the natural speech were comparable to published data. In summary, the time and effort for the development of speech-recognition tests may be significantly reduced by using a TTS system. This finding provides the opportunity to develop new speech tests with a large amount of speech material.
Affiliation(s)
- Saskia Ibelings
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany; Medizinische Physik, Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4All, Oldenburg, Germany
- Thomas Brand
- Medizinische Physik, Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4All, Oldenburg, Germany
- Inga Holube
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany; Cluster of Excellence Hearing4All, Oldenburg, Germany
7
Karawani H, Jenkins K, Anderson S. Neural Plasticity Induced by Hearing Aid Use. Front Aging Neurosci 2022; 14:884917. [PMID: 35663566] [PMCID: PMC9160992] [DOI: 10.3389/fnagi.2022.884917]
Abstract
Age-related hearing loss is one of the most prevalent health conditions in older adults. Although hearing aid technology has advanced dramatically, a large percentage of older adults do not use hearing aids. This untreated hearing loss may accelerate declines in cognitive and neural function and dramatically affect the quality of life. Our previous findings have shown that the use of hearing aids improves cortical and cognitive function and offsets subcortical physiological decline. The current study tested the time course of neural adaptation to hearing aids over the course of 6 months and aimed to determine whether early measures of cortical processing predict the capacity for neural plasticity. Seventeen (9 females) older adults (mean age = 75 years) with age-related hearing loss with no history of hearing aid use were fit with bilateral hearing aids and tested in six testing sessions. Neural changes were observed as early as 2 weeks following the initial fitting of hearing aids. Increases in N1 amplitudes were observed as early as 2 weeks following the hearing aid fitting, whereas changes in P2 amplitudes were not observed until 12 weeks of hearing aid use. The findings suggest that increased audibility through hearing aids may facilitate rapid increases in cortical detection, but a longer time period of exposure to amplified sound may be required to integrate features of the signal and form auditory object representations. The results also showed a relationship between neural responses in earlier sessions and the change predicted after 6 months of the use of hearing aids. This study demonstrates rapid cortical adaptation to increased auditory input. Knowledge of the time course of neural adaptation may aid audiologists in counseling their patients, especially those who are struggling to adjust to amplification. A future comparison of a control group with no use of hearing aids that undergoes the same testing sessions as the study's group will validate these findings.
Affiliation(s)
- Hanin Karawani
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
| | - Kimberly Jenkins
- Walter Reed National Military Medical Center, Bethesda, MD, United States
| | - Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, MD, United States
8
van Wieringen A, Magits S, Francart T, Wouters J. Home-Based Speech Perception Monitoring for Clinical Use With Cochlear Implant Users. Front Neurosci 2021; 15:773427. [PMID: 34916902] [PMCID: PMC8669965] [DOI: 10.3389/fnins.2021.773427]
Abstract
Speech-perception testing is essential for monitoring outcomes with a hearing aid or cochlear implant (CI). However, clinical care is time-consuming and often challenging with an increasing number of clients. A potential approach to alleviating some of this clinical load, and possibly making room for other outcome measures, is to employ technologies that assess performance in the home environment. In this study, we investigated three different speech perception indices in the same 40 CI users: phoneme identification (vowels and consonants), digits in noise (DiN), and sentence recognition in noise (SiN). The first two tasks were implemented on a tablet and performed multiple times by each client in their home environment, while the sentence task was administered at the clinic. The outcomes showed that DiN assessed at home can serve as an alternative to SiN assessed at the clinic: DiN scores tracked the SiN scores, differing by 3–4 dB, and are useful for monitoring performance at regular intervals and for detecting changes in auditory performance. Phoneme identification in quiet also explains a significant part of speech perception in noise and provides additional information on the detectability and discriminability of speech cues. The added benefit of the phoneme identification task, which also proved easy to administer at home, is that it yields an information transmission analysis in addition to the summary score. Performance changes for the different indices can be interpreted by comparison against measurement error and help to target personalized rehabilitation. Altogether, home-based speech testing is reliable and is a powerful complement to clinical care for CI users.
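The information transmission analysis mentioned above is typically computed from a stimulus-response confusion matrix in the Miller-and-Nicely sense; a compact sketch with a hypothetical 3x3 phoneme confusion matrix (feature-level grouping omitted):

```python
import numpy as np

# Hypothetical confusion matrix: rows = presented phonemes, columns = responses
counts = np.array([[45,  3,  2],
                   [ 5, 40,  5],
                   [ 2,  6, 42]], dtype=float)

p = counts / counts.sum()           # joint probabilities p(stimulus, response)
px = p.sum(axis=1, keepdims=True)   # stimulus marginals
py = p.sum(axis=0, keepdims=True)   # response marginals

transmitted = np.where(p > 0, p * np.log2(p / (px * py)), 0.0).sum()  # bits transmitted
relative = transmitted / -(px * np.log2(px)).sum()                    # relative to stimulus entropy
print(f"Transmitted information: {transmitted:.2f} bits ({relative:.0%} relative)")
```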
Affiliation(s)
| | - Sara Magits
- Experimental ORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Tom Francart
- Experimental ORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
| | - Jan Wouters
- Experimental ORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
9
Zhang M, Moncrieff D, Johnston D, Parfitt M, Auld R. A preliminary study on speech recognition in noise training for children with hearing loss. Int J Pediatr Otorhinolaryngol 2021; 149:110843. [PMID: 34340007] [DOI: 10.1016/j.ijporl.2021.110843]
Abstract
PURPOSE The current study is a preliminary study to examine whether children with hearing loss would benefit from a speech recognition in noise training. METHODS Twenty-five children who wore hearing aids, cochlear implants, or bimodal devices from 4 to 12 years old participated in the study (experimental, n = 16; control, n = 9). The experimental group received a speech-in-noise training that took sixteen 15-min sessions spanning 8 to 12 weeks. The task involves recognizing monosyllabic target words and sentence keywords with various contextual cues in a multi-talker babble. The target stimuli were spoken by two females and fixed at 65 dB SPL throughout the training while the masker varied adaptively. Pre- and post-training tests measured the speech recognition thresholds of monosyllabic words and sentences spoken by two males in the babble noise. The test targets were presented at 55, 65, and 80 dB SPL. RESULTS The experimental group improved for word and sentence recognition in noise after training (Mean Difference = 2.4-2.5 dB, 2.7-4.2 dB, respectively). Training benefits were observed at trained (65 dB SPL) and untrained levels (55 and 80 dB SPL). The amount of post-training improvement was comparable between children using hearing aids and cochlear implants. CONCLUSIONS This preliminary study showed that children with hearing loss could benefit from a speech recognition in noise training that may fit into the children's school schedules. Training at a conversational level (65 dB SPL) transfers the benefit to levels 10-15 dB softer or louder. Training with female target talkers transfers the benefit to male target talkers. Overall, speech in noise training brings practical benefits for school-age children with hearing loss.
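A schematic sketch of an adaptive procedure of the kind described above, with the target fixed at 65 dB SPL and the babble level moving after each trial; the 1-down/1-up rule, step size, starting level, and the simulated listener are placeholders rather than the study's actual parameters.

```python
import math
import random

TARGET_LEVEL = 65.0    # dB SPL, fixed throughout (as in the training described above)
masker_level = 55.0    # dB SPL, hypothetical starting babble level
STEP = 2.0             # dB, hypothetical step size

def simulated_listener(snr, srt_true=-2.0, slope=0.3):
    """Stand-in for a real trial: correct with a probability that grows with SNR."""
    return random.random() < 1.0 / (1.0 + math.exp(-slope * (snr - srt_true)))

reversal_snrs, last_direction = [], None
while len(reversal_snrs) < 8:                    # stop after a fixed number of reversals
    snr = TARGET_LEVEL - masker_level
    correct = simulated_listener(snr)
    direction = "down" if correct else "up"      # 1-down/1-up tracks roughly 50% correct
    if last_direction is not None and direction != last_direction:
        reversal_snrs.append(snr)
    last_direction = direction
    masker_level += STEP if correct else -STEP   # louder babble after a correct response

print(f"Estimated SRT: {sum(reversal_snrs) / len(reversal_snrs):.1f} dB SNR")
```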
Affiliation(s)
- Mengchao Zhang
- Department of Communication Science and Disorders, University of Pittsburgh, 6035 Forbes Tower, Pittsburgh, PA, 15260, USA.
| | - Deborah Moncrieff
- School of Communication Sciences and Disorders, University of Memphis, 4055 N. Park Loop, Memphis, TN, 38152, USA
| | - Deborrah Johnston
- DePaul School for Hearing and Speech, 6202 Alder St, Pittsburgh, PA, 15206, USA
| | - Michelle Parfitt
- DePaul School for Hearing and Speech, 6202 Alder St, Pittsburgh, PA, 15206, USA
| | - Ruth Auld
- DePaul School for Hearing and Speech, 6202 Alder St, Pittsburgh, PA, 15206, USA
10
Jett B, Buss E, Best V, Oleson J, Calandruccio L. Does Sentence-Level Coarticulation Affect Speech Recognition in Noise or a Speech Masker? J Speech Lang Hear Res 2021; 64:1390-1403. [PMID: 33784185] [PMCID: PMC8608179] [DOI: 10.1044/2021_jslhr-20-00450]
Abstract
Purpose Three experiments were conducted to better understand the role of between-word coarticulation in masked speech recognition. Specifically, we explored whether naturally coarticulated sentences supported better masked speech recognition as compared to sentences derived from individually spoken concatenated words. We hypothesized that sentence recognition thresholds (SRTs) would be similar for coarticulated and concatenated sentences in a noise masker but would be better for coarticulated sentences in a speech masker. Method Sixty young adults participated (n = 20 per experiment). An adaptive tracking procedure was used to estimate SRTs in the presence of noise or two-talker speech maskers. Targets in Experiments 1 and 2 were matrix-style sentences, while targets in Experiment 3 were semantically meaningful sentences. All experiments included coarticulated and concatenated targets; Experiments 2 and 3 included a third target type, concatenated keyword-intensity-matched (KIM) sentences, in which the words were concatenated but individually scaled to replicate the intensity contours of the coarticulated sentences. Results Regression analyses evaluated the main effects of target type, masker type, and their interaction. Across all three experiments, effects of target type were small (< 2 dB). In Experiment 1, SRTs were slightly poorer for coarticulated than concatenated sentences. In Experiment 2, coarticulation facilitated speech recognition compared to the concatenated KIM condition. When listeners had access to semantic context (Experiment 3), a coarticulation benefit was observed in noise but not in the speech masker. Conclusions Overall, differences between SRTs for sentences with and without between-word coarticulation were small. Beneficial effects of coarticulation were only observed relative to the concatenated KIM targets; for unscaled concatenated targets, it appeared that consistent audibility across the sentence offsets any benefit of coarticulation. Contrary to our hypothesis, effects of coarticulation generally were not more pronounced in speech maskers than in noise maskers.
Affiliation(s)
- Brandi Jett
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
| | - Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
| | - Virginia Best
- Department of Speech, Language and Hearing Sciences, Boston University, MA
| | - Jacob Oleson
- Department of Biostatistics, University of Iowa, Iowa City
| | - Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
11
Schramm D, Chen J, Morris DP, Shoman N, Philippon D, Cayé-Thomasen P, Hoen M, Karoui C, Laplante-Lévesque A, Gnansia D. Clinical efficiency and safety of the Oticon Medical Neuro cochlear implant system: a multicenter prospective longitudinal study. Expert Rev Med Devices 2020; 17:959-967. [PMID: 32885711] [DOI: 10.1080/17434440.2020.1814741]
Abstract
OBJECTIVE This prospective longitudinal cohort study at six tertiary referral centers in Canada and Denmark describes the clinical efficiency and surgical safety of cochlear implantation with the Oticon Medical Neuro cochlear implant system, including the Neuro Zti implant, the EVO electrode array, and the Neuro One sound processor. METHODS Patients were adult cochlear implant candidates with bilateral sensorineural hearing loss. RESULTS The mean HINT scores in quiet pre-operatively and at 3, 6, and 12 months post-activation were 13%, 58%, 67%, and 72%, respectively, and in noise (+10 dB SNR) 13%, 46%, 53%, and 59%, respectively. The mean improvement from baseline to 6 months post-activation was 54% in quiet and 40% in noise. The surgical major complication incidence rate was 0% and the post-surgical major complication incidence rate (until 12 months post-activation) was 4%. There was no adverse event that was fatal, that required explantation, or that resulted in sound processor nonuse, and no implant failure. CONCLUSION Cochlear implantation with the Oticon Medical Neuro system enables speech identification both in quiet and in noise and audiologic outcomes continue to improve in the year following activation. No substantial adverse events occurred during the surgical implantation procedure and during the 12 months post-activation.
Affiliation(s)
- David Schramm
- Department of Otolaryngology - Head and Neck Surgery, University of Ottawa , Ottawa, Canada
| | - Joseph Chen
- Department ofOtolaryngology- Head & Neck Surgery, Sunnybrook Hospital , Toronto, Canada
| | - David P Morris
- Division of Otolaryngology -Head & Neck Surgery, Continuing Professional Development, Dalhousie University , Halifax, Canada
| | - Nael Shoman
- Division of ENT, Head and Neck Surgery, Royal University Hospital , Saskatoon, Canada
| | - Daniel Philippon
- Département d'ophtalmologie et d'oto-rhino-laryngologie - chirurgie cervico-faciale, Quebec University Hospital , Quebec, Canada
| | - Per Cayé-Thomasen
- Afdeling for Øre-Næse-Halskirurgi og Audiologi, Copenhagen University Hospital Rigshospitalet , Copenhagen, Denmark.,Faculty of Health and Medical Sciences, University of Copenhagen , Copenhagen, Denmark
| | - Michel Hoen
- Clinical Evidence, Oticon Medical , Smørum, Denmark
| | | | - Ariane Laplante-Lévesque
- Clinical Evidence, Oticon Medical , Smørum, Denmark.,Department of Behavioural Sciences and Learning, Linköping University , Linköping, Sweden
| | - Dan Gnansia
- Research & Technology, Oticon Medical , Smørum, Denmark
12
Willberg T, Sivonen V, Hurme S, Aarnisalo AA, Löppönen H, Dietz A. The long-term learning effect related to the repeated use of the Finnish matrix sentence test and the Finnish digit triplet test. Int J Audiol 2020; 59:753-762. [PMID: 32338546] [DOI: 10.1080/14992027.2020.1753893]
Abstract
Objectives: To assess whether there are learning-related improvements in the speech reception thresholds (SRTs) for the Finnish matrix sentence test (FMST) and the Finnish digit triplet test (FDTT) with repeated use over 12 months. Design: Test sessions were scheduled at 0, 1, 3, 6 and 12 months, and each session included five FMST measurements and four FDTT measurements. The within-session and inter-session improvements in SRTs were analysed with a linear mixed model. Study sample: Fifteen young normal-hearing participants. Results: Statistically significant mean improvements of 2.0 dB SNR and 1.2 dB SNR were detected for the FMST and the FDTT, respectively, over the 12-month follow-up period. For the FMST, the majority of the improvement occurred during the first two test sessions. For the FDTT, statistically significant differences were detected only in comparison to the first test session and to the first measurement of every session over the 12-month follow-up. Conclusions: Repeated use of the FMST led to significant learning-related improvements, but the improvements appeared to plateau by the third test session. For the FDTT, the overall improvements were smaller, but a significant within-session difference between the first and subsequent FDTT measurements persisted throughout the test sessions.
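A minimal sketch of the kind of linear mixed model analysis mentioned above, using synthetic data and the statsmodels formula interface; the fixed- and random-effect structure actually used in the study may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic SRTs: 15 listeners x 5 sessions x 5 within-session measurements, small learning trend
rows = []
for subject in range(15):
    offset = rng.normal(0, 0.8)                      # listener-specific baseline
    for session in range(5):
        for measurement in range(5):
            srt = -6.0 - 0.4 * session - 0.1 * measurement + offset + rng.normal(0, 0.6)
            rows.append(dict(subject=subject, session=session, measurement=measurement, srt=srt))
df = pd.DataFrame(rows)

# Random intercept per listener; fixed effects of session and within-session measurement number
result = smf.mixedlm("srt ~ session + measurement", df, groups=df["subject"]).fit()
print(result.summary())
```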
Affiliation(s)
- Tytti Willberg
- Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland.,Department of Otorhinolaryngology, Turku University Hospital, Turku, Finland
| | - Ville Sivonen
- Department of Otorhinolaryngology, Helsinki University Hospital, Helsinki, Finland
| | - Saija Hurme
- Department of Biostatistics, University of Turku, Turku, Finland
| | - Antti A Aarnisalo
- Department of Otorhinolaryngology, Helsinki University Hospital, Helsinki, Finland
| | - Heikki Löppönen
- Institute of Clinical Medicine, University of Eastern Finland, Kuopio, Finland.,Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
| | - Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
13
Moradi S, Lidestam B, Ning Ng EH, Danielsson H, Rönnberg J. Perceptual Doping: An Audiovisual Facilitation Effect on Auditory Speech Processing, From Phonetic Feature Extraction to Sentence Identification in Noise. Ear Hear 2019; 40:312-327. [PMID: 29870521] [PMCID: PMC6400397] [DOI: 10.1097/aud.0000000000000616]
Abstract
OBJECTIVE We have previously shown that the gain provided by prior audiovisual (AV) speech exposure for subsequent auditory (A) sentence identification in noise is relatively larger than that provided by prior A speech exposure. We have called this effect "perceptual doping." Specifically, prior AV speech processing dopes (recalibrates) the phonological and lexical maps in the mental lexicon, which facilitates subsequent phonological and lexical access in the A modality, separately from other learning and priming effects. In this article, we use data from the n200 study and aim to replicate and extend the perceptual doping effect using two different A and two different AV speech tasks and a larger sample than in our previous studies. DESIGN The participants were 200 hearing aid users with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. There were four speech tasks in the n200 study that were presented in both A and AV modalities (gated consonants, gated vowels, vowel duration discrimination, and sentence identification in noise tasks). The modality order of speech presentation was counterbalanced across participants: half of the participants completed the A modality first and the AV modality second (A1-AV2), and the other half completed the AV modality and then the A modality (AV1-A2). Based on the perceptual doping hypothesis, which assumes that the gain of prior AV exposure will be relatively larger relative to that of prior A exposure for subsequent processing of speech stimuli, we predicted that the mean A scores in the AV1-A2 modality order would be better than the mean A scores in the A1-AV2 modality order. We therefore expected a significant difference in terms of the identification of A speech stimuli between the two modality orders (A1 versus A2). As prior A exposure provides a smaller gain than AV exposure, we also predicted that the difference in AV speech scores between the two modality orders (AV1 versus AV2) may not be statistically significantly different. RESULTS In the gated consonant and vowel tasks and the vowel duration discrimination task, there were significant differences in A performance of speech stimuli between the two modality orders. The participants' mean A performance was better in the AV1-A2 than in the A1-AV2 modality order (i.e., after AV processing). In terms of mean AV performance, no significant difference was observed between the two orders. In the sentence identification in noise task, a significant difference in the A identification of speech stimuli between the two orders was observed (A1 versus A2). In addition, a significant difference in the AV identification of speech stimuli between the two orders was also observed (AV1 versus AV2). This finding was most likely because of a procedural learning effect due to the greater complexity of the sentence materials or a combination of procedural learning and perceptual learning due to the presentation of sentential materials in noisy conditions. CONCLUSIONS The findings of the present study support the perceptual doping hypothesis, as prior AV relative to A speech exposure resulted in a larger gain for the subsequent processing of speech stimuli. For complex speech stimuli that were presented in degraded listening conditions, a procedural learning effect (or a combination of procedural learning and perceptual learning effects) also facilitated the identification of speech stimuli, irrespective of whether the prior modality was A or AV.
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Björn Lidestam
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Elaine Hoi Ning Ng
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Oticon A/S, Smørum, Denmark
| | - Henrik Danielsson
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
14
Kressner AA, May T, Dau T. Effect of Noise Reduction Gain Errors on Simulated Cochlear Implant Speech Intelligibility. Trends Hear 2019; 23:2331216519825930. [PMID: 30755108] [PMCID: PMC6378641] [DOI: 10.1177/2331216519825930]
Abstract
It has been suggested that the most important factor for obtaining high speech intelligibility in noise with cochlear implant (CI) recipients is to preserve the low-frequency amplitude modulations of speech across time and frequency by, for example, minimizing the amount of noise in the gaps between speech segments. In contrast, it has also been argued that the transient parts of the speech signal, such as speech onsets, provide the most important information for speech intelligibility. The present study investigated the relative impact of these two factors on the potential benefit of noise reduction for CI recipients by systematically introducing noise estimation errors within speech segments, speech gaps, and the transitions between them. The introduction of these noise estimation errors directly induces errors in the noise reduction gains within each of these regions. Speech intelligibility in both stationary and modulated noise was then measured using a CI simulation tested on normal-hearing listeners. The results suggest that minimizing noise in the speech gaps can improve intelligibility, at least in modulated noise. However, significantly larger improvements were obtained when both the noise in the gaps was minimized and the speech transients were preserved. These results imply that the ability to identify the boundaries between speech segments and speech gaps may be one of the most important factors for a noise reduction algorithm because knowing the boundaries makes it possible to minimize the noise in the gaps as well as enhance the low-frequency amplitude modulations of the speech.
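A much-simplified sketch of the manipulation described above: frame-wise Wiener-like gains are computed from the true SNR of a synthetic noisy signal, and estimation errors are then injected only into the speech gaps before the gains are applied. The frame length, gain rule, gap criterion, and error magnitude are arbitrary illustrative choices, not the study's processing chain.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, frame = 16000, 256

# Synthetic "speech" (on/off tone bursts) in stationary noise
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 500 * t) * (np.sin(2 * np.pi * 3 * t) > 0)
noise = 0.3 * rng.standard_normal(fs)
noisy = speech + noise

def frame_energy(x):
    n = len(x) // frame
    return (x[: n * frame].reshape(n, frame) ** 2).mean(axis=1)

snr = frame_energy(speech) / np.maximum(frame_energy(noise), 1e-12)
gains = snr / (1.0 + snr)                  # Wiener-like gain per frame from the true SNR
is_gap = frame_energy(speech) < 1e-6       # frames that contain no speech

# Corrupt the gains only in the gaps (an idealized "noise estimation error in the gaps")
gains_err = gains.copy()
gains_err[is_gap] = np.clip(gains_err[is_gap] + 0.5 * rng.random(is_gap.sum()), 0.0, 1.0)

n = len(gains) * frame
enhanced = (noisy[:n].reshape(-1, frame) * gains_err[:, None]).reshape(-1)
print(f"{is_gap.sum()} of {len(gains)} frames treated as speech gaps")
```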
Affiliation(s)
- Abigail A Kressner
- 1 Hearing Systems, Department of Health Technology, Technical University of Denmark, Denmark
| | - Tobias May
- 1 Hearing Systems, Department of Health Technology, Technical University of Denmark, Denmark
| | - Torsten Dau
- 1 Hearing Systems, Department of Health Technology, Technical University of Denmark, Denmark
15
de Graaff F, Huysmans E, Merkus P, Theo Goverts S, Smits C. Assessment of speech recognition abilities in quiet and in noise: a comparison between self-administered home testing and testing in the clinic for adult cochlear implant users. Int J Audiol 2018; 57:872-880. [DOI: 10.1080/14992027.2018.1506168]
Affiliation(s)
- Feike de Graaff
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, Netherlands
| | - Elke Huysmans
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, Netherlands
| | - Paul Merkus
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, Netherlands
| | - S. Theo Goverts
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, Netherlands
| | - Cas Smits
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology - Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health research institute, De Boelelaan 1117, Amsterdam, Netherlands
16
Pitchaimuthu A, Arora A, Bhat JS, Kanagokar V. Effect of systematic desensitization training on acceptable noise levels in adults with normal hearing sensitivity. Noise Health 2018; 20:83-89. [PMID: 29785973] [PMCID: PMC5965005] [DOI: 10.4103/nah.nah_58_17]
Abstract
Context: The willingness of a person to accept noise while listening to speech can be measured using the acceptable noise level (ANL) test. Individuals with poor ANL are unlikely to become successful hearing aid users. Hence, it is important to enhance an individual's ability to accept noise. The current study investigated whether systematic desensitization training can improve the ANL in individuals with high ANL. Aims: To investigate the effect of systematic desensitization training on ANLs in individuals with normal hearing sensitivity. Settings and Design: Observational study design. Materials and Methods: Thirty-eight normally hearing adults aged 18–25 years participated in the study. Baseline ANL was first measured for all participants, who were then categorized into three groups: low ANL, mid ANL, and high ANL. The participants with high ANL were trained using a systematic desensitization procedure, whereas individuals with low and mid ANL did not undergo any training and served as comparison groups. After the training period, ANL was measured again for all participants. Statistical Analysis Used: Repeated-measures analysis of variance with follow-up paired t tests. Results: Analysis revealed a significant main effect of systematic desensitization training on ANL, with a significant improvement in participants with high ANL. There was no significant difference in ANL between the baseline and follow-up sessions in individuals with low and mid ANL. Conclusions: Systematic desensitization training can improve ANL, thereby enhancing an individual's ability to accept noise. This improved ANL may support better hearing aid fitting and acceptance.
Affiliation(s)
- Arivudainambi Pitchaimuthu
- Department of Audiology and Speech Language Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Mangalore, Karnataka, India
| | - Anshul Arora
- Advanced Behavioural Learning Environment (ABLE UK), Dubai Healthcare City, Dubai, UAE
| | - Jayashree S Bhat
- Department of Audiology and Speech Language Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Mangalore, Karnataka, India
| | - Vibha Kanagokar
- Department of Audiology and Speech Language Pathology, Kasturba Medical College, Manipal Academy of Higher Education, Mangalore, Karnataka, India
17
Tai Y, Husain FT. Right-Ear Advantage for Speech-in-Noise Recognition in Patients with Nonlateralized Tinnitus and Normal Hearing Sensitivity. J Assoc Res Otolaryngol 2018; 19:211-221. [PMID: 29181615] [PMCID: PMC5878148] [DOI: 10.1007/s10162-017-0647-3]
Abstract
Despite having normal hearing sensitivity, patients with chronic tinnitus may experience more difficulty recognizing speech in adverse listening conditions as compared to controls. However, the association between the characteristics of tinnitus (severity and loudness) and speech recognition remains unclear. In this study, the Quick Speech-in-Noise test (QuickSIN) was conducted monaurally on 14 patients with bilateral tinnitus and 14 age- and hearing-matched adults to determine the relation between tinnitus characteristics and speech understanding. Further, Tinnitus Handicap Inventory (THI), tinnitus loudness magnitude estimation, and loudness matching were obtained to better characterize the perceptual and psychological aspects of tinnitus. The patients reported low THI scores, with most participants in the slight handicap category. Significant between-group differences in speech-in-noise performance were only found at the 5-dB signal-to-noise ratio (SNR) condition. The tinnitus group performed significantly worse in the left ear than in the right ear, even though bilateral tinnitus percept and symmetrical thresholds were reported in all patients. This between-ear difference is likely influenced by a right-ear advantage for speech sounds, as factors related to testing order and fatigue were ruled out. Additionally, significant correlations found between SNR loss in the left ear and tinnitus loudness matching suggest that perceptual factors related to tinnitus had an effect on speech-in-noise performance, pointing to a possible interaction between peripheral and cognitive factors in chronic tinnitus. Further studies, that take into account both hearing and cognitive abilities of patients, are needed to better parse out the effect of tinnitus in the absence of hearing impairment.
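For context, QuickSIN sentence lists are conventionally scored by counting the key words repeated correctly (five per sentence, six sentences per list, SNR stepping from 25 down to 0 dB) and converting the total to SNR loss; a one-list sketch of that scoring rule:

```python
def quicksin_snr_loss(total_keywords_correct):
    """Conventional one-list QuickSIN scoring: SNR loss = 25.5 - key words correct (0-30)."""
    return 25.5 - total_keywords_correct

print(quicksin_snr_loss(24))   # e.g. 24/30 key words correct -> 1.5 dB SNR loss
```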
Affiliation(s)
- Yihsin Tai
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. Sixth Street, Champaign, IL, 61820, USA.
| | - Fatima T Husain
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, 901 S. Sixth Street, Champaign, IL, 61820, USA.
- Neuroscience Program, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Champaign, IL, USA.
18
Abstract
BACKGROUND The impact of hearing loss on the ability to participate in verbal communication can be directly quantified through the use of speech audiometry. Advances in technology and the associated reduction in background noise interference for hearing aids have allowed the reproduction of very complex acoustic environments, analogous to those in which conversations occur in daily life. These capabilities have led to the creation of numerous advanced speech audiometry measures, test procedures and environments, far beyond the presentation of isolated words in an otherwise noise-free testing booth. OBJECTIVE The aim of this study was to develop a set of systematic criteria for the appropriate selection of speech audiometric material, which are presented in this article in relationship to the most widely used test procedures. RESULTS Before an appropriate speech test can be selected from the numerous procedures available, the precise aims of the evaluation should be basically defined. Specific test characteristics, such as validity, objectivity, reliability and sensitivity are important for the selection of the correct test for the specific goals. CONCLUSION A concrete understanding of the goals of the evaluation as well as of specific test criteria play a crucial role in the selection of speech audiometry testing procedures.
19
Communicating in Challenging Environments: Noise and Reverberation. In: The Frequency-Following Response. 2017. [DOI: 10.1007/978-3-319-47944-6_8]
20
Zaballos MTP, Plasencia DP, González MLZ, de Miguel AR, Macías ÁR. Air traffic controllers' long-term speech-in-noise training effects: A control group study. Noise Health 2016; 18:376-381. [PMID: 27991470] [PMCID: PMC5227019] [DOI: 10.4103/1463-1741.195804]
Abstract
Introduction: Speech perception in noise relies on the capacity of the auditory system to process complex sounds using sensory and cognitive skills. The possibility that these can be trained during adulthood is of special interest in auditory disorders, where speech-in-noise perception becomes compromised. Air traffic controllers (ATC) are constantly exposed to radio communication, a situation that seems to produce auditory learning. The objective of this study was to quantify this effect. Subjects and Methods: 19 ATC and 19 normal-hearing individuals underwent a speech-in-noise test with three signal-to-noise ratios (SNR): +5, 0 and −5 dB. Noise and speech were presented through two different loudspeakers positioned in the azimuthal plane. Speech tokens were presented at 65 dB SPL, while the white noise was presented at 60, 65 and 70 dB SPL, respectively. Results: The ATC outperformed the control group in all conditions (P<0.05 in ANOVA and Mann-Whitney U tests). Group differences were largest in the most difficult condition, SNR = −5 dB. However, no correlation between experience and performance was found for any of the conditions tested. The reason might be that ceiling performance is reached much faster than the minimum recorded experience of 5 years, although intrinsic cognitive abilities cannot be disregarded. Discussion: The ATC demonstrated an enhanced ability to hear speech in challenging listening environments. This study provides evidence that long-term auditory training is useful in achieving better speech-in-noise understanding even in adverse conditions, although good cognitive abilities are likely to be a basic requirement for this training to be effective. Conclusion: The ATC outperformed the control group in all conditions, providing evidence that long-term auditory training is useful for achieving better speech-in-noise understanding even in adverse conditions.
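The three SNR conditions above follow directly from holding speech at 65 dB SPL and presenting the noise at 60, 65, and 70 dB SPL; a trivial check of that arithmetic:

```python
speech_level = 65                      # dB SPL, fixed
for noise_level in (60, 65, 70):       # dB SPL
    print(f"noise {noise_level} dB SPL -> SNR {speech_level - noise_level:+d} dB")
```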
Affiliation(s)
- Maria T P Zaballos
- Laboratorio de Psicoacústica, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
| | - Daniel P Plasencia
- ENT Department & Departamento de CC Quirúrgicas, Universidad de Las Palmas de Gran Canaria, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
| | - María L Z González
- ENT Department, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
| | - Angel R de Miguel
- Instituto Universitario de Sistemas Inteligentes y Aplicaciones Numéricas, Universidad de Las Palmas de Gran Canaria, Las Palmas de Gran Canaria, Las Palmas, Spain
| | - Ángel R Macías
- ENT Department & Departamento de CC Quirúrgicas, Universidad de Las Palmas de Gran Canaria, Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Las Palmas, Spain
21
22
Masked sentence recognition assessed at ascending target-to-masker ratios: modest effects of repeating stimuli. Ear Hear 2016; 36:e14-22. [PMID: 25329373] [DOI: 10.1097/aud.0000000000000113]
Abstract
OBJECTIVES Masked sentence recognition is typically evaluated by presenting a novel stimulus on each trial. As a consequence, experiments calling for replicate estimates in multiple conditions require large corpora of stimuli. The present study evaluated the consequences of repeating sentence-plus-masker pairs at ascending target-to-masker ratios (TMRs). The hypothesis was that performance on each trial would be consistent with the cues available to the listener at the associated TMR, resulting in similar estimates of threshold and slope for procedures using novel versus repeated sentences within an ascending-TMR block of trials. DESIGN A group of 37 normal-hearing young adults participated. Each listener was tested in the presence of one of three maskers: a multitalker babble, a speech-shaped noise, or an amplitude-modulated speech-shaped noise. There were two data collection procedures, both proceeding in blocks of trials with ascending TMRs. The novel-stimulus procedure used five lists of AzBio sentences, one presented at each of five TMRs, with a novel sentence and masker sample on each trial. The repeated-stimulus procedure used a single list of AzBio sentences, with each sentence presented at multiple TMRs, progressing from low to high; each sentence was paired with a single masker sample, such that only the TMR changed within blocks of repeated stimuli. Listeners completed one run with the novel-stimulus procedure and five runs with the repeated-stimulus procedure. The resulting values of percent correct at each TMR were fitted with a logit function to estimate threshold and psychometric function slope. RESULTS The novel- and repeated-stimulus procedures resulted in generally similar data patterns. After controlling for effects related to the order in which listeners completed the six data collection runs, mean thresholds were slightly higher (<0.5 dB) for the repeated-stimulus procedure than the novel-stimulus procedure in all three maskers. Function slopes for the multitalker babble and amplitude-modulated noise maskers were slightly shallower using the repeated-stimulus than the novel-stimulus procedure, but slopes were comparable for the speech-shaped noise. The quality of psychometric function fits was significantly better for the repeated-stimulus than the novel-stimulus procedure, even when comparing a single run of the repeated-stimulus procedure (using one list) to a run of the novel-stimulus procedure (using five lists). CONCLUSIONS Repeating sentences at ascending TMRs is an efficient method for estimating thresholds and psychometric function slopes, both in terms of the number of sentences and the number of trials.
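For readers unfamiliar with the fitting step described above, the following sketch fits a two-parameter logit function to percent-correct data at ascending TMRs to estimate threshold and slope; the data points and the exact parameterization are illustrative assumptions, not the authors' fitting code.

```python
# Minimal sketch: fit a logistic psychometric function to percent-correct
# data measured at ascending TMRs, estimating threshold and slope.
import numpy as np
from scipy.optimize import curve_fit

tmrs = np.array([-12.0, -9.0, -6.0, -3.0, 0.0])       # dB, hypothetical
p_correct = np.array([0.05, 0.22, 0.55, 0.86, 0.97])  # proportion correct, hypothetical

def logistic(tmr, threshold, slope):
    # threshold = TMR at 50% correct; slope = proportion per dB at threshold
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (tmr - threshold)))

(threshold, slope), _ = curve_fit(logistic, tmrs, p_correct, p0=(-6.0, 0.1))
print(f"Threshold ≈ {threshold:.1f} dB TMR, slope ≈ {slope:.3f}/dB")
```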
23
Speech perception in older hearing impaired listeners: benefits of perceptual training. PLoS One 2015; 10:e0113965. [PMID: 25730330 PMCID: PMC4346400 DOI: 10.1371/journal.pone.0113965] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2014] [Accepted: 10/31/2014] [Indexed: 11/19/2022] Open
Abstract
Hearing aids (HAs) only partially restore the ability of older hearing impaired (OHI) listeners to understand speech in noise, due in large part to persistent deficits in consonant identification. Here, we investigated whether adaptive perceptual training would improve consonant-identification in noise in sixteen aided OHI listeners who underwent 40 hours of computer-based training in their homes. Listeners identified 20 onset and 20 coda consonants in 9,600 consonant-vowel-consonant (CVC) syllables containing different vowels (/ɑ/, /i/, or /u/) and spoken by four different talkers. Consonants were presented at three consonant-specific signal-to-noise ratios (SNRs) spanning a 12 dB range. Noise levels were adjusted over training sessions based on d’ measures. Listeners were tested before and after training to measure (1) changes in consonant-identification thresholds using syllables spoken by familiar and unfamiliar talkers, and (2) sentence reception thresholds (SeRTs) using two different sentence tests. Consonant-identification thresholds improved gradually during training. Laboratory tests of d’ thresholds showed an average improvement of 9.1 dB, with 94% of listeners showing statistically significant training benefit. Training normalized consonant confusions and improved the thresholds of some consonants into the normal range. Benefits were equivalent for onset and coda consonants, syllables containing different vowels, and syllables presented at different SNRs. Greater training benefits were found for hard-to-identify consonants and for consonants spoken by familiar than unfamiliar talkers. SeRTs, tested with simple sentences, showed less elevation than consonant-identification thresholds prior to training and failed to show significant training benefit, although SeRT improvements did correlate with improvements in consonant thresholds. We argue that the lack of SeRT improvement reflects the dominant role of top-down semantic processing in processing simple sentences and that greater transfer of benefit would be evident in the comprehension of more unpredictable speech material.
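The training procedure above adjusts noise levels based on d′ measures. As a generic illustration of that metric (not the study's own multi-alternative analysis), the sketch below computes d′ from hit and false-alarm counts, which are invented here.

```python
# Generic d-prime computation from hit and false-alarm rates, shown only to
# illustrate the d' measure tracked during training; the counts are hypothetical.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Loglinear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(f"d' = {d_prime(hits=45, misses=5, false_alarms=8, correct_rejections=42):.2f}")
```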
24
Hiselius P, Edvall N, Reimers D. To measure the impact of hearing protectors on the perception of speech in noise. Int J Audiol 2014; 54 Suppl 1:S3-8. [PMID: 25549165 DOI: 10.3109/14992027.2014.973539] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE To propose and evaluate a new method for assessing the potential impact on speech intelligibility when wearing a hearing protection device (HPD) in a noisy environment. DESIGN The method is based on a self-adaptive procedure for finding the speech reception threshold (SRT), using speech material from the Callsign Acquisition Test (CAT) presented at a constant level while the level of a background noise is adjusted. A key point is to examine primarily the impact of the HPD, i.e., the difference between occluded and unoccluded SRTs, presented as the speech intelligibility impact level. STUDY SAMPLE A total of 31 test subjects. RESULTS The method is shown to be stable, with minimal learning effects, and capable of detecting differences between hearing protection devices. It is also shown that low-attenuation passive HPDs are likely to have a very small effect on speech intelligibility in noise, and that an electronic HPD with a level-dependent function has the potential to improve intelligibility. CONCLUSIONS The results are encouraging regarding the precision, repeatability, and applicability of the proposed method.
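A minimal sketch of the headline metric described above, the speech intelligibility impact level as the occluded-minus-unoccluded SRT difference; the SRT values and sign convention are assumptions for illustration only.

```python
# Speech intelligibility impact level = occluded SRT - unoccluded SRT.
# Hypothetical values; the paper's exact convention and averaging may differ.
def speech_intelligibility_impact(srt_occluded_db, srt_unoccluded_db):
    return srt_occluded_db - srt_unoccluded_db

# Example: wearing the HPD raises the SRT from -7.5 to -6.2 dB SNR.
print(f"Impact level = {speech_intelligibility_impact(-6.2, -7.5):+.1f} dB")
```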
Affiliation(s)
- Per Hiselius
- 3M Personal Safety Division, 3M Svenska AB, Box, Värnamo, Sweden
25
Zhu S, Wong LLN, Chen F. Development and validation of a new Mandarin tone identification test. Int J Pediatr Otorhinolaryngol 2014; 78:2174-82. [PMID: 25455525 DOI: 10.1016/j.ijporl.2014.10.004] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/12/2014] [Revised: 09/30/2014] [Accepted: 10/04/2014] [Indexed: 10/24/2022]
Abstract
OBJECTIVES The objectives of this study were to develop a new Mandarin tone identification test (MTIT) to assess the Mandarin tone identification ability of children with hearing impairment (HI) aged around 7 years, and to evaluate the reliability and sensitivity of the MTIT. METHODS The word materials used in the MTIT were developed in Phase I. Monosyllables were chosen to represent the daily repertoire of young children and to avoid the influence of co-articulation and intonation. Each test stimulus set contained four words: one target, one containing a contrastive tone, and two unrelated distracters. All words were depicted using simple pictures, and the test targets in quiet or in noise were presented as recorded stimuli via custom software. Phase II evaluated the reliability and sensitivity of the MTIT. Participants were 50 normal-hearing native Mandarin speakers around 7 years of age. RESULTS In Phase I, the MTIT was developed as described above. The final test consists of 51 words that are within the vocabulary repertoire of children aged 7 years. In Phase II, with the Mandarin tone identification scores collected from 50 children, a repeated-measures ANOVA showed a significant main effect of S/N on MTIT performance (p < 0.001). Pairwise comparisons revealed significant differences in performance across the five S/N conditions (p < 0.01) when the S/N varied from −30 to −10 dB. Cronbach's alpha at −15 dB S/N was 0.66, suggesting satisfactory internal consistency reliability. A paired-samples t-test showed no significant difference between test-retest scores across the five S/N conditions (p > 0.05). CONCLUSIONS Compared with the available Mandarin tone identification tools, the MTIT systematically evaluates tone identification performance in noise for normal-hearing children aged around 7 years. Results also showed satisfactory internal consistency reliability, good test-retest reliability, and good sensitivity. The MTIT could be used to evaluate the tone perception ability of children with hearing impairment and to help design hearing rehabilitation strategies for this population at an age critical for language learning.
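The internal-consistency statistic reported above (Cronbach's alpha) can be computed as sketched below from a listener-by-item score matrix; the demo matrix is random, so its alpha will be near zero, unlike correlated real responses.

```python
# Minimal sketch of Cronbach's alpha from a listeners x items score matrix.
import numpy as np

def cronbach_alpha(scores):
    # scores: 2-D array, rows = listeners, columns = test items (0/1 correct)
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1.0)) * (1.0 - item_variances.sum() / total_variance)

rng = np.random.default_rng(1)
demo_scores = rng.integers(0, 2, size=(50, 51))  # hypothetical: 50 children x 51 items
print(f"alpha = {cronbach_alpha(demo_scores):.2f}")
```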
Affiliation(s)
- Shufeng Zhu
- Division of Speech and Hearing Sciences, the University of Hong Kong, Hong Kong.
- Lena L N Wong
- Division of Speech and Hearing Sciences, the University of Hong Kong, Hong Kong
- Fei Chen
- Division of Speech and Hearing Sciences, the University of Hong Kong, Hong Kong
26
Hey M, Hocke T, Hedderich J, Müller-Deile J. Investigation of a matrix sentence test in noise: Reproducibility and discrimination function in cochlear implant patients. Int J Audiol 2014; 53:895-902. [DOI: 10.3109/14992027.2014.938368] [Citation(s) in RCA: 48] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
27
Stuart A, Butler AK. No learning effect observed for reception thresholds for sentences in noise. Am J Audiol 2014; 23:227-31. [PMID: 24700076 DOI: 10.1044/2014_aja-14-0005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
PURPOSE This study examined reception thresholds for sentences (RTSs) as a function of test session (N = 5) and noise type (continuous and interrupted) in normal-hearing adults. It was hypothesized that RTSs would be superior in interrupted noise and would be stable across repeated testing. METHOD Twenty-five normal-hearing adults participated. RTSs were determined with Hearing in Noise Test sentences in continuous and interrupted noise presented at 65 dBA. An adaptive technique was used in which sentence intensity was varied to converge on the level yielding 50% correct performance. Sentence lists were counterbalanced, with 5 unique lists in both continuous and interrupted noise. RESULTS RTS signal-to-noise ratios were significantly better in the interrupted noise (p < .0001). There was no effect of test session (p = .12) and no Test Session × Noise interaction (p = .13). CONCLUSIONS Stable RTS signal-to-noise ratios across test sessions in both noise types are consistent with the notion that no learning effect was present in noise. Further, one may conclude that Hearing in Noise Test sentences provide stable measures of sentence recognition thresholds in normal-hearing adults over time, provided that the sentences are unique (i.e., not repeated).
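In the spirit of the adaptive technique described above, the simulation below tracks a simple up-down rule that converges on the 50%-correct level; the listener model, step size, and threshold estimate (mean of later presentation levels) are assumptions, not the HINT algorithm verbatim.

```python
# Illustrative 1-down/1-up adaptive track converging on the 50%-correct SNR.
import numpy as np

rng = np.random.default_rng(2)
true_srt = -3.0          # dB SNR, hypothetical listener threshold
slope = 0.15             # psychometric slope, proportion per dB
level, step = 0.0, 2.0   # starting SNR and step size in dB

levels = []
for trial in range(30):
    levels.append(level)
    p_correct = 1.0 / (1.0 + np.exp(-4.0 * slope * (level - true_srt)))
    correct = rng.random() < p_correct
    level += -step if correct else step   # correct -> harder, wrong -> easier

srt_estimate = np.mean(levels[10:])       # discard the initial approach
print(f"Estimated RTS ≈ {srt_estimate:.1f} dB SNR (true {true_srt} dB)")
```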
28
Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence. PLoS One 2013; 8:e62836. [PMID: 23675431 PMCID: PMC3651281 DOI: 10.1371/journal.pone.0062836] [Citation(s) in RCA: 150] [Impact Index Per Article: 13.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2012] [Accepted: 03/26/2013] [Indexed: 02/06/2023] Open
Abstract
Background Auditory training involves active listening to auditory stimuli and aims to improve performance in auditory tasks. As such, auditory training is a potential intervention for the management of people with hearing loss. Objective This systematic review (PROSPERO 2011: CRD42011001406) evaluated the published evidence-base for the efficacy of individual computer-based auditory training to improve speech intelligibility, cognition and communication abilities in adults with hearing loss, with or without hearing aids or cochlear implants. Methods A systematic search of eight databases and key journals identified 229 articles published since 1996, 13 of which met the inclusion criteria. Data were independently extracted and reviewed by the two authors. Study quality was assessed using ten pre-defined scientific and intervention-specific measures. Results Auditory training resulted in improved performance for trained tasks in 9/10 articles that reported on-task outcomes. Although significant generalisation of learning was shown to untrained measures of speech intelligibility (11/13 articles), cognition (1/1 articles) and self-reported hearing abilities (1/2 articles), improvements were small and not robust. Where reported, compliance with computer-based auditory training was high, and retention of learning was shown at post-training follow-ups. Published evidence was of very-low to moderate study quality. Conclusions Our findings demonstrate that published evidence for the efficacy of individual computer-based auditory training for adults with hearing loss is not robust and therefore cannot be reliably used to guide intervention at this time. We identify a need for high-quality evidence to further examine the efficacy of computer-based auditory training for people with hearing loss.
29
Smits C, Theo Goverts S, Festen JM. The digits-in-noise test: assessing auditory speech recognition abilities in noise. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2013; 133:1693-706. [PMID: 23464039 DOI: 10.1121/1.4789933] [Citation(s) in RCA: 135] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
A speech-in-noise test which uses digit triplets in steady-state speech noise was developed. The test measures primarily the auditory, or bottom-up, speech recognition abilities in noise. Digit triplets were formed by concatenating single digits spoken by a male speaker. Level corrections were made to individual digits to create a set of homogeneous digit triplets with steep speech recognition functions. The test measures the speech reception threshold (SRT) in long-term average speech-spectrum noise via a 1-up, 1-down adaptive procedure with a measurement error of 0.7 dB. One training list is needed for naive listeners; no further learning effects were observed in 24 subsequent SRT measurements. The test was validated by comparing results on the test with results on the standard sentences-in-noise test. To avoid confounding by hearing loss, age, and linguistic skills, these measurements were performed in normal-hearing subjects with simulated hearing loss: the signals were spectrally smeared and/or low-pass filtered at varying cutoff frequencies. After correction for measurement error, the correlation coefficient between SRTs measured with the two tests equaled 0.96. Finally, the feasibility of the test was demonstrated in a study in which reference SRT values were gathered from a representative set of 1386 listeners over 60 years of age.
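The abstract reports a correlation "after correction for measurement error". One common way to apply such a correction is Spearman's disattenuation formula, sketched below; whether this matches the authors' exact method is not stated here, and the reliability values used are invented.

```python
# Spearman's correction for attenuation: divide the observed correlation by the
# geometric mean of the two tests' reliabilities (hypothetical values shown).
import math

def disattenuated_r(r_observed, reliability_x, reliability_y):
    return r_observed / math.sqrt(reliability_x * reliability_y)

print(f"corrected r = {disattenuated_r(0.85, 0.92, 0.90):.2f}")
```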
Affiliation(s)
- Cas Smits
- Department of Otolaryngology/Audiology and EMGO Institute for Health and Care Research, VU University Medical Center, Amsterdam, The Netherlands.
30
Calandruccio L, Smiljanic R. New sentence recognition materials developed using a basic non-native English lexicon. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2012; 55:1342-55. [PMID: 22411279 DOI: 10.1044/1092-4388(2012/11-0260)] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
PURPOSE The objective of this project was to develop new sentence test materials, drawing on a basic non-native English lexicon, that could be used to test speech recognition in various listener populations. These materials were designed to provide a test tool for sentence recognition that is less linguistically biased than currently available materials, for non-native as well as native speakers of English. METHOD One hundred non-native speakers of English were interviewed on a range of 20 conversational topics. Over 26 hr of recorded non-native English speech were transcribed. These transcriptions were used to create a lexicon of over 4,000 unique words, from which the new materials were constructed using a simple syntactic sentence-structure frame. RESULTS Twenty lists of 25 sentences were developed. Each sentence has 4 keywords, providing 100 keywords per list. Lists were equated for rate of occurrence of keywords in the lexicon, high-frequency count (total number of affricates and fricatives), number of syllables, and distribution of syntactic structure. Listening-in-noise results for native-English-speaking, normal-hearing listeners indicated similar performance across lists. CONCLUSION The Basic English Lexicon materials provide a large set of sentences for native and non-native English speech-recognition testing.
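One of the list-equating checks mentioned above (matching lists on the occurrence rate of their keywords in the source lexicon) can be illustrated with the toy sketch below; the tiny lexicon, counts, and lists are invented for illustration only.

```python
# Toy check: compare sentence lists on mean keyword occurrence in the lexicon.
lexicon_counts = {"water": 120, "family": 95, "teacher": 60, "market": 40,
                  "bridge": 25, "garden": 70, "letter": 55, "window": 35}

lists = {
    "List 1": ["water", "teacher", "bridge", "window"],
    "List 2": ["family", "market", "garden", "letter"],
}

for name, keywords in lists.items():
    mean_count = sum(lexicon_counts[w] for w in keywords) / len(keywords)
    print(f"{name}: mean keyword count in lexicon = {mean_count:.1f}")
```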
31
Song JH, Skoe E, Banai K, Kraus N. Training to improve hearing speech in noise: biological mechanisms. Cereb Cortex 2011; 22:1180-90. [PMID: 21799207 DOI: 10.1093/cercor/bhr196] [Citation(s) in RCA: 143] [Impact Index Per Article: 11.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/14/2022] Open
Abstract
We investigated training-related improvements in listening in noise and the biological mechanisms mediating these improvements. Training-related malleability was examined using a program that incorporates cognitively based listening exercises to improve speech-in-noise perception. Before and after training, auditory brainstem responses to a speech syllable were recorded in quiet and multitalker noise from adults who ranged in their speech-in-noise perceptual ability. Controls did not undergo training but were tested at intervals equivalent to the trained subjects. Trained subjects exhibited significant improvements in speech-in-noise perception that were retained 6 months later. Subcortical responses in noise demonstrated training-related enhancements in the encoding of pitch-related cues (the fundamental frequency and the second harmonic), particularly for the time-varying portion of the syllable that is most vulnerable to perceptual disruption (the formant transition region). Subjects with the largest strength of pitch encoding at pretest showed the greatest perceptual improvement. Controls exhibited neither neurophysiological nor perceptual changes. We provide the first demonstration that short-term training can improve the neural representation of cues important for speech-in-noise perception. These results implicate and delineate biological mechanisms contributing to learning success, and they provide a conceptual advance to our understanding of the kind of training experiences that can influence sensory processing in adulthood.
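The pitch-cue measures described above (response energy at the fundamental frequency and second harmonic) are conceptually simple spectral magnitudes; the sketch below computes them from a synthetic response via an FFT, with the waveform, 100-Hz F0, and sampling rate assumed purely for illustration.

```python
# Rough sketch: magnitude at F0 and its second harmonic in a brainstem-response-like signal.
import numpy as np

fs = 10000                      # Hz, hypothetical sampling rate
t = np.arange(0, 0.17, 1 / fs)  # ~170 ms analysis window
f0 = 100.0                      # Hz, hypothetical fundamental of the response
response = (np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * 2 * f0 * t)
            + 0.2 * np.random.default_rng(3).normal(size=t.size))

spectrum = np.abs(np.fft.rfft(response)) / t.size
freqs = np.fft.rfftfreq(t.size, d=1 / fs)

for label, target in (("F0", f0), ("H2", 2 * f0)):
    idx = np.argmin(np.abs(freqs - target))
    print(f"{label} ({freqs[idx]:.0f} Hz) magnitude: {spectrum[idx]:.3f}")
```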
Affiliation(s)
- Judy H Song
- Auditory Neuroscience Laboratory, Northwestern University, 2240 Campus Drive, Evanston, IL 60208, USA