1
Hidalgo C, Zielinski C, Chen S, Roman S, Truy E, Schön D. Similar gaze behaviour during dialogue perception in congenitally deaf children with cochlear implants and normal hearing children. Int J Lang Commun Disord 2024. PMID: 39073184; DOI: 10.1111/1460-6984.13094.
Abstract
BACKGROUND Perceptual and speech production abilities of children with cochlear implants (CIs) are usually tested by word and sentence repetition or naming tests. However, these tests are quite far removed from the linguistic contexts of daily life. AIM Here, we describe a way of investigating the link between language comprehension and anticipatory verbal behaviour that promotes the use of more complex listening situations. METHODS AND PROCEDURE The setup consists of watching an audio-visual dialogue between two actors. Children's gaze switches from one speaker to the other serve as a proxy for their prediction abilities. Moreover, to better understand the basis and impact of anticipatory behaviour, we also measured children's ability to understand the dialogue content, their speech perception and memory skills, and their rhythmic skills, which also require temporal predictions. Importantly, we compared the performance of children with CIs with that of an age-matched group of children with normal hearing (NH). OUTCOMES AND RESULTS While children with CIs showed poorer speech perception and verbal working memory abilities than NH children, there was no difference in gaze anticipatory behaviour. Interestingly, in children with CIs only, we found a significant correlation between dialogue comprehension, perceptual skills and gaze anticipatory behaviour. CONCLUSION Our results extend previous findings of an absence of predictive deficits in children with CIs to a dialogue context. The current design seems an interesting avenue for providing an accurate and objective estimate of anticipatory language behaviour in a more ecological linguistic context, including with young children. WHAT THIS PAPER ADDS What is already known on the subject Children with cochlear implants seem to have difficulties extracting structure from, and learning, sequential input patterns, possibly due to signal degradation and auditory deprivation in the first years of life. Reduced use of contextual information and slow language processing have also been reported among children with hearing loss. What this paper adds to existing knowledge Here we show that, in a rather complex linguistic context such as watching a dialogue between two individuals, children with cochlear implants are able to use speech and language structure to anticipate gaze switches to the upcoming speaker. What are the clinical implications of this work? The present design seems an interesting avenue for providing an accurate and objective estimate of anticipatory behaviour in a more ecological and dynamic linguistic context. Importantly, this measure is implicit and has previously been used with very young (normal-hearing) children, showing that they spontaneously make anticipatory gaze switches by age two. Thus, this approach may be of interest for refining speech comprehension assessment at a rather early age after cochlear implantation, when explicit behavioural tests are not always reliable and sensitive.
Affiliation(s)
- Céline Hidalgo
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Christelle Zielinski
- Aix-Marseille Univ, Institute of Language, Communication and the Brain, Marseille, France
- Sophie Chen
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Stéphane Roman
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Pediatric Otolaryngology Department, La Timone Children's Hospital (APHM), Marseille, France
- Eric Truy
- Service d'ORL et de Chirurgie cervico-faciale, Hôpital Edouard Herriot, CHU, Lyon, France
- Inserm U1028, Lyon Neuroscience Research Center, Equipe IMPACT, Lyon, France
- CNRS UMR5292, Lyon Neuroscience Research Center, Equipe IMPACT, Lyon, France
- University Lyon 1, Lyon, France
- Daniele Schön
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Aix-Marseille Univ, Institute of Language, Communication and the Brain, Marseille, France
2
Bieber RE, Makashay MJ, Sheffield BM, Brungart DS. Intelligibility of Natively and Nonnatively Produced English Speech Presented in Noise to a Large Cohort of United States Service Members. J Speech Lang Hear Res 2024;67:2454-2472. PMID: 38950169; DOI: 10.1044/2024_jslhr-23-00312.
Abstract
PURPOSE A corpus of English matrix sentences produced by 60 native and nonnative speakers of English was developed as part of a multinational coalition task group. This corpus was tested on a large cohort of U.S. Service members in order to examine the effects of talker nativeness, listener nativeness, masker type, and hearing sensitivity on speech recognition performance in this population. METHOD A total of 1,939 U.S. Service members (ages 18-68 years) completed this closed-set listening task, including 430 women and 110 nonnative English speakers. Stimuli were produced by native and nonnative speakers of English and were presented in speech-shaped noise and multitalker babble. Keyword recognition accuracy and response times were analyzed. RESULTS General(ized) linear mixed-effects regression models found that, on the whole, speech recognition performance was lower for listeners who identified as nonnative speakers of English and when listening to speech produced by nonnative speakers of English. Talker and listener effects were more pronounced when listening in a babble masker than in a speech-shaped noise masker. Response times varied as a function of recognition score, with longest response times found for intermediate levels of performance. CONCLUSIONS This study found additive effects of talker and listener nonnativeness when listening to speech in background noise. These effects were present in both accuracy and response time measures. No multiplicative effects of talker and listener language background were found. There was little evidence of a negative interaction between talker nonnativeness and hearing impairment, suggesting that these factors may have redundant effects on speech recognition. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.26060191.
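The modelling approach lends itself to a compact illustration. Below is a minimal sketch of a mixed-effects specification in the spirit of the analysis described; the column names and input file are hypothetical, and the paper's exact model (a generalized variant suited to binary keyword scores) is not reproduced here.

```python
# Minimal sketch of a mixed-effects analysis of keyword recognition.
# Column names and file are hypothetical, not the authors' variables.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("keyword_scores.csv")   # hypothetical per-trial scores

# Fixed effects for talker/listener nativeness and masker type, with the
# interactions of interest; a random intercept per listener absorbs
# between-subject variability.
model = smf.mixedlm(
    "accuracy ~ talker_native * listener_native + talker_native * masker",
    data,
    groups=data["listener_id"],
)
print(model.fit().summary())
```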
Affiliation(s)
- Rebecca E Bieber
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Henry M. Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD
- Matthew J Makashay
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Hearing Conservation and Readiness Branch, Defense Centers for Public Health - Aberdeen, Aberdeen Proving Ground, MD
- Benjamin M Sheffield
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Hearing Conservation and Readiness Branch, Defense Centers for Public Health - Aberdeen, Aberdeen Proving Ground, MD
- Douglas S Brungart
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
3
Mertel K, Dimitrijevic A, Thaut M. Can Music Enhance Working Memory and Speech in Noise Perception in Cochlear Implant Users? Design Protocol for a Randomized Controlled Behavioral and Electrophysiological Study. Audiol Res 2024;14:611-624. PMID: 39051196; PMCID: PMC11270222; DOI: 10.3390/audiolres14040052.
Abstract
BACKGROUND A cochlear implant (CI) enables deaf people to understand speech but, due to technical restrictions, users face great limitations in noisy conditions. Music training has been shown to augment the shared auditory and cognitive neural networks for processing speech and music and to improve auditory-motor coupling, which benefits speech perception in noisy listening conditions. These are promising prerequisites for studying multi-modal neurologic music training (NMT) for speech-in-noise (SIN) perception in adult CI users. Furthermore, a better understanding of the neurophysiological correlates of working memory (WM) and SIN tasks after multi-modal music training may provide clinicians with a better understanding of optimal rehabilitation. METHODS Within 3 months, 81 post-lingually deafened adult CI recipients will undergo electrophysiological recordings and four weeks of multi-modal neurologic music therapy training, with random assignment to one of three training foci (pitch, rhythm or timbre). Pre- and post-tests will analyze behavioral outcomes and apply a novel electrophysiological measurement approach that includes neural tracking of speech and alpha oscillation modulations in the sentence-final-word-identification-and-recall test (SWIR-EEG). EXPECTED OUTCOME Short-term multi-modal music training will enhance WM and SIN performance in post-lingually deafened adult CI recipients and will be reflected in greater neural tracking and alpha oscillation modulations in prefrontal areas. Prospectively, outcomes could contribute to understanding the relationship between cognitive functioning and SIN perception beyond the technical limitations of the CI. Targeted clinical application of music training for post-lingually deafened adult CI users could then be realized, significantly improving SIN perception and quality of life.
Affiliation(s)
- Kathrin Mertel
- Music and Health Research Collaboratory (MaHRC), University of Toronto, Toronto, ON M5S 1C5, Canada
- Andrew Dimitrijevic
- Sunnybrook Cochlear Implant Program, Sunnybrook Hospital, Toronto, ON M4N 3M5, Canada
- Michael Thaut
- Music and Health Research Collaboratory (MaHRC), University of Toronto, Toronto, ON M5S 1C5, Canada
4
Tanveer MA, Skoglund MA, Bernhardsson B, Alickovic E. Deep learning-based auditory attention decoding in listeners with hearing impairment. J Neural Eng 2024;21:036022. PMID: 38729132; DOI: 10.1088/1741-2552/ad49d7.
Abstract
Objective. This study develops a deep learning (DL) method for fast auditory attention decoding (AAD) using electroencephalography (EEG) from listeners with hearing impairment (HI). It addresses three classification tasks: differentiating noise from speech-in-noise, classifying the direction of attended speech (left vs. right), and identifying the activation status of hearing aid noise reduction algorithms (OFF vs. ON). These tasks contribute to our understanding of how hearing technology influences auditory processing in the hearing-impaired population. Approach. Deep convolutional neural network (DCNN) models were designed for each task. Two training strategies were employed to clarify the impact of data splitting on AAD tasks: inter-trial, where the testing set used classification windows from trials that the training set had not seen, and intra-trial, where the testing set used unseen classification windows from trials whose other segments were seen during training. The models were evaluated on EEG data from 31 participants with HI listening to competing talkers amidst background noise. Main results. Using 1 s classification windows, the DCNN models achieved accuracy (ACC) of 69.8%, 73.3% and 82.9% and area under the curve (AUC) of 77.2%, 80.6% and 92.1% for the three tasks, respectively, with the inter-trial strategy. With the intra-trial strategy, they achieved ACC of 87.9%, 80.1% and 97.5%, along with AUC of 94.6%, 89.1% and 99.8%. The DCNN models performed well on short 1 s EEG samples, making them suitable for real-world applications. Conclusion. The DCNN models successfully addressed three tasks with short 1 s EEG windows from participants with HI, showcasing their potential. While the inter-trial strategy demonstrated promise for assessing AAD, the intra-trial approach yielded inflated results, underscoring the important role of proper data splitting in EEG-based AAD tasks. Significance. These findings showcase the promising potential of EEG-based tools for assessing auditory attention in clinical contexts and advancing hearing technology, while also motivating further exploration of alternative DL architectures and their potential constraints.
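The inter-trial/intra-trial distinction is easy to make concrete. A minimal sketch follows (hypothetical array shapes and names, not the authors' pipeline):

```python
# Sketch of the two data-splitting strategies for 1 s EEG classification
# windows. Shapes and names are illustrative, not the authors' pipeline.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)
windows = rng.standard_normal((600, 16, 64))   # hypothetical (window, chan, sample)
labels = rng.integers(0, 2, size=600)          # e.g., attended left vs. right
trial_ids = np.repeat(np.arange(60), 10)       # 10 windows per trial

# Inter-trial: whole trials are held out, so no trial contributes windows
# to both training and testing.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(windows, labels, groups=trial_ids))

# Intra-trial: windows are split at random, so train and test windows can
# come from the same trial -- the setting shown above to inflate accuracy.
train_idx2, test_idx2 = train_test_split(
    np.arange(len(windows)), test_size=0.2, random_state=0
)
```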
Affiliation(s)
- M Asjid Tanveer
- Department of Automatic Control, Lund University, Lund, Sweden
- Martin A Skoglund
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Bo Bernhardsson
- Department of Automatic Control, Lund University, Lund, Sweden
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
5
Hendrikse MME, Dingemanse G, Goedegebure A. On the Feasibility of Using Behavioral Listening Effort Test Methods to Evaluate Auditory Performance in Cochlear Implant Users. Trends Hear 2024;28:23312165241240572. PMID: 38676325; PMCID: PMC11055488; DOI: 10.1177/23312165241240572.
Abstract
Realistic outcome measures that reflect everyday hearing challenges are needed to assess hearing aid and cochlear implant (CI) fitting. The literature suggests that listening effort measures may be more sensitive than established speech intelligibility measures to differences between hearing-device settings when speech intelligibility is near maximum. Which method provides the most effective measurement of listening effort for this purpose is currently unclear. This study investigated the feasibility of two tests for measuring changes in listening effort in CI users due to signal-to-noise ratio (SNR) differences, as would arise from different hearing-device settings. By comparing the effect size of SNR differences on listening effort measures with test-retest differences, the study evaluated the suitability of these tests for clinical use. Nineteen CI users underwent two listening effort tests at two SNRs (+4 and +8 dB relative to individuals' 50% speech perception threshold). Two dual-task paradigms, a sentence-final word identification and recall test (SWIRT) and a sentence verification test (SVT), were used to assess listening effort at these two SNRs. Our results show a significant difference in listening effort between the SNRs for both test methods, although the effect size was comparable to the test-retest difference, and the sensitivity was not superior to that of speech intelligibility measures. Thus, the implementations of the SVT and SWIRT used in this study are not suitable for clinical use to measure listening effort differences of this magnitude in individual CI users. However, they can be used in research involving CI users to analyze group data.
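The logic of weighing an SNR effect against test-retest variability can be sketched in a few lines; all numbers below are hypothetical, and the smallest-detectable-change formula is one common choice, not necessarily the authors' exact criterion.

```python
# Sketch: compare a group-level SNR effect against individual-level
# test-retest variability. Hypothetical data throughout.
import numpy as np

rng = np.random.default_rng(1)
test = rng.normal(0.70, 0.08, 19)              # hypothetical effort scores
retest = test + rng.normal(0.0, 0.05, 19)
easy = rng.normal(0.74, 0.08, 19)              # +8 dB condition
hard = easy - rng.normal(0.04, 0.03, 19)       # +4 dB condition

sem = np.std(retest - test, ddof=1) / np.sqrt(2)   # standard error of measurement
sdc = 1.96 * np.sqrt(2) * sem                      # smallest detectable change (95%)

effect = np.mean(easy - hard)
print(f"SNR effect {effect:.3f} vs. smallest detectable change {sdc:.3f}")
```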
Affiliation(s)
- Maartje M. E. Hendrikse
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- Gertjan Dingemanse
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
- André Goedegebure
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus MC University Medical Center, Rotterdam, the Netherlands
6
Bachmann FL, Kulasingham JP, Eskelund K, Enqvist M, Alickovic E, Innes-Brown H. Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field. Trends Hear 2024;28:23312165241246596. PMID: 38738341; DOI: 10.1177/23312165241246596.
Abstract
The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
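A minimal sketch of the TRF idea described above, using time-lagged ridge regression on a half-wave-rectified stimulus; values and shapes are illustrative, and the auditory nerve model variant favoured by the authors is not reproduced here.

```python
# Minimal TRF sketch: EEG is modelled as a stimulus feature convolved with
# an unknown kernel, estimated by time-lagged ridge regression. Half-wave
# rectification stands in for the peripheral non-linearity.
import numpy as np

def estimate_trf(x, y, fs, tmin=-0.005, tmax=0.020, lam=1e2):
    """Ridge regression of EEG y on time-lagged copies of stimulus x."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = np.stack([np.roll(x, lag) for lag in lags], axis=1)
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ y)
    return lags / fs, w

fs = 4096                                  # subcortical work needs high rates
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs * 10)      # hypothetical 10 s of audio
rectified = np.maximum(speech, 0.0)        # half-wave rectification
eeg = 0.5 * np.roll(rectified, int(0.007 * fs)) + rng.standard_normal(fs * 10)
times, trf = estimate_trf(rectified, eeg, fs)  # peak near 7 ms here; wave V in real data
```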
Affiliation(s)
- Joshua P Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
7
Wilroth J, Bernhardsson B, Heskebeck F, Skoglund MA, Bergeling C, Alickovic E. Improving EEG-based decoding of the locus of auditory attention through domain adaptation. J Neural Eng 2023;20:066022. PMID: 37988748; DOI: 10.1088/1741-2552/ad0e7b.
Abstract
Objective. This paper presents a novel domain adaptation (DA) framework to enhance the accuracy of electroencephalography (EEG)-based auditory attention classification, specifically for classifying the direction (left or right) of attended speech. The framework aims to improve performance for subjects with initially low classification accuracy, overcoming challenges posed by instrumental and human factors. Limited dataset size, variations in EEG data quality due to factors such as noise, electrode misplacement or inter-subject differences, and the need for generalization across different trials, conditions and subjects necessitate the use of DA methods. By leveraging DA methods, the framework can learn from one EEG dataset and adapt to another, potentially resulting in more reliable and robust classification models. Approach. This paper investigates a DA method, based on parallel transport, for addressing the auditory attention classification problem. The EEG data utilized in this study originate from an experiment where subjects were instructed to attend selectively to one of two spatially separated voices presented simultaneously. Main results. A significant improvement in classification accuracy was observed when poor data from one subject were transported to the domain of good data from different subjects, as compared to the baseline. The mean classification accuracy for subjects with poor data increased from 45.84% to 67.92%. For one subject, the highest achieved classification accuracy reached 83.33%, a substantial increase from the baseline accuracy of 43.33%. Significance. The findings of our study demonstrate the improved classification performance achieved through the implementation of DA methods. This brings us a step closer to leveraging EEG in neuro-steered hearing devices.
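A minimal sketch of the recentering idea behind parallel transport for EEG covariance matrices follows; it is illustrative only, under simplifying assumptions (Euclidean means, identity target), and is not the paper's exact method.

```python
# Sketch of recentering EEG trial covariances: trials from a "poor" domain
# are mapped so their mean covariance coincides with that of a "good"
# domain, a simple instance of the parallel transport idea.
import numpy as np
from scipy.linalg import fractional_matrix_power

def transport(covs, source_mean, target_mean):
    """Congruence map E with E @ source_mean @ E.T == target_mean."""
    E = fractional_matrix_power(target_mean, 0.5) @ \
        fractional_matrix_power(source_mean, -0.5)
    return np.array([E @ C @ E.T for C in covs])

rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 32, 256))        # hypothetical EEG trials
covs = np.array([t @ t.T / t.shape[1] for t in trials])

source_mean = covs.mean(axis=0)   # Euclidean mean for brevity; Riemannian in practice
target_mean = np.eye(32)          # e.g., the mean of the good subject's trials
covs_adapted = transport(covs, source_mean, target_mean)
```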
Affiliation(s)
- Johanna Wilroth
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Bo Bernhardsson
- Department of Automatic Control, Lund University, Lund, Sweden
- Frida Heskebeck
- Department of Automatic Control, Lund University, Lund, Sweden
- Martin A Skoglund
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Carolina Bergeling
- Department of Mathematics and Natural Sciences, Blekinge Institute of Technology, Karlskrona, Sweden
- Emina Alickovic
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
8
Ryan DB, Eckert MA, Sellers EW, Schairer KS, McBee MT, Ridley EA, Smith SL. Performance Monitoring and Cognitive Inhibition during a Speech-in-Noise Task in Older Listeners. Semin Hear 2023;44:124-139. PMID: 37122879; PMCID: PMC10147504; DOI: 10.1055/s-0043-1767695.
Abstract
The goal of this study was to examine the effect of hearing loss on theta and alpha electroencephalography (EEG) frequency power, measures of performance monitoring and cognitive inhibition, respectively, during a speech-in-noise task. It was hypothesized that, compared to normal-hearing adults, hearing loss would shift the peak power of the theta and alpha frequencies toward easier conditions, the shift reflecting how hearing loss modulates the recruitment of listening effort at lower levels of task demand. Nine older adults with normal hearing (ONH) and 10 older adults with hearing loss (OHL) participated in this study. EEG data were collected from all participants while they completed the words-in-noise task. The ONH group showed an inverted U-shaped effect of signal-to-noise ratio (SNR) on theta and alpha power, but there were limited effects of SNR in the OHL group. The results of the ONH group support the growing body of literature showing effects of listening conditions on alpha and theta power. The null effects of listening condition in the OHL group add to a smaller body of literature suggesting that listening effort research should use conditions with near-ceiling performance.
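For readers unfamiliar with the measures, here is a minimal sketch of extracting theta and alpha band power from epoched EEG; shapes and band edges are illustrative, not the study's processing pipeline.

```python
# Sketch of theta/alpha band power extraction from epoched EEG.
import numpy as np
from scipy.signal import welch

fs = 256
rng = np.random.default_rng(0)
epochs = rng.standard_normal((60, 32, fs * 2))   # trials x channels x samples

freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)

def band_power(psd, freqs, lo, hi):
    """Mean power in [lo, hi) Hz."""
    sel = (freqs >= lo) & (freqs < hi)
    return psd[..., sel].mean(axis=-1)

theta = band_power(psd, freqs, 4, 8)    # performance monitoring
alpha = band_power(psd, freqs, 8, 12)   # cognitive inhibition
```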
Affiliation(s)
- David B. Ryan
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Mark A. Eckert
- Department of Otolaryngology - Head and Neck Surgery, Hearing Research Program, Medical University of South Carolina, Charleston, South Carolina
- Eric W. Sellers
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Kim S. Schairer
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Audiology and Speech Language Pathology, East Tennessee State University, Johnson City, Tennessee
- Matthew T. McBee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Elizabeth A. Ridley
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Sherri L. Smith
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Center for the Study of Aging and Human Development, Duke University, Durham, North Carolina
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina
- Audiology and Speech Pathology Service, Durham Veterans Affairs Healthcare System, Durham, North Carolina
9
Impact of Effortful Word Recognition on Supportive Neural Systems Measured by Alpha and Theta Power. Ear Hear 2022;43:1549-1562. DOI: 10.1097/aud.0000000000001211.
10
Skoglund MA, Andersen M, Shiell MM, Keidser G, Rank ML, Rotger-Griful S. Comparing In-ear EOG for Eye-Movement Estimation With Eye-Tracking: Accuracy, Calibration, and Speech Comprehension. Front Neurosci 2022;16:873201. PMID: 35844213; PMCID: PMC9279575; DOI: 10.3389/fnins.2022.873201.
Abstract
This study details and evaluates a method for estimating the attended speaker during a two-person conversation by means of in-ear electro-oculography (EOG). Twenty-five hearing-impaired participants were fitted with ear molds equipped with EOG electrodes (in-ear EOG) and wore eye-tracking glasses while watching a video of two life-size people solving a Diapix task in dialogue. The dialogue was presented directionally, together with background noise in the frontal hemisphere, at 60 dB SPL. During three steering conditions (none, in-ear EOG, conventional eye-tracking), participants' comprehension was periodically measured using multiple-choice questions. Based on eye-movement detection by in-ear EOG or conventional eye-tracking, the estimated attended speaker was amplified by 6 dB. In the in-ear EOG condition, the estimate was based on one selected channel pair of electrodes out of 36 possible electrodes. A novel calibration procedure introducing three different metrics was used to select the measurement channel. The in-ear EOG estimates of the attended speaker were compared to those of the eye-tracker. Across participants, the mean accuracy of in-ear EOG estimation of the attended speaker was 68%, ranging from 50 to 89%. Offline simulation established that higher scoring metrics obtained for a channel with the calibration procedure were significantly associated with better data quality. Results showed a statistically significant improvement in comprehension of about 10% in both steering conditions relative to the no-steering condition. Comprehension in the two steering conditions was not significantly different. Further, better comprehension obtained under the in-ear EOG condition was significantly correlated with more accurate estimation of the attended speaker. In conclusion, this study shows promising results for the use of in-ear EOG for visual attention estimation, with potential applicability in hearing assistive devices.
Affiliation(s)
- Martin A. Skoglund
- Division of Automatic Control, Department of Electrical Engineering, The Institute of Technology, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Part of Oticon A/S, Snekkersten, Denmark
- Martha M. Shiell
- Eriksholm Research Centre, Part of Oticon A/S, Snekkersten, Denmark
- Gitte Keidser
- Eriksholm Research Centre, Part of Oticon A/S, Snekkersten, Denmark
- Department of Behavioral Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
11
Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022;151:3866. PMID: 35778214; DOI: 10.1121/10.0011509.
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
Affiliation(s)
- Douglas S Brungart
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
12
Neal K, McMahon CM, Hughes SE, Boisvert I. Listening-Based Communication Ability in Adults With Hearing Loss: A Scoping Review of Existing Measures. Front Psychol 2022;13:786347. PMID: 35360643; PMCID: PMC8960922; DOI: 10.3389/fpsyg.2022.786347.
Abstract
Introduction Hearing loss in adults has a pervasive impact on health and well-being. Its effects on everyday listening and communication can directly influence participation across multiple spheres of life. These impacts, however, remain poorly assessed within clinical settings. Whilst various tests and questionnaires that measure listening and communication abilities are available, there is a lack of consensus about which measures assess the factors that are most relevant to optimising auditory rehabilitation. This study aimed to map the measures used in published studies to evaluate the listening skills needed for oral communication in adults with hearing loss. Methods A scoping review was conducted using systematic searches in Medline, EMBASE, Web of Science and Google Scholar to retrieve peer-reviewed articles that used one or more linguistic-based measures necessary for oral communication in adults with hearing loss. The range of measures identified, and their frequency of use, were charted in relation to auditory hierarchies, linguistic domains, health status domains, and associated neuropsychological and cognitive domains. Results A total of 9121 articles were identified, and 2579 articles reporting on 6714 discrete measures were included for further analysis. The predominant linguistic-based measure reported was word or sentence identification in quiet (65.9%). In contrast, discourse-based measures were used in 2.7% of the included articles. Of the included studies, 36.6% used a self-reported instrument purporting to measure listening for communication. Consistent with previous studies, a large number of self-reported measures were identified (n = 139), but 60.4% of these measures were used in only one study and 80.7% were cited five times or fewer. Discussion The measures currently used in published studies to assess listening abilities relevant to oral communication target a narrow set of domains. Concepts of communicative interaction have limited representation in current measurement. The lack of measurement consensus and the heterogeneity amongst the assessments limit comparisons across studies. Furthermore, the extracted measures rarely consider the broader linguistic, cognitive and interactive elements of communication. Consequently, existing measures may have limited clinical application when assessing the listening-related skills required for communication in daily life, as experienced by adults with hearing loss.
Affiliation(s)
- Katie Neal
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Catherine M McMahon
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Hearing, Macquarie University, Sydney, NSW, Australia
- Sarah E Hughes
- Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, United Kingdom
- National Institute of Health Research (NIHR) Applied Research Collaboration (ARC), West Midlands, United Kingdom
- Faculty of Medicine, Health and Life Science, Swansea University, Swansea, United Kingdom
- Isabelle Boisvert
- Hearing, Macquarie University, Sydney, NSW, Australia
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
13
Baboukani PS, Graversen C, Alickovic E, Ostergaard J. EEG Phase Synchrony Reflects SNR Levels During Continuous Speech-in-Noise Tasks. Annu Int Conf IEEE Eng Med Biol Soc 2021;2021:531-534. PMID: 34891349; DOI: 10.1109/embc46164.2021.9630139.
Abstract
Comprehension of speech in noise is a challenge for hearing-impaired (HI) individuals. Electroencephalography (EEG) provides a tool to investigate the effect of different speech signal-to-noise ratio (SNR) levels. Most EEG studies have focused on spectral power in well-defined frequency bands such as the alpha band. In this study, we investigate how local functional connectivity, i.e. functional connectivity within a localized region of the brain, is affected by two levels of SNR. Twenty-two HI participants performed a continuous speech-in-noise task at two different SNRs (+3 dB and +8 dB). The local connectivity within eight regions of interest was computed using a multivariate phase synchrony measure on EEG data. The results showed that phase synchrony increased in the parietal and frontal areas in response to increasing SNR. We contend that local connectivity measures can be used to discriminate between speech-evoked EEG responses at different SNRs.
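A minimal sketch of one way to compute a multivariate phase synchrony measure within a region of interest follows (illustrative only, not the authors' exact estimator): Hilbert phases, then the largest eigenvalue of the pairwise phase-locking matrix.

```python
# Sketch: multivariate phase synchrony in one ROI. The normalized largest
# eigenvalue of the pairwise phase-locking matrix ranges from 1/N
# (no synchrony) to 1 (full synchrony).
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 256
rng = np.random.default_rng(0)
roi = rng.standard_normal((8, fs * 10))          # 8 channels in one ROI

b, a = butter(4, [8, 12], btype="band", fs=fs)   # alpha band, for example
phases = np.angle(hilbert(filtfilt(b, a, roi, axis=-1), axis=-1))

z = np.exp(1j * phases)
plv = np.abs(z @ z.conj().T) / phases.shape[1]   # pairwise phase-locking values
sync = np.linalg.eigvalsh(plv).max() / roi.shape[0]
print(f"multivariate phase synchrony: {sync:.2f}")
```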
14
Abstract
Hearing aids continue to acquire increasingly sophisticated sound-processing features beyond basic amplification. On the one hand, these have the potential to add user benefit and allow for personalization. On the other hand, if such features are to deliver their potential benefit, clinicians must be acquainted with both the underlying technologies and the specific fitting handles made available by the individual hearing aid manufacturers. Ensuring benefit from hearing aids in typical daily listening environments requires that the hearing aids handle sounds that interfere with communication, generically referred to as “noise.” With this aim, considerable efforts from both academia and industry have led to increasingly advanced algorithms that handle noise, typically using the principles of directional processing and postfiltering. This article provides an overview of the techniques used for noise reduction in modern hearing aids. First, classical techniques are covered as they are used in modern hearing aids. The discussion then shifts to how deep learning, a subfield of artificial intelligence, provides a radically different way of solving the noise problem. Finally, the results of several experiments are used to showcase the benefits of recent algorithmic advances in terms of signal-to-noise ratio, speech intelligibility, selective attention, and listening effort.
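The two classical stages named here, directional processing and postfiltering, can be caricatured in a few lines. This is a deliberately crude sketch with illustrative parameters, not any manufacturer's algorithm.

```python
# Crude sketch: two-microphone delay-and-sum beamformer (directional
# processing) followed by a Wiener-style spectral postfilter. Real hearing
# aids use adaptive, far more refined versions of both stages.
import numpy as np

fs = 16000
rng = np.random.default_rng(0)
front = rng.standard_normal(fs)                    # hypothetical mic signals
rear = np.roll(front, 2) + 0.5 * rng.standard_normal(fs)

# Directional stage: re-align the rear mic and sum, favouring frontal sound.
beam = 0.5 * (front + np.roll(rear, -2))

# Postfilter stage: per-bin Wiener-like gain from a crude noise-floor estimate.
n = len(beam) - len(beam) % 256
spec = np.fft.rfft(beam[:n].reshape(-1, 256), axis=-1)
noise_psd = np.percentile(np.abs(spec) ** 2, 10, axis=0)   # noise floor per bin
gain = np.clip(1.0 - noise_psd / (np.abs(spec) ** 2 + 1e-12), 0.1, 1.0)
enhanced = np.fft.irfft(spec * gain, axis=-1).ravel()      # no overlap-add, kept minimal
```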
15
Keidser G, Naylor G, Brungart DS, Caduff A, Campos J, Carlile S, Carpenter MG, Grimm G, Hohmann V, Holube I, Launer S, Lunner T, Mehra R, Rapport F, Slaney M, Smeds K. The Quest for Ecological Validity in Hearing Science: What It Is, Why It Matters, and How to Advance It. Ear Hear 2021;41 Suppl 1:5S-19S. PMID: 33105255; PMCID: PMC7676618; DOI: 10.1097/aud.0000000000000944.
Abstract
Ecological validity is a relatively new concept in hearing science. It has been cited as relevant with increasing frequency in publications over the past 20 years, but without any formal conceptual basis or clear motive. The sixth Eriksholm Workshop was convened to develop a deeper understanding of the concept for the purpose of applying it in hearing research in a consistent and productive manner. Inspired by relevant debate within the field of psychology, and taking into account the World Health Organization's International Classification of Functioning, Disability, and Health framework, the attendees at the workshop reached a consensus on the following definition: "In hearing science, ecological validity refers to the degree to which research findings reflect real-life hearing-related function, activity, or participation." Four broad purposes for striving for greater ecological validity in hearing research were determined: A (Understanding) better understanding the role of hearing in everyday life; B (Development) supporting the development of improved procedures and interventions; C (Assessment) facilitating improved methods for assessing and predicting ability to accomplish real-world tasks; and D (Integration and Individualization) enabling more integrated and individualized care. Discussions considered the effects of variables and phenomena commonly present in hearing-related research on the level of ecological validity of outcomes, supported by examples from a few selected outcome domains and for different types of studies. Illustrated with examples, potential strategies were offered for promoting a high level of ecological validity in a study and for how to evaluate the level of ecological validity of a study. Areas in particular that could benefit from more research to advance ecological validity in hearing science include: (1) understanding the processes of hearing and communication in everyday listening situations, and specifically the factors that make listening difficult in everyday situations; (2) developing new test paradigms that include more than one person (e.g., to encompass the interactive nature of everyday communication) and that are integrative of other factors that interact with hearing in real-life function; (3) integrating new and emerging technologies (e.g., virtual reality) with established test methods; and (4) identifying the key variables and phenomena affecting the level of ecological validity to develop verifiable ways to increase ecological validity and derive a set of benchmarks to strive for.
Affiliation(s)
- Gitte Keidser
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Graham Naylor
- Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Andreas Caduff
- Applied Physics Department and the Center for Electromagnetic Research and Characterization, The Hebrew University of Jerusalem, Jerusalem, Israel
- Jennifer Campos
- KITE—Toronto Rehabilitation Institute, University Health Network, Toronto, Canada
- Simon Carlile
- School of Medical Sciences, University of Sydney, Sydney, Australia
- X-The Moonshot Factory, Mountain View, California, USA
- Mark G. Carpenter
- School of Kinesiology, University of British Columbia, Vancouver, Canada
- Giso Grimm
- Auditory Signal Processing and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Volker Hohmann
- Auditory Signal Processing and Cluster of Excellence “Hearing4all”, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Inga Holube
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, and Cluster of Excellence “Hearing4all”, Oldenburg, Germany
- Stefan Launer
- Department of Science and Technology, Sonova AG, Staefa, Switzerland
- Thomas Lunner
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Ravish Mehra
- Facebook Reality Labs Research, Redmond, Washington, USA
- Frances Rapport
- Australian Institute of Health Innovation, Macquarie University, Sydney, Australia
- Malcolm Slaney
- Machine Hearing Group, Google Research, Mountain View, California, USA
16
Alickovic E, Ng EHN, Fiedler L, Santurette S, Innes-Brown H, Graversen C. Effects of Hearing Aid Noise Reduction on Early and Late Cortical Representations of Competing Talkers in Noise. Front Neurosci 2021;15:636060. PMID: 33841081; PMCID: PMC8032942; DOI: 10.3389/fnins.2021.636060.
Abstract
OBJECTIVES Previous research using non-invasive (magnetoencephalography, MEG) and invasive (electrocorticography, ECoG) neural recordings has demonstrated the progressive and hierarchical representation and processing of complex multi-talker auditory scenes in the auditory cortex. Early responses (<85 ms) in primary-like areas appear to represent the individual talkers with almost equal fidelity and are independent of attention in normal-hearing (NH) listeners. However, late responses (>85 ms) in higher-order non-primary areas selectively represent the attended talker with significantly higher fidelity than unattended talkers in NH and hearing-impaired (HI) listeners. Motivated by these findings, the objective of this study was to investigate the effect of a noise reduction scheme (NR) in a commercial hearing aid (HA) on the representation of complex multi-talker auditory scenes at distinct hierarchical stages of the auditory cortex, using high-density electroencephalography (EEG). DESIGN We addressed this issue by investigating early (<85 ms) and late (>85 ms) EEG responses recorded in 34 HI subjects fitted with HAs. The HA noise reduction (NR) was either on or off while the participants listened to a complex auditory scene. Participants were instructed to attend to one of two simultaneous talkers in the foreground while multi-talker babble noise played in the background (+3 dB SNR). After each trial, a two-choice question about the content of the attended speech was presented. RESULTS Using a stimulus reconstruction approach, our results suggest that the attention-related enhancement of the neural representations of the target and masker talkers located in the foreground, as well as the suppression of the background noise, at distinct hierarchical stages are significantly affected by the NR scheme. The NR scheme contributed to the enhancement of the foreground and of the entire acoustic scene in the early responses, and this enhancement was driven by a better representation of the target speech. The target talker was selectively represented in the late responses of HI listeners, where use of the NR scheme resulted in enhanced representations of the target and masker speech in the foreground and a suppressed representation of the noise in the background. We also found a significant effect of EEG time window on the strength of the cortical representation of the target and masker. CONCLUSION Together, our analyses of the early and late responses obtained from HI listeners support the existing view of hierarchical processing in the auditory cortex. Our findings demonstrate the benefits of an NR scheme on the representation of complex multi-talker auditory scenes in different areas of the auditory cortex in HI listeners.
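A minimal sketch of the stimulus reconstruction (backward modelling) logic referred to above; shapes, lags and the regularizer are illustrative, not the study's pipeline.

```python
# Sketch: a linear backward model maps time-lagged EEG to the speech
# envelope; reconstruction accuracy is the correlation of the decoded
# envelope with the target vs. masker envelopes.
import numpy as np

def lagged(eeg, max_lag):
    """Stack all channels at lags 0..max_lag into a design matrix."""
    return np.hstack([np.roll(eeg, k, axis=0) for k in range(max_lag + 1)])

fs = 64
rng = np.random.default_rng(0)
target_env = rng.random(fs * 60)                   # attended-talker envelope
masker_env = rng.random(fs * 60)
mixing = rng.standard_normal(32)                   # toy envelope-to-EEG projection
eeg = target_env[:, None] * mixing + rng.standard_normal((fs * 60, 32))

X = lagged(eeg, max_lag=fs // 4)                   # 250 ms of lags
w = np.linalg.solve(X.T @ X + 1e3 * np.eye(X.shape[1]), X.T @ target_env)

recon = X @ w
r_target = np.corrcoef(recon, target_env)[0, 1]    # exceeds r_masker here
r_masker = np.corrcoef(recon, masker_env)[0, 1]
```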
Affiliation(s)
- Emina Alickovic
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Elaine Hoi Ning Ng
- Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Lorenz Fiedler
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Sébastien Santurette
- Centre for Applied Audiology Research, Oticon A/S, Smørum, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
17
Editorial: Eriksholm Workshop on Ecologically Valid Assessments of Hearing and Hearing Devices. Ear Hear 2020;41 Suppl 1:1S-4S. PMID: 33105254; DOI: 10.1097/aud.0000000000000933.