51
Chang CJ, Sun CH, Hsu CJ, Chiu T, Yu SH, Wu HP. Cochlear implant mapping strategy to solve difficulty in speech recognition. J Chin Med Assoc 2022; 85:874-879. [PMID: 35666612 DOI: 10.1097/jcma.0000000000000748]
Abstract
BACKGROUND Cochlear implants (CIs) are viable treatment options in patients with severe to profound hearing loss. Speech recognition difficulties have been reported in some CI recipients even with good aided hearing thresholds. The aim of this study was to report a mapping strategy based on different target aided hearing thresholds to achieve optimal speech recognition and maximize functional outcomes. The safety and efficacy of the mapping strategy were also examined. METHODS This prospective repeated measures study enrolled 20 adult CI recipients with postlingual deafness using the MED-EL CI system. Word and sentence discrimination assessments and a questionnaire pertaining to comfort level were administered at the end of each session. The electrophysiological features of the CI mapping were recorded. RESULTS The correlation between audiometry results and word and sentence recognition was not high. CIs performed best at an audiometry threshold between 25 and 35 dB. CONCLUSION CI performance with the best perception relies on a balance between minimizing the hearing threshold and maximizing the dynamic range while maintaining an appropriate comfort level, which was achieved when the target hearing threshold was set at 25-35 dB in this study.
Affiliation(s)
- Chan-Jung Chang
- Department of Otolaryngology, Head and Neck Surgery, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan, ROC
- Chuan-Hung Sun
- Department of Otolaryngology, Head and Neck Surgery, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan, ROC
- School of Medicine, Tzu Chi University, Hualien, Taiwan, ROC
- Chuan-Jen Hsu
- Department of Otolaryngology, Head and Neck Surgery, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan, ROC
- Ting Chiu
- Department of Otolaryngology, Head and Neck Surgery, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan, ROC
- Szu-Hui Yu
- Department of Music, Tainan University of Technology, Tainan, Taiwan, ROC
- Hung-Pin Wu
- Department of Otolaryngology, Head and Neck Surgery, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan, ROC
- School of Medicine, Tzu Chi University, Hualien, Taiwan, ROC
52
Koh SM, Cho YS, Kim GY, Jo M, Seol HY, Moon IJ. Percutaneous Bone-Anchored Hearing Implant: Is It Clinically Useful in Korean? J Korean Med Sci 2022; 37:e182. [PMID: 35698836 PMCID: PMC9194490 DOI: 10.3346/jkms.2022.37.e182]
Abstract
BACKGROUND The aim of this study was to investigate the clinical effectiveness in Korea of Ponto, a recently released percutaneous bone-anchored hearing implant. METHODS Sixteen patients with single-sided deafness (SSD) and mixed or conductive hearing loss who underwent Ponto implantation from December 2018 to September 2020 were enrolled in the study. Pure-tone audiometry, the Korean version of the Hearing in Noise Test (K-HINT), a sound localization test (SLT), and pupillometry were performed pre-operation and three months post-operation. Standardized questionnaires, the Hearing Handicap Inventory for the Elderly (HHIE) and the Speech, Spatial and Qualities of Hearing Scale (SSQ), were administered. RESULTS The mean age of subjects was 55.5 (range, 48-67) years. Four males and 12 females participated in the study. The mean pure-tone average was 73.17 dB hearing level (HL) before surgery and improved significantly to 36.72 dB HL three months after surgery. The mean word recognition score improved from 26.0% to 90.75% after implantation. For the K-HINT, there were significant differences in the summation (Z = -2.250, P = 0.024) and head shadow effects (Z = -3.103, P = 0.002). There was no significant difference in root mean square error (RMSE, in degrees) or hemifield identification scores on the SLT. Pupillometry was performed to measure listening effort, and the results revealed that the degree of pupillary dilatation decreased under the quiet, 0 dB signal-to-noise ratio (SNR), and 3 dB SNR conditions. The total HHIE score decreased significantly (Z = -3.130, P = 0.002), while the SSQ score increased significantly (Z = -2.216, P = 0.027). CONCLUSIONS The Ponto bone-anchored hearing system showed significant clinical benefit in Korean patients with conductive and mixed hearing loss and SSD.
Affiliation(s)
- Sung Min Koh
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Young Sang Cho
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Ga-Young Kim
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Mini Jo
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Hye Yoon Seol
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Korea
- Il Joon Moon
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Hearing Research Laboratory, Samsung Medical Center, Seoul, Korea
- Medical Research Institute, Sungkyunkwan University School of Medicine, Suwon, Korea
53
Kiliç S, Yiğit Ö, Türkyilmaz MD. Listening Effort in Hearing Aid Users: Is It Related to Hearing Aid Use and Satisfaction? J Am Acad Audiol 2022; 33:316-323. [PMID: 35642283 DOI: 10.1055/a-1865-3449]
Abstract
BACKGROUND Listening effort is primarily reflective of real-world performance. Therefore, it is crucial to evaluate listening effort to predict the performance of hearing aid (HA) users in their daily lives. PURPOSE This study aimed to investigate the relationship between listening effort, daily HA use time, and HA satisfaction. RESEARCH DESIGN This is a cross-sectional study. STUDY SAMPLE Thirty-three bilateral behind-the-ear HA users (17 females and 16 males) between 19 and 37 years of age participated. All participants had bilateral, symmetric, moderate sensorineural hearing loss and at least 6 months of experience using HAs. The pure-tone average thresholds (PTA) of the participants' left and right ears were 55.34 ± 4.38 and 54.85 ± 5.05 dB HL, respectively. DATA COLLECTION AND ANALYSIS First, daily HA use times for the last 30 days were derived from data logging. Second, participants were asked to fill in the Satisfaction with Amplification in Daily Life (SADL) questionnaire. Lastly, participants performed a dual-task paradigm to evaluate listening effort. The dual-task paradigm consisted of a primary speech recognition task that included three individualized signal-to-noise ratio (SNR) conditions, that is, SNR100, SNR80, and SNR50, at which the participant could understand 100, 80, and 50% of the speech, respectively. The secondary task was a visual reaction time task that required participants to press a key in response to a visual probe (an image of a white or red rectangle). Multiple linear regression analyses were used to model the effect of the factors (daily HA use time and HA satisfaction) on reaction times (RTs) in each of the three individualized SNR sessions. RESULTS Mean daily HA use time of the participants was 5.72 ± 4.14 hours. Mean RTs in the SNR50, SNR80, and SNR100 conditions were 1,050.61 ± 286.49, 893.33 ± 274.79, and 815.45 ± 233.22 ms, respectively. Multiple linear regression analyses showed that daily HA use time and HA satisfaction are significantly related to listening effort in all SNR conditions; for the SNR80 condition, F(2,30) = 47.699, p < 0.001, with an adjusted R² of 0.745. CONCLUSION As far as we know, this study is the first to demonstrate a strong link between listening effort, daily HA use time, and HA satisfaction. Evaluating listening effort following the HA fitting session may provide preliminary information about the success of HA treatment.
Affiliation(s)
- Samet Kiliç
- Department of Audiology, Hacettepe University, Sihhiye, Ankara, Turkey
- Öznur Yiğit
- Department of Audiology, Hacettepe University, Sihhiye, Ankara, Turkey
54
Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022; 151:3866. [PMID: 35778214 DOI: 10.1121/10.0011509]
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
Affiliation(s)
- Douglas S Brungart
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry
- Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
55
Comparing methods of analysis in pupillometry: application to the assessment of listening effort in hearing-impaired patients. Heliyon 2022; 8:e09631. [PMID: 35734572 PMCID: PMC9207619 DOI: 10.1016/j.heliyon.2022.e09631]
56
Zhou X, Sobczak GS, McKay CM, Litovsky RY. Effects of degraded speech processing and binaural unmasking investigated using functional near-infrared spectroscopy (fNIRS). PLoS One 2022; 17:e0267588. [PMID: 35468160 PMCID: PMC9037936 DOI: 10.1371/journal.pone.0267588]
Abstract
The present study aimed to investigate the effects of degraded speech perception and binaural unmasking using functional near-infrared spectroscopy (fNIRS). Normal-hearing listeners were tested while attending to unprocessed or vocoded speech, presented to the left ear at two speech-to-noise ratios (SNRs). Additionally, by comparing monaural versus diotic masker noise, we measured binaural unmasking. Our primary research question was whether the prefrontal cortex and temporal cortex responded differently to varying listening configurations. Our a priori regions of interest (ROIs) were located at the left dorsolateral prefrontal cortex (DLPFC) and auditory cortex (AC). The left DLPFC has been reported to be involved in attentional processes when listening to degraded speech and in spatial hearing processing, while the AC has been reported to be sensitive to speech intelligibility. Comparisons of cortical activity between these two ROIs revealed significantly different fNIRS response patterns. Further, we showed a significant and positive correlation between self-reported task difficulty levels and fNIRS responses in the DLPFC, with a negative but non-significant correlation for the left AC, suggesting that the two ROIs played different roles in effortful speech perception. Our secondary question was whether activity within three sub-regions of the lateral PFC (LPFC), including the DLPFC, was differentially affected by varying speech-noise configurations. We found significant effects of spectral degradation and SNR, and significant differences in fNIRS response amplitudes between the three regions, but no significant interaction between ROI and speech type, or between ROI and SNR. When attending to speech with monaural and diotic noises, participants reported the latter condition to be easier; however, no significant main effect of masker condition on cortical activity was observed. For cortical responses in the LPFC, a significant interaction between SNR and masker condition was observed. These findings suggest that binaural unmasking affects cortical activity through improving the speech reception threshold in noise, rather than by reducing effort exerted.
Affiliation(s)
- Xin Zhou
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Gabriel S. Sobczak
- School of Medicine and Public Health, University of Wisconsin-Madison, Madison, WI, United States of America
- Colette M. McKay
- The Bionics Institute of Australia, Melbourne, VIC, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, VIC, Australia
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States of America
- Department of Communication Science and Disorders, University of Wisconsin-Madison, Madison, WI, United States of America
- Division of Otolaryngology, Department of Surgery, University of Wisconsin-Madison, Madison, WI, United States of America
57
Tamati TN, Sevich VA, Clausing EM, Moberly AC. Lexical Effects on the Perceived Clarity of Noise-Vocoded Speech in Younger and Older Listeners. Front Psychol 2022; 13:837644. [PMID: 35432072 PMCID: PMC9010567 DOI: 10.3389/fpsyg.2022.837644]
Abstract
When listening to degraded speech, such as speech delivered by a cochlear implant (CI), listeners make use of top-down linguistic knowledge to facilitate speech recognition. Lexical knowledge supports speech recognition and enhances the perceived clarity of speech. Yet, the extent to which lexical knowledge can be used to effectively compensate for degraded input may depend on the degree of degradation and the listener's age. The current study investigated lexical effects in the compensation for speech that was degraded via noise-vocoding in younger and older listeners. In an online experiment, younger and older normal-hearing (NH) listeners rated the clarity of noise-vocoded sentences on a scale from 1 ("very unclear") to 7 ("completely clear"). Lexical information was provided by matching text primes and the lexical content of the target utterance. Half of the sentences were preceded by a matching text prime, while half were preceded by a non-matching prime. Each sentence also contained three key words of high or low lexical frequency and neighborhood density. Sentences were processed to simulate CI hearing, using an eight-channel noise vocoder with varying filter slopes. Results showed that lexical information impacted the perceived clarity of noise-vocoded speech. Noise-vocoded speech was perceived as clearer when preceded by a matching prime, and when sentences included key words with high lexical frequency and low neighborhood density. However, the strength of the lexical effects depended on the level of degradation: matching text primes had a greater impact for speech with poorer spectral resolution, but lexical content had a smaller impact for speech with poorer spectral resolution. Finally, lexical information appeared to benefit both younger and older listeners. Findings demonstrate that lexical knowledge can be employed by younger and older listeners in cognitive compensation during the processing of noise-vocoded speech. However, lexical content may not be as reliable when the signal is highly degraded. Clinical implications are that for adult CI users, lexical knowledge might be used to compensate for the degraded speech signal, regardless of age, but some CI users may be hindered by a relatively poor signal.
Affiliation(s)
- Terrin N. Tamati
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Victoria A. Sevich
- Department of Speech and Hearing Science, The Ohio State University, Columbus, OH, United States
- Emily M. Clausing
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
- Aaron C. Moberly
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, OH, United States
58
Saksida A, Ghiselli S, Picinali L, Pintonello S, Battelino S, Orzan E. Attention to Speech and Music in Young Children with Bilateral Cochlear Implants: A Pupillometry Study. J Clin Med 2022; 11:1745. [PMID: 35330071 PMCID: PMC8956090 DOI: 10.3390/jcm11061745]
Abstract
Early bilateral cochlear implants (CIs) may enhance attention to speech and reduce cognitive load in noisy environments. However, it is sometimes difficult to measure speech perception and listening effort, especially in very young children. Behavioral measures cannot always be obtained in young/uncooperative children, whereas objective measures are either difficult to assess or do not reliably correlate with behavioral measures. Recent studies have thus explored pupillometry as a possible objective measure. Here, pupillometry was used to assess attention to speech and music in noise in very young children with bilateral CIs (N = 14, age: 17–47 months) and in an age-matched group of normal-hearing (NH) children (N = 14, age: 22–48 months). The results show that the response to speech was affected by the presence of background noise only in children with CIs, not in NH children. Conversely, the presence of background noise altered the pupil response to music only in NH children. We conclude that, whereas speech and music may receive comparable attention in comparable listening conditions, background noise affects attention to speech and speech processing more in young children with CIs than in NH children. Potential implications of the results for rehabilitation procedures are discussed.
Affiliation(s)
- Amanda Saksida
- Institute for Maternal and Child Health, IRCCS “Burlo Garofolo”, 34137 Trieste, Italy
- Sara Ghiselli
- Ospedale Guglielmo da Saliceto, 29121 Piacenza, Italy
- Lorenzo Picinali
- Dyson School of Design Engineering, Imperial College London, London SW7 2DB, UK
- Sara Pintonello
- Institute for Maternal and Child Health, IRCCS “Burlo Garofolo”, 34137 Trieste, Italy
- Saba Battelino
- Faculty of Medicine, University of Ljubljana, University Medical Centre Ljubljana, SI-1000 Ljubljana, Slovenia
- Eva Orzan
- Institute for Maternal and Child Health, IRCCS “Burlo Garofolo”, 34137 Trieste, Italy
59
Corcoran AW, Perera R, Koroma M, Kouider S, Hohwy J, Andrillon T. Expectations boost the reconstruction of auditory features from electrophysiological responses to noisy speech. Cereb Cortex 2022; 33:691-708. [PMID: 35253871 PMCID: PMC9890472 DOI: 10.1093/cercor/bhac094]
Abstract
Online speech processing imposes significant computational demands on the listening brain, the underlying mechanisms of which remain poorly understood. Here, we exploit the perceptual "pop-out" phenomenon (i.e. the dramatic improvement of speech intelligibility after receiving information about speech content) to investigate the neurophysiological effects of prior expectations on degraded speech comprehension. We recorded electroencephalography (EEG) and pupillometry from 21 adults while they rated the clarity of noise-vocoded and sine-wave synthesized sentences. Pop-out was reliably elicited following visual presentation of the corresponding written sentence, but not following incongruent or neutral text. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that improvements in perceptual clarity were mediated via top-down signals that enhanced the quality of cortical speech representations. Spectral analysis further revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta-band power, alpha-band power, and pupil diameter were all increased following the provision of any written sentence information, irrespective of content. Together, these findings reveal distinctive profiles of neurophysiological activity that differentiate the content-specific processes associated with degraded speech comprehension from the context-specific processes invoked under adverse listening conditions.
Affiliation(s)
- Andrew W Corcoran
- Corresponding author: Room E672, 20 Chancellors Walk, Clayton, VIC 3800, Australia
- Ricardo Perera
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia
- Matthieu Koroma
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d’Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Sid Kouider
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d’Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Jakob Hohwy
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
- Thomas Andrillon
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
- Paris Brain Institute, Sorbonne Université, Inserm-CNRS, Paris 75013, France
60
Abdel-Latif KHA, Meister H. Speech Recognition and Listening Effort in Cochlear Implant Recipients and Normal-Hearing Listeners. Front Neurosci 2022; 15:725412. [PMID: 35221883 PMCID: PMC8867819 DOI: 10.3389/fnins.2021.725412]
Abstract
The outcome of cochlear implantation is typically assessed by speech recognition tests in quiet and in noise. Many cochlear implant recipients show satisfactory speech recognition, especially in quiet situations. However, since cochlear implants provide only limited spectro-temporal cues, the effort associated with understanding speech might be increased. In this respect, measures of listening effort could give important additional information regarding the outcome of cochlear implantation. To shed light on this topic and to gain knowledge for clinical applications, we compared speech recognition and listening effort in cochlear implant (CI) recipients and age-matched normal-hearing (NH) listeners while considering potential influential factors, such as cognitive abilities. Importantly, we estimated speech recognition functions for both listener groups and compared listening effort at similar performance levels. To this end, a subjective listening effort test (adaptive scaling, “ACALES”) as well as an objective test (dual-task paradigm) were applied and compared. Regarding speech recognition, CI users needed an about 4 dB better signal-to-noise ratio (SNR) to reach the same 50% performance level as NH listeners, and an even 5 dB better SNR to reach 80% speech recognition, revealing shallower psychometric functions in the CI listeners. However, when targeting a fixed speech intelligibility of 50 and 80%, respectively, CI users and NH listeners did not differ significantly in terms of listening effort. This applied to both the subjective and the objective estimation. Outcomes for subjective and objective listening effort were not correlated with each other, nor with the age or cognitive abilities of the listeners. This study did not give evidence that CI users and NH listeners differ in terms of listening effort, at least when the same performance level is considered. In contrast, both listener groups showed large inter-individual differences in effort determined with the subjective scaling and the objective dual-task. Potential clinical implications of how to assess listening effort as an outcome measure for hearing rehabilitation are discussed.
61
Dingemanse G, Goedegebure A. Listening Effort in Cochlear Implant Users: The Effect of Speech Intelligibility, Noise Reduction Processing, and Working Memory Capacity on the Pupil Dilation Response. J Speech Lang Hear Res 2022; 65:392-404. [PMID: 34898265 DOI: 10.1044/2021_jslhr-21-00230]
Abstract
PURPOSE This study aimed to evaluate the effect of speech recognition performance, working memory capacity (WMC), and a noise reduction algorithm (NRA) on listening effort as measured with pupillometry in cochlear implant (CI) users while listening to speech in noise. METHOD Speech recognition and pupil responses (peak dilation, peak latency, and release of dilation) were measured during a speech recognition task at three speech-to-noise ratios (SNRs) with an NRA in both on and off conditions. WMC was measured with a reading span task. Twenty experienced CI users participated in this study. RESULTS With increasing SNR and speech recognition performance, (a) the peak pupil dilation decreased by only a small amount, (b) the peak latency decreased, and (c) the release of dilation after the sentences increased. The NRA had no effect on speech recognition in noise or on the peak or latency values of the pupil response but caused less release of dilation after the end of the sentences. A lower reading span score was associated with higher peak pupil dilation but was not associated with peak latency, release of dilation, or speech recognition in noise. CONCLUSIONS In CI users, speech perception is effortful, even at higher speech recognition scores and high SNRs, indicating that CI users are in a chronic state of increased effort in communication situations. The application of a clinically used NRA did not improve speech perception, nor did it reduce listening effort. Participants with a relatively low WMC exerted relatively more listening effort but did not have better speech reception thresholds in noise.
Affiliation(s)
- Gertjan Dingemanse
- Department of Otorhinolaryngology, Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
- André Goedegebure
- Department of Otorhinolaryngology, Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
62
Perea Pérez F, Hartley DEH, Kitterick PT, Wiggins IM. Perceived Listening Difficulties of Adult Cochlear-Implant Users Under Measures Introduced to Combat the Spread of COVID-19. Trends Hear 2022; 26:23312165221087011. [PMID: 35440245 PMCID: PMC9024163 DOI: 10.1177/23312165221087011]
Abstract
Following the outbreak of the COVID-19 pandemic, public-health measures introduced to stem the spread of the disease caused profound changes to patterns of daily-life communication. This paper presents the results of an online survey conducted to document adult cochlear-implant (CI) users’ perceived listening difficulties under four communication scenarios commonly experienced during the pandemic, specifically when talking: with someone wearing a facemask, under social/physical distancing guidelines, via telephone, and via video call. Results from ninety-four respondents indicated that people considered their in-person listening experiences in some common everyday scenarios to have been significantly worsened by the introduction of mask-wearing and physical distancing. Participants reported experiencing an array of listening difficulties, including reduced speech intelligibility and increased listening effort, which resulted in many people actively avoiding certain communication scenarios at least some of the time. Participants also found listening effortful during remote communication, which became rapidly more prevalent following the outbreak of the pandemic. Potential solutions identified by participants to ease the burden of everyday listening with a CI may have applicability beyond the context of the COVID-19 pandemic. Specifically, the results emphasized the importance of visual cues, including lipreading and live speech-to-text transcriptions, to improve in-person and remote communication for people with a CI.
Affiliation(s)
- Francisca Perea Pérez
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
| | - Douglas E H Hartley
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; Nottingham University Hospitals NHS Trust, Nottingham, UK
| | - Pádraig T Kitterick
- Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK; National Acoustic Laboratories, Sydney, Australia
| | - Ian M Wiggins
- National Institute for Health Research (NIHR) Nottingham Biomedical Research Centre, Nottingham, UK; Hearing Sciences, Division of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
| |
|
63
|
Grenzebach J, Romanus E. Quantifying the Effect of Noise on Cognitive Processes: A Review of Psychophysiological Correlates of Workload. Noise Health 2022; 24:199-214. [PMID: 36537445 PMCID: PMC10088430 DOI: 10.4103/nah.nah_34_22] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/23/2022] Open
Abstract
Noise is present in most work environments, including emissions from machines and devices, irrelevant speech from colleagues, and traffic noise. Although it is generally accepted that noise below the permissible exposure limits does not pose a considerable risk for auditory effects such as hearing impairment, noise can still have a direct adverse effect on cognitive performance (non-auditory effects such as workload or stress). Under certain circumstances, the observable performance for a task carried out in silence compared to noisy surroundings may not differ. One possible explanation for this phenomenon needs further investigation: individuals may invest additional cognitive resources to overcome the distraction from irrelevant auditory stimulation. Recent developments in measurements of psychophysiological correlates and analysis methods of load-related parameters can shed light on this complex interaction. These objective measurements complement subjective self-reports of perceived effort by quantifying unnoticed noise-related cognitive workload. In this review, literature databases were searched for peer-reviewed journal articles that deal with an at least partially irrelevant "auditory stimulation" during an ongoing "cognitive task" that is accompanied by "psychophysiological correlates" to quantify the "momentary workload." The spectrum of assessed types of "auditory stimulation" extended from speech stimuli (varying intelligibility) and oddball sounds (repeating short tone sequences) to auditory stressors (white noise, task-irrelevant real-life sounds). The type of "auditory stimulation" was either related (speech stimuli) or unrelated (oddball, auditory stressor) to the type of primary "cognitive task." The types of "cognitive tasks" include speech-related tasks, fundamental psychological assessment tasks, and real-world/simulated tasks.
The "psychophysiological correlates" include pupillometry and eye-tracking, recordings of brain activity (hemodynamic, potentials), cardiovascular markers, skin conductance, endocrinological markers, and behavioral markers. The prevention of negative effects on health by unexpected stressful soundscapes during mental work starts with the continuous estimation of cognitive workload triggered by auditory noise. This review gives a comprehensive overview of methods that were tested for their sensitivity as markers of workload in various auditory settings during cognitive processing.
|
64
|
Saksida A, Ghiselli S, Bembich S, Scorpecci A, Giannantonio S, Resca A, Marsella P, Orzan E. Interdisciplinary Approaches to the Study of Listening Effort in Young Children with Cochlear Implants. Audiol Res 2021; 12:1-9. [PMID: 35076472 PMCID: PMC8788282 DOI: 10.3390/audiolres12010001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 11/23/2021] [Accepted: 12/09/2021] [Indexed: 11/29/2022] Open
Abstract
Very early bilateral implantation is thought to significantly reduce the attentional effort required to acquire spoken language and, consequently, to offer a profound improvement in quality of life. Despite the early intervention, however, auditory and communicative outcomes in children with cochlear implants remain poorer than in hearing children. The distorted auditory input via the cochlear implants requires more auditory attention, resulting in increased listening effort and fatigue. Listening effort and fatigue may critically affect attention to speech, and in turn language processing, which may help to explain the variation in language and communication abilities. However, measuring attention to speech and listening effort is demanding in infants and very young children. This paper presents three objective techniques for measuring listening effort that may address the challenges of testing very young and/or uncooperative children with cochlear implants: pupillometry, electroencephalography, and functional near-infrared spectroscopy. We review studies of listening effort that used these techniques in paediatric populations with hearing loss and discuss potential benefits of the systematic evaluation of listening effort in these populations.
Affiliation(s)
- Amanda Saksida
- Institute for Maternal and Child Health—IRCCS “Burlo Garofolo”, 34100 Trieste, Italy; (A.S.); (S.B.)
| | - Sara Ghiselli
- “Guglielmo da Saliceto” Hospital of Piacenza, 29121 Piacenza, Italy;
| | - Stefano Bembich
- Institute for Maternal and Child Health—IRCCS “Burlo Garofolo”, 34100 Trieste, Italy; (A.S.); (S.B.)
| | - Alessandro Scorpecci
- Ospedale Pediatrico Bambino Gesù, 00165 Roma, Italy; (A.S.); (S.G.); (A.R.); (P.M.)
| | - Sara Giannantonio
- Ospedale Pediatrico Bambino Gesù, 00165 Roma, Italy; (A.S.); (S.G.); (A.R.); (P.M.)
| | - Alessandra Resca
- Ospedale Pediatrico Bambino Gesù, 00165 Roma, Italy; (A.S.); (S.G.); (A.R.); (P.M.)
| | - Pasquale Marsella
- Ospedale Pediatrico Bambino Gesù, 00165 Roma, Italy; (A.S.); (S.G.); (A.R.); (P.M.)
| | - Eva Orzan
- Institute for Maternal and Child Health—IRCCS “Burlo Garofolo”, 34100 Trieste, Italy; (A.S.); (S.B.)
- Correspondence:
| |
|
65
|
Amichetti NM, Neukam J, Kinney AJ, Capach N, March SU, Svirsky MA, Wingfield A. Adults with cochlear implants can use prosody to determine the clausal structure of spoken sentences. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:4315. [PMID: 34972310 PMCID: PMC8674009 DOI: 10.1121/10.0008899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/24/2021] [Revised: 11/04/2021] [Accepted: 11/08/2021] [Indexed: 06/14/2023]
Abstract
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence and this, in turn, can help listeners determine the meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users may have difficulty using sentence prosody to detect syntactic clause boundaries within sentences or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and reduced processing effort as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to recall accuracy as well as processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and in terms of processing effort.
Affiliation(s)
- Nicole M Amichetti
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
| | - Jonathan Neukam
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
| | - Alexander J Kinney
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
| | - Nicole Capach
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
| | - Samantha U March
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
| | - Mario A Svirsky
- Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
| | - Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
| |
|
66
|
Shen J. Pupillary response to dynamic pitch alteration during speech perception in noise. JASA EXPRESS LETTERS 2021; 1:115202. [PMID: 34778875 PMCID: PMC8574131 DOI: 10.1121/10.0007056] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Accepted: 10/12/2021] [Indexed: 06/13/2023]
Abstract
Dynamic pitch, also known as intonation, conveys both semantic and pragmatic meaning in speech communication. While alteration of this cue is detrimental to speech intelligibility in noise, the mechanism involved is poorly understood. Using the psychophysiological measure of task-evoked pupillary response, this study examined the perceptual effect of altered dynamic pitch cues on speech perception in noise. The data showed that pupil dilation increased with dynamic pitch strength in a sentence recognition in noise task. Taken together with recognition accuracy data, the results suggest the involvement of perceptual arousal in speech perception with dynamic pitch alteration.
Affiliation(s)
- Jing Shen
- Department of Communication Sciences and Disorders, Temple University, 1701 North 13th Street, Philadelphia, Pennsylvania 19122, USA
| |
|
67
|
Keerstock S, Smiljanic R. Reading aloud in clear speech reduces sentence recognition memory and recall for native and non-native talkers. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:3387. [PMID: 34852619 DOI: 10.1121/10.0006732] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Accepted: 09/23/2021] [Indexed: 06/13/2023]
Abstract
Speaking style variation plays a role in how listeners remember speech. Compared to conversational sentences, clearly spoken sentences were better recalled and identified as previously heard by native and non-native listeners. The present study investigated whether speaking style variation also plays a role in how talkers remember speech that they produce. Although distinctive forms of production (e.g., singing, speaking loudly) can enhance memory, the cognitive and articulatory efforts required to plan and produce listener-oriented hyper-articulated clear speech could detrimentally affect encoding and subsequent retrieval. Native and non-native English talkers' memories for sentences that they read aloud in clear and conversational speaking styles were assessed through a sentence recognition memory task (experiment 1; N = 90) and a recall task (experiment 2; N = 75). The results showed enhanced recognition memory and recall for sentences read aloud conversationally rather than clearly for both talker groups. In line with the "effortfulness" hypothesis, producing clear speech may increase the processing load diverting resources from memory encoding. Implications for the relationship between speech perception and production are discussed.
Affiliation(s)
- Sandie Keerstock
- Department of Psychological Sciences, University of Missouri, 124 Psychology Building, 200 South 7th Street, Columbia, Missouri 65211, USA
| | - Rajka Smiljanic
- Department of Linguistics, University of Texas at Austin, 305 East 23rd Street STOP B5100, Austin, Texas 78712, USA
| |
|
68
|
McHaney JR, Tessmer R, Roark CL, Chandrasekaran B. Working memory relates to individual differences in speech category learning: Insights from computational modeling and pupillometry. BRAIN AND LANGUAGE 2021; 222:105010. [PMID: 34454285 DOI: 10.1016/j.bandl.2021.105010] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/08/2021] [Revised: 07/26/2021] [Accepted: 08/10/2021] [Indexed: 05/27/2023]
Abstract
Across two experiments, we examine the relationship between individual differences in working memory (WM) and the acquisition of non-native speech categories in adulthood. While WM is associated with individual differences in a variety of learning tasks, successful acquisition of speech categories is argued to be contingent on WM-independent procedural-learning mechanisms. Thus, the role of WM in speech category learning is unclear. In Experiment 1, we show that individuals with higher WM acquire non-native speech categories faster and to a greater extent than those with lower WM. In Experiment 2, we replicate these results and show that individuals with higher WM use more optimal, procedural-based learning strategies and demonstrate more distinct speech-evoked pupillary responses for correct relative to incorrect trials. We propose that higher WM may allow for greater stimulus-related attention, resulting in more robust representations and optimal learning strategies. We discuss implications for neurobiological models of speech category learning.
Affiliation(s)
- Jacie R McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, United States
| | - Rachel Tessmer
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, United States
| | - Casey L Roark
- Department of Communication Science and Disorders, University of Pittsburgh, United States; Center for the Neural Basis of Cognition, Pittsburgh, PA, United States
| | - Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, United States.
| |
|
69
|
Effect of Noise Reduction on Cortical Speech-in-Noise Processing and Its Variance due to Individual Noise Tolerance. Ear Hear 2021; 43:849-861. [PMID: 34751679 PMCID: PMC9010348 DOI: 10.1097/aud.0000000000001144] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Despite the widespread use of noise reduction (NR) in modern digital hearing aids, our neurophysiological understanding of how NR affects speech-in-noise perception and why its effect is variable is limited. The current study aimed to (1) characterize the effect of NR on the neural processing of target speech and (2) seek neural determinants of individual differences in the NR effect on speech-in-noise performance, hypothesizing that an individual's own capability to inhibit background noise would inversely predict NR benefits in speech-in-noise perception. DESIGN Thirty-six adult listeners with normal hearing participated in the study. Behavioral and electroencephalographic responses were simultaneously obtained during a speech-in-noise task in which natural monosyllabic words were presented at three different signal-to-noise ratios, each with NR off and on. A within-subject analysis assessed the effect of NR on cortical evoked responses to target speech in the temporal-frontal speech and language brain regions, including supramarginal gyrus and inferior frontal gyrus in the left hemisphere. In addition, an across-subject analysis related an individual's tolerance to noise, measured as the amplitude ratio of auditory-cortical responses to target speech and background noise, to their speech-in-noise performance. RESULTS At the group level, in the poorest signal-to-noise ratio condition, NR significantly increased early supramarginal gyrus activity and decreased late inferior frontal gyrus activity, indicating a switch to more immediate lexical access and less effortful cognitive processing, although no improvement in behavioral performance was found. The across-subject analysis revealed that the cortical index of individual noise tolerance significantly correlated with NR-driven changes in speech-in-noise performance. CONCLUSIONS NR can facilitate speech-in-noise processing despite no improvement in behavioral performance. 
Findings from the current study also indicate that people with lower noise tolerance are more likely to benefit from NR. Overall, the results suggest that future research should take a mechanistic approach to NR outcomes and individual noise tolerance.
|
70
|
Visentin C, Valzolgher C, Pellegatti M, Potente P, Pavani F, Prodi N. A comparison of simultaneously-obtained measures of listening effort: pupil dilation, verbal response time and self-rating. Int J Audiol 2021; 61:561-573. [PMID: 34634214 DOI: 10.1080/14992027.2021.1921290] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/03/2023]
Abstract
OBJECTIVE The aim of this study was to assess to what extent simultaneously obtained measures of listening effort (task-evoked pupil dilation, verbal response time [RT], and self-rating) could be sensitive to auditory and cognitive manipulations in a speech perception task. The study also aimed to explore the possible relationship between RT and pupil dilation. DESIGN A within-group design was adopted. All participants were administered the Matrix Sentence Test in 12 conditions (signal-to-noise ratios [SNRs] of -3, -6, and -9 dB; attentional resources focussed vs divided; spatial priors present vs absent). STUDY SAMPLE Twenty-four normal-hearing adults, 20-41 years old (M = 23.5), were recruited for the study. RESULTS A significant effect of SNR was found for all measures. However, pupil dilation discriminated only partially between the SNRs. Neither of the cognitive manipulations was effective in modulating the measures. No relationship emerged between pupil dilation, RT, and self-ratings. CONCLUSIONS RT, pupil dilation, and self-ratings can be obtained simultaneously when administering speech perception tasks, even though some limitations remain related to the absence of a retention period after the listening phase. The sensitivity of the three measures to changes in the auditory environment differs: RTs and self-ratings proved most sensitive to changes in SNR.
Affiliation(s)
- Chiara Visentin
- Department of Engineering, University of Ferrara, Ferrara, Italy
| | - Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Centre de Recherche en Neuroscience de Lyon (CRNL), Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon, France
| | | | - Paola Potente
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy
| | - Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Trento, Italy; Centre de Recherche en Neuroscience de Lyon (CRNL), Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon, France; Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
| | - Nicola Prodi
- Department of Engineering, University of Ferrara, Ferrara, Italy
| |
|
71
|
Lopez EM, Dillon MT, Park LR, Rooth MA, Richter ME, Thompson NJ, O'Connell BP, Pillsbury HC, Brown KD. Influence of Cochlear Implant Use on Perceived Listening Effort in Adult and Pediatric Cases of Unilateral and Asymmetric Hearing Loss. Otol Neurotol 2021; 42:e1234-e1241. [PMID: 34224547 PMCID: PMC8448920 DOI: 10.1097/mao.0000000000003261] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVE Assess the influence of cochlear implant (CI) use on the perceived listening effort of adult and pediatric subjects with unilateral hearing loss (UHL) or asymmetric hearing loss (AHL). STUDY DESIGN Prospective cohort. SETTING Tertiary referral center. PATIENTS Adults and children with UHL or AHL. INTERVENTION Cochlear implantation. Subjects received their CI as part of a clinical trial assessing the effectiveness of cochlear implantation in cases of UHL and AHL. MAIN OUTCOME MEASURES Responses to the Listening Effort pragmatic subscale on the Speech, Spatial, and Qualities of Hearing Scale (SSQ) or SSQ for Children with Impaired Hearing (SSQ-C) were compared over the study period. Subjects or their parents completed the questionnaires preoperatively and at predetermined postactivation intervals. For the adult subjects, responses were compared to word recognition in quiet and sentence recognition in noise. RESULTS Forty adult subjects (n = 20 UHL, n = 20 AHL) and 16 pediatric subjects with UHL enrolled and underwent cochlear implantation. Subjects in all three groups reported a significant reduction in perceived listening effort within the initial months of CI use (p < 0.001; η2 ≥ 0.351). The perceived benefit was significantly correlated with speech recognition in noise for the adult subjects with UHL at the 12-month interval (r(20) = .59, p = 0.006). CONCLUSIONS Adult and pediatric CI recipients with UHL or AHL report a reduction in listening effort with CI use as compared to their preoperative experiences. Use of the SSQ and SSQ-C Listening Effort pragmatic subscale may provide additional information about a CI recipient's experience beyond the abilities measured in the sound booth.
Affiliation(s)
- Erin M Lopez
- Department of Otolaryngology/Head & Neck Surgery
| | | | - Lisa R Park
- Department of Otolaryngology/Head & Neck Surgery
| | | | - Margaret E Richter
- Division of Speech and Hearing Sciences, Department of Allied Health Sciences, School of Medicine, University of North Carolina at Chapel Hill, North Carolina
| | | | | | | | | |
|
72
|
Lim SJ, Carter YD, Njoroge JM, Shinn-Cunningham BG, Perrachione TK. Talker discontinuity disrupts attention to speech: Evidence from EEG and pupillometry. BRAIN AND LANGUAGE 2021; 221:104996. [PMID: 34358924 PMCID: PMC8515637 DOI: 10.1016/j.bandl.2021.104996] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 07/11/2021] [Accepted: 07/13/2021] [Indexed: 05/13/2023]
Abstract
Speech is processed less efficiently from discontinuous, mixed talkers than from one consistent talker, but little is known about the neural mechanisms for processing talker variability. Here, we measured psychophysiological responses to talker variability using electroencephalography (EEG) and pupillometry while listeners performed a delayed-recall digit span task. Listeners heard and recalled seven-digit sequences with both talker (single- vs. mixed-talker digits) and temporal (0- vs. 500-ms inter-digit intervals) discontinuities. Talker discontinuity reduced serial recall accuracy. Both talker and temporal discontinuities elicited P3a-like neural evoked responses, while rapid processing of mixed-talkers' speech led to increased phasic pupil dilation. Furthermore, mixed-talkers' speech produced less alpha oscillatory power during working memory maintenance, but not during speech encoding. Overall, these results are consistent with an auditory attention and streaming framework in which talker discontinuity leads to involuntary, stimulus-driven attentional reorientation to novel speech sources, resulting in the processing interference classically associated with talker variability.
Affiliation(s)
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, United States.
| | - Yaminah D Carter
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
| | - J Michelle Njoroge
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
| | | | - Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, United States.
| |
|
73
|
Patro C, Kreft HA, Wojtczak M. The search for correlates of age-related cochlear synaptopathy: Measures of temporal envelope processing and spatial release from speech-on-speech masking. Hear Res 2021; 409:108333. [PMID: 34425347 PMCID: PMC8424701 DOI: 10.1016/j.heares.2021.108333] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 07/17/2021] [Accepted: 08/04/2021] [Indexed: 01/13/2023]
Abstract
Older adults often experience difficulties understanding speech in adverse listening conditions. It has been suggested that for listeners with normal and near-normal audiograms, these difficulties may, at least in part, arise from age-related cochlear synaptopathy. The aim of this study was to assess whether performance on auditory tasks relying on temporal envelope processing reveals age-related deficits consistent with those expected from cochlear synaptopathy. Listeners aged 20 to 66 years were tested on a series of psychophysical, electrophysiological, and speech-perception measures with stimulus configurations that promote coding by medium- and low-spontaneous-rate auditory-nerve fibers. Cognitive measures of executive function were obtained to control for age-related cognitive decline. Results from the different tests were not significantly correlated with each other despite a presumed reliance on common mechanisms involved in temporal envelope processing. Only gap-detection thresholds for a tone in noise and spatial release from speech-on-speech masking were significantly correlated with age. Increasing age was related to impaired cognitive executive function. Multivariate regression analyses showed that individual differences in hearing sensitivity, envelope-based measures, and scores from nonauditory cognitive tests did not significantly contribute to the variability in spatial release from speech-on-speech masking for small target/masker spatial separations, while age was a significant contributor.
Affiliation(s)
- Chhayakanta Patro
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA.
| | - Heather A Kreft
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
| | - Magdalena Wojtczak
- Department of Psychology, University of Minnesota, N640 Elliott Hall, 75 East River Parkway, Minneapolis, MN 55455, USA
| |
|
74
|
Colby S, McMurray B. Cognitive and Physiological Measures of Listening Effort During Degraded Speech Perception: Relating Dual-Task and Pupillometry Paradigms. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:3627-3652. [PMID: 34491779 PMCID: PMC8642090 DOI: 10.1044/2021_jslhr-20-00583] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/01/2020] [Revised: 04/01/2021] [Accepted: 05/21/2021] [Indexed: 06/13/2023]
Abstract
Purpose Listening effort is quickly becoming an important metric for assessing speech perception in less-than-ideal situations. However, the relationship between the construct of listening effort and the measures used to assess it remains unclear. We compared two measures of listening effort: a cognitive dual task and a physiological pupillometry task. We sought to investigate the relationship between these measures of effort and whether engaging effort impacts speech accuracy. Method In Experiment 1, 30 participants completed a dual task and a pupillometry task that were carefully matched in stimuli and design. The dual task consisted of a spoken word recognition task and a visual match-to-sample task. In the pupillometry task, pupil size was monitored while participants completed a spoken word recognition task. Both tasks presented words at three levels of listening difficulty (unmodified, eight-channel vocoding, and four-channel vocoding) and provided response feedback on every trial. We refined the pupillometry task in Experiment 2 (n = 31); crucially, participants no longer received response feedback. Finally, we ran a new group of subjects on both tasks in Experiment 3 (n = 30). Results In Experiment 1, accuracy in the visual task decreased with increased signal degradation in the dual task, but pupil size was sensitive to accuracy and not vocoding condition. After removing feedback in Experiment 2, changes in pupil size were predicted by listening condition, suggesting the task was now sensitive to engaged effort. Both tasks were sensitive to listening difficulty in Experiment 3, but there was no relationship between the tasks and neither task predicted speech accuracy. Conclusions Consistent with previous work, we found little evidence for a relationship between different measures of listening effort. We also found no evidence that effort predicts speech accuracy, suggesting that engaging more effort does not lead to improved speech recognition. 
Cognitive and physiological measures of listening effort are likely sensitive to different aspects of the construct of listening effort. Supplemental Material https://doi.org/10.23641/asha.16455900.
Affiliation(s)
- Sarah Colby
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City
| | - Bob McMurray
- Department of Psychological and Brain Sciences, The University of Iowa, Iowa City
| |
|
75
|
Morett LM, Roche JM, Fraundorf SH, McPartland JC. Contrast Is in the Eye of the Beholder: Infelicitous Beat Gesture Increases Cognitive Load During Online Spoken Discourse Comprehension. Cogn Sci 2021; 44:e12912. [PMID: 33073404 DOI: 10.1111/cogs.12912] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2019] [Revised: 05/15/2020] [Accepted: 09/02/2020] [Indexed: 11/30/2022]
Abstract
We investigated how two cues to contrast, beat gesture and contrastive pitch accenting, affect comprehenders' cognitive load during processing of spoken referring expressions. In two visual-world experiments, we orthogonally manipulated the presence of these cues and their felicity, or fit, with the local (sentence-level) referential context in critical referring expressions while comprehenders' task-evoked pupillary responses (TEPRs) were examined. In Experiment 1, beat gesture and contrastive accenting always matched the referential context of filler referring expressions and were therefore relatively felicitous on the global (experiment) level, whereas in Experiment 2, beat gesture and contrastive accenting never fit the referential context of filler referring expressions and were therefore infelicitous on the global level. The results revealed that both beat gesture and contrastive accenting increased comprehenders' cognitive load. For beat gesture, this increase in cognitive load was driven by both local and global infelicity. For contrastive accenting, this increase in cognitive load was unaffected when cues were globally felicitous but exacerbated when cues were globally infelicitous. Together, these results suggest that comprehenders' cognitive resources are taxed by processing infelicitous use of beat gesture and contrastive accenting to convey contrast on both the local and global levels.
Collapse
Affiliation(s)
- Laura M Morett
- Department of Educational Studies in Psychology, Research Methodology, and Counseling, University of Alabama
| | - Jennifer M Roche
- Department of Speech Pathology and Audiology, Kent State University
| | - Scott H Fraundorf
- Department of Psychology, Learning Research and Development Center, University of Pittsburgh
| | | |
Collapse
|
76
|
Pupillometry reveals cognitive demands of lexical competition during spoken word recognition in young and older adults. Psychon Bull Rev 2021; 29:268-280. [PMID: 34405386 DOI: 10.3758/s13423-021-01991-0] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/27/2021] [Indexed: 12/27/2022]
Abstract
In most contemporary activation-competition frameworks for spoken word recognition, candidate words compete against phonological "neighbors" with similar acoustic properties (e.g., "cap" vs. "cat"). Thus, recognizing words with more competitors should come at a greater cognitive cost relative to recognizing words with fewer competitors, due to increased demands for selecting the correct item and inhibiting incorrect candidates. Importantly, these processes should operate even in the absence of differences in accuracy. In the present study, we tested this proposal by examining differences in processing costs associated with neighborhood density for highly intelligible items presented in quiet. A second goal was to examine whether the cognitive demands associated with increased neighborhood density were greater for older adults compared with young adults. Using pupillometry as an index of cognitive processing load, we compared the cognitive demands associated with spoken word recognition for words with many or fewer neighbors, presented in quiet, for young (n = 67) and older (n = 69) adult listeners. Growth curve analysis of the pupil data indicated that older adults showed a greater evoked pupil response for spoken words than did young adults, consistent with increased cognitive load during spoken word recognition. Words from dense neighborhoods were marginally more demanding to process than words from sparse neighborhoods. There was also an interaction between age and neighborhood density, indicating larger effects of density in young adult listeners. These results highlight the importance of assessing both cognitive demands and accuracy when investigating the mechanisms underlying spoken word recognition.
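The activation-competition account above rests on counting phonological "neighbors": words differing from a target by one phoneme substitution, insertion, or deletion. A minimal sketch of that count, using character strings as a stand-in for phoneme transcriptions and a toy lexicon (illustrative only, not the study's stimuli):

```python
def is_neighbor(a, b):
    """True if b differs from a by exactly one substitution, insertion, or deletion."""
    if a == b:
        return False
    la, lb = len(a), len(b)
    if abs(la - lb) > 1:
        return False
    if la == lb:
        # Same length: neighbor iff exactly one position differs.
        return sum(x != y for x, y in zip(a, b)) == 1
    # Lengths differ by one: neighbor iff deleting one symbol from the
    # longer string yields the shorter string.
    longer, shorter = (a, b) if la > lb else (b, a)
    return any(longer[:i] + longer[i + 1:] == shorter
               for i in range(len(longer)))

def density(word, lexicon):
    """Neighborhood density: number of phonological neighbors of `word`."""
    return sum(is_neighbor(word, w) for w in lexicon)
```

For example, with the toy lexicon `["kap", "mat", "kit", "at", "skat", "dog"]`, `density("kat", ...)` counts five neighbors (three substitutions, one deletion, one insertion). Real studies compute this over phoneme-coded lexical databases rather than orthography.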
Collapse
|
77
|
How Do We Allocate Our Resources When Listening and Memorizing Speech in Noise? A Pupillometry Study. Ear Hear 2021; 42:846-859. [PMID: 33492008 DOI: 10.1097/aud.0000000000001002] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
Abstract
OBJECTIVES Actively following a conversation can be demanding, and limited cognitive resources must be allocated to the processing of speech, retaining and encoding the perceived content, and preparing an answer. The aim of the present study was to disentangle the allocation of effort into the effort required for listening (listening effort) and the effort required for retention (memory effort) by means of pupil dilation. DESIGN Twenty-five normal-hearing, German-speaking participants underwent a sentence-final word identification and recall test while pupillometry was conducted. The participants' task was to listen to a sentence in four-talker babble background noise and to repeat the final word afterward. At the end of a list of sentences, they were asked to recall as many of the final words as possible. Pupil dilation was recorded during different list lengths (three sentences versus six sentences) and varying memory load (recall versus no recall). Additionally, the effect of a noise reduction algorithm on performance, listening effort, and memory effort was evaluated. RESULTS We analyzed pupil dilation both before each sentence (sentence baseline) and in response to each sentence relative to the sentence baseline (sentence dilation). The pupillometry data indicated a steeper increase of sentence baseline under recall compared with no recall, suggesting higher memory effort due to memory processing. This increase in sentence baseline was most prominent toward the end of the longer lists, that is, during the second half of six sentences. Without a recall task, sentence baseline declined over the course of the list. Noise reduction appeared to have a significant influence on effort allocation for listening, which was reflected in generally decreased sentence dilation. CONCLUSION Our results showed that recording pupil dilation in a speech identification and recall task provides valuable insights beyond behavioral performance; it is a suitable tool to disentangle the allocation of effort to listening versus memorizing speech.
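The two pupil measures described here, a pre-sentence baseline and a baseline-relative sentence dilation, amount to a simple split of each trial's trace. A minimal sketch; the array layout and window lengths are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

def sentence_measures(pupil, onset_idx, baseline_len, response_len):
    """Split one trial's pupil trace into the two measures.

    pupil        -- 1-D array of pupil-size samples for one trial
    onset_idx    -- sample index of sentence onset
    baseline_len -- number of samples before onset averaged as baseline
    response_len -- number of samples after onset to analyze
    """
    # "Sentence baseline": mean pupil size just before the sentence.
    baseline = pupil[onset_idx - baseline_len:onset_idx].mean()
    # "Sentence dilation": response relative to that baseline.
    response = pupil[onset_idx:onset_idx + response_len]
    dilation = response - baseline
    return baseline, dilation
```

In a design like the one above, the baseline values are then compared across list positions and recall conditions, while the dilation curves index effort evoked by each sentence.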
Collapse
|
78
|
Abstract
Listening effort is a valuable and important notion to measure because it is among the primary complaints of people with hearing loss. It is tempting and intuitive to accept speech intelligibility scores as a proxy for listening effort, but this link is likely oversimplified and lacks actionable explanatory power. This study was conducted to explain the mechanisms of listening effort that are not captured by intelligibility scores, using sentence-repetition tasks where specific kinds of mistakes were prospectively planned or analyzed retrospectively. Effort was measured as changes in pupil size among 20 listeners with normal hearing and 19 listeners with cochlear implants. Experiment 1 demonstrates that mental correction of misperceived words increases effort even when responses are correct. Experiment 2 shows that for incorrect responses, listening effort is not a function of the proportion of words correct but is rather driven by the types of errors, position of errors within a sentence, and the need to resolve ambiguity, reflecting how easily the listener can make sense of a perception. A simple taxonomy of error types is provided that is both intuitive and consistent with data from these two experiments. The diversity of errors in these experiments implies that speech perception tasks can be designed prospectively to elicit the mistakes that are more closely linked with effort. Although mental corrective action and number of mistakes can scale together in many experiments, it is possible to dissociate them to advance toward a more explanatory (rather than correlational) account of listening effort.
Collapse
Affiliation(s)
- Matthew B. Winn
- Matthew B. Winn, University of Minnesota, Twin Cities, 164 Pillsbury Dr SE, Minneapolis, MN 55455, United States.
| | | |
Collapse
|
79
|
DeRoy Milvae K, Kuchinsky SE, Stakhovskaya OA, Goupell MJ. Dichotic listening performance and effort as a function of spectral resolution and interaural symmetry. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:920. [PMID: 34470337 PMCID: PMC8346288 DOI: 10.1121/10.0005653] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 06/30/2021] [Accepted: 06/30/2021] [Indexed: 06/13/2023]
Abstract
One potential benefit of bilateral cochlear implants is reduced listening effort in speech-on-speech masking situations. However, the symmetry of the input across ears, possibly related to spectral resolution, could impact binaural benefits. Fifteen young adults with normal hearing performed digit recall with target and interfering digits presented to separate ears and attention directed to the target ear. Recall accuracy and pupil size over time (used as an index of listening effort) were measured for unprocessed, 16-channel vocoded, and 4-channel vocoded digits. Recall accuracy was significantly lower for dichotic (with interfering digits) than for monotic listening. Dichotic recall accuracy was highest when the target was less degraded and the interferer was more degraded. With matched target and interferer spectral resolution, pupil dilation was lower with more degradation. Pupil dilation grew more shallowly over time when the interferer had more degradation. Overall, interferer spectral resolution more strongly affected listening effort than target spectral resolution. These results suggest that interfering speech both lowers performance and increases listening effort, and that the relative spectral resolution of target and interferer affects the listening experience. Ignoring a clearer interferer is more effortful.
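The 16- versus 4-channel vocoding manipulation above degrades spectral resolution by replacing fine spectral detail with per-band envelopes. A generic noise-vocoder sketch follows; the filter design, band edges, and envelope method are illustrative assumptions (the abstract does not specify the study's vocoder parameters):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels, f_lo=100.0, f_hi=8000.0):
    """Minimal noise vocoder: split `signal` into log-spaced bands,
    extract each band's amplitude envelope, and use it to modulate
    band-limited noise. Fewer channels -> coarser spectral resolution."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(signal))
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)      # analysis band
        env = np.abs(hilbert(band))          # amplitude envelope
        carrier = sosfiltfilt(sos, noise)    # band-limited noise carrier
        out += env * carrier
    return out
```

Summing four such bands preserves gross spectral shape but discards the within-band detail that 16 bands would retain, which is the degradation contrast exploited in the experiment.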
Collapse
Affiliation(s)
- Kristina DeRoy Milvae
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
| | - Olga A Stakhovskaya
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| |
Collapse
|
80
|
Tracking Cognitive Spare Capacity During Speech Perception With EEG/ERP: Effects of Cognitive Load and Sentence Predictability. Ear Hear 2021; 41:1144-1157. [PMID: 32282402 DOI: 10.1097/aud.0000000000000856] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/31/2022]
Abstract
OBJECTIVES Listening to speech in adverse listening conditions is effortful. Objective assessment of cognitive spare capacity during listening can serve as an index of the effort needed to understand speech. Cognitive spare capacity is influenced both by signal-driven demands posed by listening conditions and top-down demands intrinsic to spoken language processing, such as memory use and semantic processing. Previous research indicates that electrophysiological responses, particularly alpha oscillatory power, may index listening effort. However, it is not known how these indices respond to memory and semantic processing demands during spoken language processing in adverse listening conditions. The aim of the present study was twofold: first, to assess the impact of memory demands on electrophysiological responses during recognition of degraded, spoken sentences, and second, to examine whether predictable sentence contexts increase or decrease cognitive spare capacity during listening. DESIGN Cognitive demand was varied in a memory load task in which young adult participants (n = 20) viewed either low-load (one digit) or high-load (seven digits) sequences of digits, then listened to noise-vocoded spoken sentences that were either predictable or unpredictable, and then reported the final word of the sentence and the digits. Alpha oscillations in the frequency domain and event-related potentials in the time domain of the electrophysiological data were analyzed, as was behavioral accuracy for both words and digits. RESULTS Measured during sentence processing, event-related desynchronization of alpha power was greater (more negative) under high load than low load and was also greater for unpredictable than predictable sentences. A complementary pattern was observed for the P300/late positive complex (LPC) to sentence-final words, such that P300/LPC amplitude was reduced under high load compared with low load and for unpredictable compared with predictable sentences. Both words and digits were identified more quickly and accurately on trials in which spoken sentences were predictable. CONCLUSIONS Results indicate that during a sentence-recognition task, both cognitive load and sentence predictability modulate electrophysiological indices of cognitive spare capacity, namely alpha oscillatory power and P300/LPC amplitude. Both electrophysiological and behavioral results indicate that a predictive sentence context reduces cognitive demands during listening. Findings contribute to a growing literature on objective measures of cognitive demand during listening and indicate predictable sentence context as a top-down factor that can support ease of listening.
Collapse
|
81
|
Silcox JW, Payne BR. The costs (and benefits) of effortful listening on context processing: A simultaneous electrophysiology, pupillometry, and behavioral study. Cortex 2021; 142:296-316. [PMID: 34332197 DOI: 10.1016/j.cortex.2021.06.007] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2020] [Revised: 04/02/2021] [Accepted: 06/10/2021] [Indexed: 11/24/2022]
Abstract
There is an apparent disparity between the fields of cognitive audiology and cognitive electrophysiology as to how linguistic context is used when listening to perceptually challenging speech. To gain a clearer picture of how listening effort impacts context use, we conducted a pre-registered study to simultaneously examine electrophysiological, pupillometric, and behavioral responses when listening to sentences varying in contextual constraint and acoustic challenge in the same sample. Participants (N = 44) listened to sentences that were highly constraining and completed with expected or unexpected sentence-final words ("The prisoners were planning their escape/party") or were low-constraint sentences with unexpected sentence-final words ("All day she thought about the party"). Sentences were presented either in quiet or with +3 dB SNR background noise. Pupillometry and EEG were simultaneously recorded and subsequent sentence recognition and word recall were measured. While the N400 expectancy effect was diminished by noise, suggesting impaired real-time context use, we simultaneously observed a beneficial effect of constraint on subsequent recognition memory for degraded speech. Importantly, analyses of trial-to-trial coupling between pupil dilation and N400 amplitude showed that when participants showed increased listening effort (i.e., greater pupil dilation), there was a subsequent recovery of the N400 effect, but at the same time, higher effort was related to poorer subsequent sentence recognition and word recall. Collectively, these findings suggest divergent effects of acoustic challenge and listening effort on context use: while noise impairs the rapid use of context to facilitate lexical semantic processing in general, this negative effect is attenuated when listeners show increased effort in response to noise. However, this effort-induced reliance on context for online word processing comes at the cost of poorer subsequent memory.
Collapse
Affiliation(s)
| | - Brennan R Payne
- Department of Psychology, University of Utah, USA; Interdepartmental Neuroscience Program, University of Utah, USA
| |
Collapse
|
82
|
Burg EA, Thakkar T, Fields T, Misurelli SM, Kuchinsky SE, Roche J, Lee DJ, Litovsky RY. Systematic Comparison of Trial Exclusion Criteria for Pupillometry Data Analysis in Individuals With Single-Sided Deafness and Normal Hearing. Trends Hear 2021; 25:23312165211013256. [PMID: 34024219 PMCID: PMC8150669 DOI: 10.1177/23312165211013256] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
The measurement of pupil dilation has become a common way to assess listening effort. Pupillometry data are subject to artifacts, requiring highly contaminated data to be discarded from analysis. It is unknown how trial exclusion criteria impact experimental results. The present study examined the effect of a common exclusion criterion, percentage of blinks, on speech intelligibility and pupil dilation measures in 9 participants with single-sided deafness (SSD) and 20 participants with normal hearing. Participants listened to and repeated sentences in quiet or with speech maskers. Pupillometry trials were processed using three levels of blink exclusion criteria: 15%, 30%, and 45%. These percentages reflect a threshold for missing data points in a trial, where trials that exceed the threshold are excluded from analysis. Results indicated that pupil dilation was significantly greater and intelligibility was significantly lower in the masker compared with the quiet condition for both groups. Across-group comparisons revealed that speech intelligibility in the SSD group decreased significantly more than the normal hearing group from quiet to masker conditions, but the change in pupil dilation was similar for both groups. There was no effect of blink criteria on speech intelligibility or pupil dilation results for either group. However, the total percentage of blinks in the masker condition was significantly greater than in the quiet condition for the SSD group, which is consistent with previous studies that have found a relationship between blinking and task difficulty. This association should be carefully considered in future experiments using pupillometry to gauge listening effort.
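The three blink-exclusion thresholds compared above (15%, 30%, 45%) describe a simple per-trial test on the fraction of missing samples. A minimal sketch, assuming blinks are coded as NaN in each trial's pupil trace:

```python
import numpy as np

def exclude_trials(trials, threshold):
    """Keep trials whose fraction of missing samples (blinks, coded
    as NaN) does not exceed `threshold` (e.g., 0.15, 0.30, or 0.45);
    discard the rest from analysis."""
    kept = []
    for trial in trials:
        trial = np.asarray(trial, dtype=float)
        frac_missing = np.mean(np.isnan(trial))
        if frac_missing <= threshold:
            kept.append(trial)
    return kept
```

Rerunning the same analysis on the outputs of `exclude_trials` at each threshold is the kind of sensitivity check the study performed; its finding was that the choice of threshold did not change the intelligibility or pupil-dilation results.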
Collapse
Affiliation(s)
- Emily A Burg
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin, United States
| | - Tanvi Thakkar
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin, United States
| | - Taylor Fields
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin, United States
| | - Sara M Misurelli
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin, United States.,Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, School of Medicine & Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
| | - Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland, United States
| | - Joseph Roche
- Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, School of Medicine & Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
| | - Daniel J Lee
- Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, Massachusetts, United States
| | - Ruth Y Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin, United States.,Department of Surgery, Division of Otolaryngology - Head and Neck Surgery, School of Medicine & Public Health, University of Wisconsin-Madison, Madison, Wisconsin, United States
| |
Collapse
|
83
|
Shivhare YK, Sanjram PK. Less effortful auditory-motor synchronization with low-frequency tones in isochronous sound sequence. Neurosci Lett 2021; 756:135945. [PMID: 34019968 DOI: 10.1016/j.neulet.2021.135945] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2020] [Revised: 05/04/2021] [Accepted: 05/04/2021] [Indexed: 10/21/2022]
Abstract
In music-aided rehabilitation therapies such as rhythmic auditory stimulation (RAS), it is important for a subject to engage with an isochronous sound sequence for efficient auditory-motor synchronization (AMS). This engagement depends on listening effort, the amount of cognitive resources needed to comprehend and synchronize with the isochronous sound sequence; less effort leads to more engagement. Frequency of tone and inter-stimulus interval (ISI) are two main elements of a sound sequence that are likely to affect synchronization accuracy and listening effort. This study examines participants' motor responses to the tone and the listening effort involved in performing a continuous tapping task, with emphasis on how tone frequency and ISI affect synchronization error and listening effort in an isochronous sound sequence. Thirty participants (aged 18-35 years, M = 24.6 years) took part on a voluntary basis. Their finger-tapping responses and listening effort were measured; pupillary dilation was recorded using a Tobii tx-30 eye tracker in order to analyze listening effort. The results suggest that the frequency of the tone plays a crucial role in tapping performance and listening effort. In summary, this study demonstrates better temporal alignment to low-frequency tones, with less listening effort, compared with high-frequency tones.
Collapse
Affiliation(s)
- Yogesh Kumar Shivhare
- Human Factors & Applied Cognition Lab, Discipline of Biosciences and Biomedical Engineering, Indian Institute of Technology Indore, Simrol, Indore, 453552, India.
| | - Premjit Khanganba Sanjram
- Human Factors & Applied Cognition Lab, Discipline of Biosciences and Biomedical Engineering, and, Discipline of Psychology, Indian Institute of Technology Indore, Simrol, Indore, 453552, India.
| |
Collapse
|
84
|
Francis AL, Bent T, Schumaker J, Love J, Silbert N. Listener characteristics differentially affect self-reported and physiological measures of effort associated with two challenging listening conditions. Atten Percept Psychophys 2021; 83:1818-1841. [PMID: 33438149 PMCID: PMC8084824 DOI: 10.3758/s13414-020-02195-9] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/16/2020] [Indexed: 12/14/2022]
Abstract
Listeners vary in their ability to understand speech in adverse conditions. Differences in both cognitive and linguistic capacities play a role, but increasing evidence suggests that such factors may contribute differentially depending on the listening challenge. Here, we used multilevel modeling to evaluate contributions of individual differences in age, hearing thresholds, vocabulary, selective attention, working memory capacity, personality traits, and noise sensitivity to variability in measures of comprehension and listening effort in two listening conditions. A total of 35 participants completed a battery of cognitive and linguistic tests as well as a spoken story comprehension task using (1) native-accented English speech masked by speech-shaped noise and (2) nonnative-accented English speech without masking. Masker levels were adjusted individually to ensure each participant would show (close to) equivalent word recognition performance across the two conditions. Dependent measures included comprehension test results, self-rated effort, and electrodermal, cardiovascular, and facial electromyographic measures associated with listening effort. Results showed varied patterns of responsivity across different dependent measures as well as across listening conditions. In particular, results suggested that working memory capacity may play a greater role in the comprehension of nonnative-accented speech than noise-masked speech, while hearing acuity and personality may have a stronger influence on physiological responses affected by demands of understanding speech in noise. Furthermore, electrodermal measures may be more strongly affected by affective response to noise-related interference while cardiovascular responses may be more strongly affected by demands on working memory and lexical access.
Collapse
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University, Lyles-Porter Hall, 715 Clinic Dr., West Lafayette, IN, 47907, USA.
| | - Tessa Bent
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
| | - Jennifer Schumaker
- Department of Speech, Language and Hearing Sciences, Purdue University, Lyles-Porter Hall, 715 Clinic Dr., West Lafayette, IN, 47907, USA
| | - Jordan Love
- Department of Speech, Language and Hearing Sciences, Purdue University, Lyles-Porter Hall, 715 Clinic Dr., West Lafayette, IN, 47907, USA
| | - Noah Silbert
- Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park, MD, USA
| |
Collapse
|
85
|
Kadem M, Herrmann B, Rodd JM, Johnsrude IS. Pupil Dilation Is Sensitive to Semantic Ambiguity and Acoustic Degradation. Trends Hear 2021; 24:2331216520964068. [PMID: 33124518 PMCID: PMC7607724 DOI: 10.1177/2331216520964068] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/29/2022] Open
Abstract
Speech comprehension is challenged by background noise, acoustic interference, and linguistic factors, such as the presence of words with more than one meaning (homonyms and homophones). Previous work suggests that homophony in spoken language increases cognitive demand. Here, we measured pupil dilation—a physiological index of cognitive demand—while listeners heard high-ambiguity sentences, containing words with more than one meaning, or well-matched low-ambiguity sentences without ambiguous words. This semantic-ambiguity manipulation was crossed with an acoustic manipulation in two experiments. In Experiment 1, sentences were masked with 30-talker babble at 0 and +6 dB signal-to-noise ratio (SNR), and in Experiment 2, sentences were heard with or without a pink noise masker at –2 dB SNR. Speech comprehension was measured by asking listeners to judge the semantic relatedness of a visual probe word to the previous sentence. In both experiments, comprehension was lower for high- than for low-ambiguity sentences when SNRs were low. Pupils dilated more when sentences included ambiguous words, even when no noise was added (Experiment 2). Pupils also dilated more when SNRs were low. The effect of masking was larger than the effect of ambiguity for performance and pupil responses. This work demonstrates that the presence of homophones, a condition that is ubiquitous in natural language, increases cognitive demand and reduces intelligibility of speech heard with a noisy background.
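Presenting sentences at a fixed SNR, as in both experiments here, amounts to scaling the masker against the speech power. A minimal sketch of that scaling (the signal representation and RMS-power definition are generic assumptions, not details taken from the study):

```python
import numpy as np

def mix_at_snr(speech, masker, snr_db):
    """Scale `masker` so the speech-to-masker power ratio equals
    `snr_db`, then return the mixture (same length assumed)."""
    p_speech = np.mean(speech ** 2)
    p_masker = np.mean(masker ** 2)
    # Target masker power for the requested SNR in dB.
    target_p_masker = p_speech / (10 ** (snr_db / 10))
    gain = np.sqrt(target_p_masker / p_masker)
    return speech + gain * masker
```

With this convention, +6 dB means the speech carries four times the masker's power, 0 dB means equal power, and –2 dB means the masker is slightly more intense than the speech.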
Collapse
Affiliation(s)
- Mason Kadem
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada.,School of Biomedical Engineering, McMaster University, Hamilton, Ontario, Canada
| | - Björn Herrmann
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada.,Rotman Research Institute, Baycrest, Toronto, Ontario, Canada.,Department of Psychology, University of Toronto, Toronto, Ontario, Canada
| | - Jennifer M Rodd
- Department of Experimental Psychology, University College London, London, United Kingdom
| | - Ingrid S Johnsrude
- Department of Psychology, The University of Western Ontario, London, Ontario, Canada.,School of Communication and Speech Disorders, The University of Western Ontario, London, Ontario, Canada
| |
Collapse
|
86
|
Gross MC, Patel H, Kaushanskaya M. Processing of Code-Switched Sentences in Noise by Bilingual Children. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:1283-1302. [PMID: 33788593 PMCID: PMC8608215 DOI: 10.1044/2020_jslhr-20-00388] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/05/2020] [Revised: 10/26/2020] [Accepted: 12/15/2020] [Indexed: 06/12/2023]
Abstract
Purpose The purpose of the current study was to examine the effects of code-switching on bilingual children's online processing and offline comprehension of sentences in the presence of noise. In addition, the study examined individual differences in language ability and cognitive control skills as moderators of children's ability to process code-switched sentences in noise. Method The participants were 50 Spanish-English bilingual children, ages 7;0-11;8 (years;months). Children completed an auditory moving window task to examine whether they processed sentences with code-switching more slowly and less accurately than single-language sentences in the presence of noise. They completed the Dimensional Change Card Sort task to index cognitive control and standardized language measures in English and Spanish to index relative language dominance and overall language ability. Results Children were significantly less accurate in answering offline comprehension questions about code-switched sentences presented in noise compared to single-language sentences, especially for their dominant language. They also tended to exhibit slower processing speed, but costs did not reach significance. Language ability had an overall effect on offline comprehension but did not moderate the effects of code-switching. Cognitive control moderated the extent to which offline comprehension costs were affected by language dominance. Conclusions The findings of the current study suggest that code-switching, especially in the presence of background noise, may place additional demands on children's ability to comprehend sentences. However, it may be the processing of the nondominant language, rather than code-switching per se, that is especially difficult in the presence of noise.
Collapse
Affiliation(s)
- Megan C Gross
- Department of Communication Disorders, University of Massachusetts Amherst
| | | | | |
Collapse
|
87
|
Mesik J, Ray L, Wojtczak M. Effects of Age on Cortical Tracking of Word-Level Features of Continuous Competing Speech. Front Neurosci 2021; 15:635126. [PMID: 33867920 PMCID: PMC8047075 DOI: 10.3389/fnins.2021.635126] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Accepted: 03/12/2021] [Indexed: 01/17/2023] Open
Abstract
Speech-in-noise comprehension difficulties are common among the elderly population, yet traditional objective measures of speech perception are largely insensitive to this deficit, particularly in the absence of clinical hearing loss. In recent years, a growing body of research in young normal-hearing adults has demonstrated that high-level features related to speech semantics and lexical predictability elicit strong centro-parietal negativity in the EEG signal around 400 ms following the word onset. Here we investigate effects of age on cortical tracking of these word-level features within a two-talker speech mixture, and their relationship with self-reported difficulties with speech-in-noise understanding. While undergoing EEG recordings, younger and older adult participants listened to a continuous narrative story in the presence of a distractor story. We then utilized forward encoding models to estimate cortical tracking of four speech features: (1) word onsets, (2) "semantic" dissimilarity of each word relative to the preceding context, (3) lexical surprisal for each word, and (4) overall word audibility. Our results revealed robust tracking of all features for attended speech, with surprisal and word audibility showing significantly stronger contributions to neural activity than dissimilarity. Additionally, older adults exhibited significantly stronger tracking of word-level features than younger adults, especially over frontal electrode sites, potentially reflecting increased listening effort. Finally, neuro-behavioral analyses revealed trends of a negative relationship between subjective speech-in-noise perception difficulties and the model goodness-of-fit for attended speech, as well as a positive relationship between task performance and the goodness-of-fit, indicating behavioral relevance of these measures. Together, our results demonstrate the utility of modeling cortical responses to multi-talker speech using complex, word-level features and the potential for their use to study changes in speech processing due to aging and hearing loss.
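Forward encoding models of the kind used here are typically estimated as time-lagged ridge regressions from stimulus features to neural data. The sketch below shows that generic estimator for a single feature and a single EEG channel; it is an illustration of the technique, not the authors' specific pipeline (lag range, regularization, and feature definitions are assumptions):

```python
import numpy as np

def lagged_design(feature, lags):
    """Time-lagged design matrix: column j holds the feature shifted
    by lags[j] samples (positive lag = feature precedes response)."""
    X = np.zeros((len(feature), len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = feature[:len(feature) - lag]
        else:
            X[:lag, j] = feature[-lag:]
    return X

def fit_trf(feature, eeg, lags, alpha=1.0):
    """Ridge-regression estimate of a temporal response function
    mapping the stimulus feature to one EEG channel."""
    X = lagged_design(feature, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ eeg)
```

Model goodness-of-fit, as referenced in the abstract, is then computed by correlating the model's predicted response (`lagged_design(feature, lags) @ weights`) with held-out EEG data.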
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
88
Ayasse ND, Hodson AJ, Wingfield A. The Principle of Least Effort and Comprehension of Spoken Sentences by Younger and Older Adults. Front Psychol 2021; 12:629464. [PMID: 33796047 PMCID: PMC8007979 DOI: 10.3389/fpsyg.2021.629464] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2020] [Accepted: 02/22/2021] [Indexed: 01/18/2023] Open
Abstract
There is considerable evidence that listeners' understanding of a spoken sentence need not always follow from a full analysis of the words and syntax of the utterance. Rather, listeners may instead conduct a superficial analysis, sampling some words and using presumed plausibility to arrive at an understanding of the sentence meaning. Because this latter strategy occurs more often for sentences with complex syntax that place a heavier processing burden on the listener than sentences with simpler syntax, shallow processing may represent a resource conserving strategy reflected in reduced processing effort. This factor may be even more important for older adults who as a group are known to have more limited working memory resources. In the present experiment, 40 older adults (M age = 75.5 years) and 20 younger adults (M age = 20.7) were tested for comprehension of plausible and implausible sentences with a simpler subject-relative embedded clause structure or a more complex object-relative embedded clause structure. Dilation of the pupil of the eye was recorded as an index of processing effort. Results confirmed greater comprehension accuracy for plausible than implausible sentences, and for sentences with simpler than more complex syntax, with both effects amplified for the older adults. Analysis of peak pupil dilations for implausible sentences revealed a complex three-way interaction between age, syntactic complexity, and plausibility. Results are discussed in terms of models of sentence comprehension, and pupillometry as an index of intentional task engagement.
Affiliation(s)
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, United States
89
Zhang Y, Lehmann A, Deroche M. Disentangling listening effort and memory load beyond behavioural evidence: Pupillary response to listening effort during a concurrent memory task. PLoS One 2021; 16:e0233251. [PMID: 33657100 PMCID: PMC7928507 DOI: 10.1371/journal.pone.0233251] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2020] [Accepted: 02/15/2021] [Indexed: 11/18/2022] Open
Abstract
Recent research has demonstrated that pupillometry is a robust measure for quantifying listening effort. However, pupillary responses in listening situations where multiple cognitive functions are engaged and sustained over a period of time remain hard to interpret. This limits our conceptualisation and understanding of listening effort in realistic situations, because rarely in everyday life are people challenged by one task at a time. The purpose of this experiment was therefore to reveal the dynamics of listening effort in a sustained listening condition using a word repeat-and-recall task. Words were presented in quiet and in speech-shaped noise at signal-to-noise ratios (SNRs) of 0 dB, 7 dB, and 14 dB. Participants were presented with lists of 10 words and required to repeat each word after its presentation. At the end of the list, participants either recalled as many words as possible or moved on to the next list. Their pupil dilation was recorded throughout the experiment. When only word repetition was required, peak pupil dilation (PPD) was larger at 0 dB than in the other conditions; when recall was also required, PPD showed no difference among SNR levels, and PPD at 0 dB was smaller than in the repeat-only condition. Baseline pupil diameter and PPD followed different patterns across the 10 serial positions within a block in the conditions requiring recall: baseline pupil diameter built up progressively and plateaued in the later positions (but shot up when listeners recalled the previously heard words from memory), while PPD decreased at a quicker pace than in the repeat-only condition. The current findings demonstrate that additional cognitive load during a speech intelligibility task can disturb the well-established relation between pupillary response and listening effort. Both the magnitude and temporal pattern of the task-evoked pupillary response differ greatly in complex listening conditions, urging more listening-effort studies in complex and realistic listening situations.
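The two pupil measures central to this abstract, baseline pupil diameter and peak pupil dilation (PPD), are typically computed from a trial's pupil trace roughly as sketched below. This is a generic illustration; the window length and the percent-change normalization are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def pupil_measures(trace, fs, baseline_s=1.0):
    """Return (baseline diameter, PPD, mean dilation) for one trial.

    baseline: mean diameter over the pre-stimulus window;
    PPD / mean dilation: max / mean post-onset change, as % of baseline."""
    n_base = int(baseline_s * fs)
    baseline = trace[:n_base].mean()
    dilation = (trace[n_base:] - baseline) / baseline * 100.0
    return baseline, dilation.max(), dilation.mean()
```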
Affiliation(s)
- Yue Zhang
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Alexandre Lehmann
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Mickael Deroche
- Department of Otolaryngology, McGill University, Montreal, Canada
- Centre for Research on Brain, Language and Music, Montreal, Canada
- Laboratory for Brain, Music and Sound Research, Montreal, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montreal, Canada
- Department of Psychology, Concordia University, Montreal, Canada
90
Lawrence RJ, Wiggins IM, Hodgson JC, Hartley DEH. Evaluating cortical responses to speech in children: A functional near-infrared spectroscopy (fNIRS) study. Hear Res 2021; 401:108155. [PMID: 33360183 PMCID: PMC7937787 DOI: 10.1016/j.heares.2020.108155] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/30/2020] [Revised: 10/20/2020] [Accepted: 12/10/2020] [Indexed: 10/28/2022]
Abstract
Functional neuroimaging of speech processing has both research and clinical potential. This work is facilitating an ever-increasing understanding of the complex neural mechanisms involved in the processing of speech. Neural correlates of speech understanding also have potential clinical value, especially for infants and children, in whom behavioural assessments can be unreliable. Such measures would not only benefit normally hearing children experiencing speech and language delay, but also hearing-impaired children with and without hearing devices. In the current study, we examined cortical correlates of speech intelligibility in normally hearing paediatric listeners. Cortical responses were measured using functional near-infrared spectroscopy (fNIRS), a non-invasive neuroimaging technique that is fully compatible with hearing devices, including cochlear implants. In nineteen normally hearing children (aged 6-13 years) we measured activity in temporal and frontal cortex bilaterally whilst participants listened to both clear and noise-vocoded sentences targeting four levels of speech intelligibility. Cortical activation in superior temporal and inferior frontal cortex was generally stronger in the left hemisphere than in the right. Activation in left superior temporal cortex grew monotonically with increasing speech intelligibility. In the same region, we identified a trend towards greater activation on correctly vs. incorrectly perceived trials, suggesting a possible sensitivity to speech intelligibility per se, beyond sensitivity to changing acoustic properties across stimulation conditions. Outside superior temporal cortex, we identified other regions in which fNIRS responses varied with speech intelligibility. For example, channels overlying posterior middle temporal regions in the right hemisphere exhibited relative deactivation during sentence processing (compared to a silent baseline condition), with the amplitude of that deactivation being greater in more difficult listening conditions. This finding may represent sensitivity to components of the default mode network in lateral temporal regions, and hence effortful listening, in normally hearing paediatric listeners. Our results indicate that fNIRS has the potential to provide an objective marker of speech intelligibility in normally hearing children. Should these results be found to apply to individuals experiencing language delay or to those listening through a hearing device, such as a cochlear implant, fNIRS may form the basis of a clinically useful measure of speech understanding.
Affiliation(s)
- Rachael J Lawrence
- National Institute for Health Research (NIHR), Nottingham Biomedical Research Centre, Ropewalk House, 113 The Ropewalk, Nottingham NG1 5DU, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, United Kingdom; Nottingham University Hospitals NHS Trust, Derby Road, Nottingham NG7 2UH, United Kingdom.
- Ian M Wiggins
- National Institute for Health Research (NIHR), Nottingham Biomedical Research Centre, Ropewalk House, 113 The Ropewalk, Nottingham NG1 5DU, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, United Kingdom
- Jessica C Hodgson
- Lincoln Medical School - Universities of Nottingham and Lincoln, Charlotte Scott Building, University of Lincoln, Lincoln LN6 7TS, United Kingdom
- Douglas E H Hartley
- National Institute for Health Research (NIHR), Nottingham Biomedical Research Centre, Ropewalk House, 113 The Ropewalk, Nottingham NG1 5DU, United Kingdom; Hearing Sciences, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, United Kingdom; Nottingham University Hospitals NHS Trust, Derby Road, Nottingham NG7 2UH, United Kingdom
91
McLaughlin DJ, Braver TS, Peelle JE. Measuring the Subjective Cost of Listening Effort Using a Discounting Task. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:337-347. [PMID: 33439751 PMCID: PMC8632478 DOI: 10.1044/2020_jslhr-20-00086] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
Purpose Objective measures of listening effort have been gaining prominence, as they provide metrics to quantify the difficulty of understanding speech under a variety of circumstances. A key challenge has been to develop paradigms that enable the complementary measurement of subjective listening effort in a quantitatively precise manner. In this study, we introduce a novel decision-making paradigm to examine age-related and individual differences in subjective effort during listening. Method Older and younger adults were presented with spoken sentences mixed with speech-shaped noise at multiple signal-to-noise ratios (SNRs). On each trial, subjects were offered the choice between completing an easier listening trial (presented at +20 dB SNR) for a smaller monetary reward and completing a harder listening trial (presented at either +4, 0, -4, -8, or -12 dB SNR) for a greater monetary reward. By varying the amount of the reward offered for the easier option, the subjective value of performing effortful listening trials at each SNR could be assessed. Results Older adults discounted the value of effortful listening to a greater degree than young adults, opting to accept less money in order to avoid more difficult SNRs. Additionally, older adults with poorer hearing and smaller working memory capacities were more likely to choose easier trials; however, in younger adults, no relationship with hearing or working memory was found. Self-reported measures of economic status did not affect these relationships. Conclusions These findings suggest that subjective listening effort depends on factors including, but not necessarily limited to, hearing and working memory. Additionally, this study demonstrates that economic decision-making paradigms can be a useful approach for assessing subjective listening effort and may prove beneficial in future research.
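The discounting logic of this paradigm can be illustrated by estimating, for each SNR, the easy-option reward at which a listener becomes indifferent between the easier and harder trial; a lower indifference point implies steeper discounting of that listening condition. The linear-interpolation sketch below is a hypothetical simplification of such an analysis, not the authors' method.

```python
def indifference_point(offers, p_easy):
    """offers: ascending easy-option reward amounts; p_easy: proportion of
    trials at each offer on which the easier listening trial was chosen.
    Returns the offer where choice probability crosses 0.5 (linear interp)."""
    for i in range(len(offers) - 1):
        lo, hi = p_easy[i], p_easy[i + 1]
        if lo <= 0.5 <= hi:
            if hi == lo:
                return (offers[i] + offers[i + 1]) / 2
            return offers[i] + (0.5 - lo) * (offers[i + 1] - offers[i]) / (hi - lo)
    return None  # no crossing observed in the sampled range
```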
Affiliation(s)
- Drew J. McLaughlin
- Department of Psychological & Brain Sciences, Washington University in St. Louis, MO
- Todd S. Braver
- Department of Psychological & Brain Sciences, Washington University in St. Louis, MO
92
Saderi D, Schwartz ZP, Heller CR, Pennington JR, David SV. Dissociation of task engagement and arousal effects in auditory cortex and midbrain. eLife 2021; 10:e60153. [PMID: 33570493 PMCID: PMC7909948 DOI: 10.7554/elife.60153] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Accepted: 02/10/2021] [Indexed: 12/18/2022] Open
Abstract
Both generalized arousal and engagement in a specific task influence sensory neural processing. To isolate the effects of these state variables in the auditory system, we recorded single-unit activity from primary auditory cortex (A1) and inferior colliculus (IC) of ferrets during a tone detection task, while monitoring arousal via changes in pupil size. We used a generalized linear model to assess the influence of task engagement and pupil size on sound-evoked activity. In both areas, these two variables affected independent neural populations. Pupil size effects were more prominent in IC, while pupil and task engagement effects were equally likely in A1. Task engagement was correlated with larger pupil size; thus, some apparent effects of task engagement should in fact be attributed to fluctuations in pupil size. These results indicate a hierarchy of auditory processing, where generalized arousal enhances activity in the midbrain, and effects specific to task engagement become more prominent in cortex.
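A state model of the kind described, with task engagement and pupil size as regressors of sound-evoked activity, reduces in its simplest Gaussian form to the least-squares sketch below. This is illustrative only; the study's actual GLM and regressor coding may differ.

```python
import numpy as np

def fit_state_model(rates, engaged, pupil):
    """Least-squares fit of: rate ~ intercept + engagement (0/1) + pupil size.
    Returns (intercept, engagement_gain, pupil_gain)."""
    X = np.column_stack([np.ones_like(rates), engaged, pupil])
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
    return coef
```

The fitted gains can then be compared across neurons to ask, as the authors do, whether engagement and pupil effects load onto the same or independent populations.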
Affiliation(s)
- Daniela Saderi
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Zachary P Schwartz
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Charles R Heller
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
- Neuroscience Graduate Program, Oregon Health and Science University, Portland, United States
- Jacob R Pennington
- Department of Mathematics and Statistics, Washington State University, Vancouver, United States
- Stephen V David
- Oregon Hearing Research Center, Oregon Health and Science University, Portland, United States
93
Cucis PA, Berger-Vachon C, Thaï-Van H, Hermann R, Gallego S, Truy E. Word Recognition and Frequency Selectivity in Cochlear Implant Simulation: Effect of Channel Interaction. J Clin Med 2021; 10:jcm10040679. [PMID: 33578696 PMCID: PMC7916371 DOI: 10.3390/jcm10040679] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Revised: 02/02/2021] [Accepted: 02/05/2021] [Indexed: 11/16/2022] Open
Abstract
In cochlear implants (CI), spread of neural excitation may produce channel interaction. Channel interaction disturbs spectral resolution and, among other factors, seems to impair speech recognition, especially in noise. In this study, two tests were performed with 20 adult normal-hearing (NH) subjects under different vocoded simulations. First, word recognition in noise was measured while varying the number of selected channels (4, 8, 12 or 16 maxima out of 20) and the degree of simulated channel interaction (“Low”, “Medium” and “High”). Then, spectral resolution was evaluated as a function of the degree of simulated channel interaction, reflected by the sharpness (Q10dB) of psychophysical tuning curves (PTCs). The results showed a significant effect of the simulated channel interaction on word recognition but no effect of the number of selected channels. Intelligibility decreased significantly for the highest degree of channel interaction. Similarly, the highest simulated channel interaction significantly impaired the Q10dB. Additionally, a strong intra-individual correlation between frequency selectivity and word recognition in noise was observed. Lastly, individual changes in frequency selectivity were positively correlated with changes in word recognition when the degree of interaction went from “Low” to “High”. To conclude, the degradation seen for the highest degree of channel interaction suggests a threshold effect on frequency selectivity and word recognition. The correlation between frequency selectivity and intelligibility in noise supports the hypothesis that PTC Q10dB can account for word recognition in certain conditions. Moreover, the individual variation in performance observed among subjects suggests that channel interaction does not have the same effect on each individual. Finally, these results highlight the importance of taking subjects’ individuality into account and of evaluating channel interaction through the speech processor.
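The Q10dB sharpness measure used above is conventionally defined as the PTC tip frequency divided by the curve's bandwidth 10 dB above the tip. A minimal sketch follows; the linear-interpolation scheme is an assumption, not necessarily the authors' exact procedure.

```python
import numpy as np

def q10db(freqs, thresholds):
    """freqs: ascending masker frequencies (Hz); thresholds: PTC masker
    levels (dB) at those frequencies. Q10dB = tip frequency divided by the
    bandwidth of the curve 10 dB above the tip."""
    tip = int(np.argmin(thresholds))
    level = thresholds[tip] + 10.0
    # Find the crossing on each flank; np.interp needs ascending x values,
    # hence the reversal on the low-frequency side (where thresholds fall).
    f_lo = np.interp(level, thresholds[:tip + 1][::-1], freqs[:tip + 1][::-1])
    f_hi = np.interp(level, thresholds[tip:], freqs[tip:])
    return freqs[tip] / (f_hi - f_lo)
```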
Affiliation(s)
- Pierre-Antoine Cucis
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France; (R.H.); (E.T.)
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France; (C.B.-V.); (H.T.-V.); (S.G.)
- ENT and Cervico-Facial Surgery Department, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
- Correspondence: Tel.: +33-472-110-0518
- Christian Berger-Vachon
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France; (C.B.-V.); (H.T.-V.); (S.G.)
- Brain Dynamics and Cognition Team (DYCOG), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France
- Biomechanics and Impact Mechanics Laboratory (LBMC), French Institute of Science and Technology for Transport, Development and Networks (IFSTTAR), Gustave Eiffel University, 69675 Bron, France
- Hung Thaï-Van
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France; (C.B.-V.); (H.T.-V.); (S.G.)
- Paris Hearing Institute, Institut Pasteur, Inserm U1120, 75015 Paris, France
- Department of Audiology and Otoneurological Evaluation, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
- Ruben Hermann
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France; (R.H.); (E.T.)
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France; (C.B.-V.); (H.T.-V.); (S.G.)
- ENT and Cervico-Facial Surgery Department, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
- Stéphane Gallego
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France; (C.B.-V.); (H.T.-V.); (S.G.)
- Neuronal Dynamics and Audition Team (DNA), Laboratory of Cognitive Neuroscience (LNSC), CNRS UMR 7291, Aix-Marseille University, CEDEX 3, 13331 Marseille, France
- Eric Truy
- Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Lyon Neuroscience Research Center, CRNL Inserm U1028, CNRS UMR5292, 69675 Bron, France; (R.H.); (E.T.)
- Claude Bernard Lyon 1 University, 69100 Villeurbanne, France; (C.B.-V.); (H.T.-V.); (S.G.)
- ENT and Cervico-Facial Surgery Department, Edouard Herriot Hospital, Hospices Civils de Lyon, 69003 Lyon, France
94
Pupillometry as a reliable metric of auditory detection and discrimination across diverse stimulus paradigms in animal models. Sci Rep 2021; 11:3108. [PMID: 33542266 PMCID: PMC7862232 DOI: 10.1038/s41598-021-82340-y] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2020] [Accepted: 01/08/2021] [Indexed: 12/30/2022] Open
Abstract
Estimates of detection and discrimination thresholds are often used to explore broad perceptual similarities between human subjects and animal models. Pupillometry shows great promise as a non-invasive, easily deployable method of comparing human and animal thresholds. Using pupillometry, previous studies in animal models have obtained threshold estimates for simple stimuli such as pure tones, but have not explored whether similar pupil responses can be evoked by complex stimuli, what other stimulus contingencies might affect stimulus-evoked pupil responses, or whether pupil responses can be modulated by experience or short-term training. In this study, we used an auditory oddball paradigm to estimate detection and discrimination thresholds across a wide range of stimuli in guinea pigs. We demonstrate that pupillometry yields reliable detection and discrimination thresholds across a range of simple (tones) and complex (conspecific vocalizations) stimuli; that pupil responses can be robustly evoked using different stimulus contingencies (low-level acoustic changes, or higher-level categorical changes); and that pupil responses are modulated by short-term training. These results lay the foundation for using pupillometry as a reliable method of estimating thresholds in large experimental cohorts, and unveil the full potential of using pupillometry to explore broad similarities between humans and animal models.
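Reading a threshold off pupil responses can be sketched as finding where the normalized response amplitude crosses a criterion across stimulus levels. This is a generic half-maximum approach for illustration; the study's actual psychometric procedure may differ.

```python
import numpy as np

def pupil_threshold(levels, responses, criterion=0.5):
    """levels: ascending stimulus levels (e.g., dB SPL); responses: evoked
    pupil amplitudes at those levels. Returns the level where the min-max
    normalized response first crosses `criterion` (linear interpolation)."""
    r = np.asarray(responses, float)
    r = (r - r.min()) / (r.max() - r.min())   # normalize to [0, 1]
    for i in range(len(r) - 1):
        if r[i] <= criterion <= r[i + 1]:
            frac = (criterion - r[i]) / (r[i + 1] - r[i])
            return levels[i] + frac * (levels[i + 1] - levels[i])
    return None  # response never crossed the criterion
```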
95
Książek P, Zekveld AA, Wendt D, Fiedler L, Lunner T, Kramer SE. Effect of Speech-to-Noise Ratio and Luminance on a Range of Current and Potential Pupil Response Measures to Assess Listening Effort. Trends Hear 2021; 25:23312165211009351. [PMID: 33926329 PMCID: PMC8111552 DOI: 10.1177/23312165211009351] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2020] [Revised: 02/11/2021] [Accepted: 03/05/2021] [Indexed: 11/17/2022] Open
Abstract
In hearing research, pupillometry is an established method of studying listening effort. The focus of this study was to evaluate several pupil measures extracted from Task-Evoked Pupil Responses (TEPRs) in a speech-in-noise test. A range of analysis approaches was applied to extract these pupil measures, namely (a) peak pupil dilation (PPD); (b) mean pupil dilation (MPD); (c) the index of pupillary activity; (d) growth curve analysis (GCA); and (e) principal component analysis (PCA). The effects of signal-to-noise ratio (SNR; Data Set A: -20 dB, -10 dB, +5 dB SNR) and luminance (Data Set B: 0.1 cd/m2, 360 cd/m2) on the TEPRs were investigated. Data Sets A and B were recorded during a speech-in-noise test and included TEPRs from 33 and 27 normal-hearing native Dutch speakers, respectively. The main results were as follows: (a) a significant effect of SNR was revealed for all pupil measures extracted in the time domain (PPD, MPD, GCA, PCA); (b) the two time-series analysis approaches provided modeled temporal profiles of TEPRs (GCA) and time windows spanning the subtasks performed in the speech-in-noise test (PCA); and (c) all pupil measures revealed a significant effect of luminance. In conclusion, multiple pupil measures showed similar effects of SNR, suggesting that effort may be reflected in multiple aspects of the TEPR. Moreover, a direct analysis of the pupil time course seems to provide a more holistic view of TEPRs, yet further research is needed to understand and interpret its measures. Further research is also required to find pupil measures less sensitive to changes in luminance.
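Of the measures listed, growth curve analysis (GCA) models the pupil time course with orthogonal polynomials of time. The sketch below fits a Legendre basis to a single trace by least squares; it is a simplified illustration only, since GCA in studies like this one is typically a mixed-effects model across subjects and conditions.

```python
import numpy as np

def growth_curve_fit(trace, order=2):
    """Fit orthogonal (Legendre) time polynomials to one pupil time course.
    Returns coefficients for the intercept, linear, quadratic, ... terms."""
    t = np.linspace(-1.0, 1.0, len(trace))
    X = np.column_stack([np.polynomial.legendre.Legendre.basis(k)(t)
                         for k in range(order + 1)])
    coef, *_ = np.linalg.lstsq(X, trace, rcond=None)
    return coef
```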
Affiliation(s)
- Patrycja Książek
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Adriana A. Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Sophia E. Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
96
Russo FY, Hoen M, Karoui C, Demarcy T, Ardoint M, Tuset MP, De Seta D, Sterkers O, Lahlou G, Mosnier I. Pupillometry Assessment of Speech Recognition and Listening Experience in Adult Cochlear Implant Patients. Front Neurosci 2020; 14:556675. [PMID: 33240035 PMCID: PMC7677588 DOI: 10.3389/fnins.2020.556675] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2020] [Accepted: 09/29/2020] [Indexed: 11/17/2022] Open
Abstract
Objective The aim of the present study was to investigate the pupillary response to word identification in cochlear implant (CI) patients. The authors hypothesized that as task difficulty increased (i.e., with the addition of background noise), pupil dilation markers such as the peak dilation or the latency of the peak dilation would increase in CI users, as already observed in normal-hearing and hearing-impaired subjects. Methods Pupillometric measures in 10 CI patients were combined with the standard speech recognition scores used to evaluate CI outcomes, namely speech audiometry in quiet and in noise at a +10 dB signal-to-noise ratio (SNR). The main pupillometric outcome measures were mean pupil dilation, maximal pupil dilation, dilation latency, and mean dilation during the return to baseline (retention interval). Subjective hearing quality was evaluated by means of a self-reported fatigue questionnaire and the Speech, Spatial, and Qualities of Hearing (SSQ) scale. Results All pupil dilation data were transformed to percent change in event-related pupil dilation (ERPD, %). Analyses showed that the peak amplitudes for both mean and maximal pupil dilation were higher during the speech-in-noise test: mean peak dilation was 3.47 ± 2.29% in noise vs. 2.19 ± 2.46% in quiet, and the maximal peak value was 9.17 ± 3.25% in noise vs. 8.72 ± 2.93% in quiet. Concerning the questionnaires, the mean pupil dilation during the retention interval was significantly correlated with the spatial subscale score of the SSQ scale [r(8) = -0.84, p = 0.0023] and with the global score [r(8) = -0.78, p = 0.0018]. Conclusion The analysis of pupillometric traces, obtained during speech audiometry in quiet and in noise in CI users, provided interesting information about the different processes engaged in this task. Pupillometric measures could be indicative of listening difficulty and phoneme intelligibility, and were correlated with general hearing experience as evaluated by the SSQ scale. These preliminary results show that pupillometry constitutes a promising tool to improve objective quantification of CI performance in clinical settings.
Affiliation(s)
- Francesca Yoshie Russo
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Department of Sense Organs, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Maria-Pia Tuset
- Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
- Daniele De Seta
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France; Department of Sense Organs, Faculty of Medicine and Dentistry, Sapienza University of Rome, Rome, Italy
- Olivier Sterkers
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
- Ghizlène Lahlou
- INSERM U1120 Génétique et Physiologie de l'Audition, Paris, France; APHP Sorbonne Université, Service ORL, GH Pitié Salpêtrière, Paris, France
- Isabelle Mosnier
- INSERM U1159 Réhabilitation Chirurgicale Mini-Invasive Robotisée De l'Audition, Paris, France; Assistance Publique Hôpitaux de Paris Sorbonne Université, Service Oto-Rhino-Laryngologie (ORL), Unité Fonctionnelle Implants Auditifs, Groupe Hospitalier Pitié-Salpêtrière, Paris, France
97
Zhao S, Bury G, Milne A, Chait M. Pupillometry as an Objective Measure of Sustained Attention in Young and Older Listeners. Trends Hear 2020; 23:2331216519887815. [PMID: 31775578 PMCID: PMC6883360 DOI: 10.1177/2331216519887815] [Citation(s) in RCA: 21] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022] Open
Abstract
The ability to sustain attention on a task-relevant sound source while avoiding distraction from concurrent sounds is fundamental to listening in crowded environments. We aimed to (a) devise an experimental paradigm with which this aspect of listening can be isolated and (b) evaluate the applicability of pupillometry as an objective measure of sustained attention in young and older populations. We designed a paradigm that continuously measured behavioral responses and pupillometry during 25-s trials. Stimuli contained a number of concurrent, spectrally distinct tone streams. On each trial, participants detected gaps in one of the streams while resisting distraction from the others. Behavior demonstrated increasing difficulty with time-on-task and with number/proximity of distractor streams. In young listeners (N = 20; aged 18 to 35 years), pupil diameter (on the group and individual level) was dynamically modulated by instantaneous task difficulty: periods where behavioral performance revealed a strain on sustained attention were accompanied by increased pupil diameter. Only trials on which participants performed successfully were included in the pupillometry analysis, so that the observed effects reflect task demands as opposed to failure to attend. In line with existing reports, we observed global changes to pupil dynamics in the older group (N = 19; aged 63 to 79 years), including decreased pupil diameter, limited dilation range, and reduced temporal variability. However, despite these changes, older listeners showed similar effects of attentive tracking to those observed in the young listeners. Overall, our results demonstrate that pupillometry can be a reliable and time-sensitive measure of attentive tracking over long durations in both young and (with caveats) older listeners.
Collapse
Affiliation(s)
- Sijia Zhao
- Ear Institute, University College London, UK
| | | | - Alice Milne
- Ear Institute, University College London, UK
| | - Maria Chait
- Ear Institute, University College London, UK
| |
Collapse
|
98
|
Abstract
OBJECTIVES Slowed speaking rate was examined for its effects on speech intelligibility, its interaction with the benefit of contextual cues, and the impact of these factors on listening effort in adults with cochlear implants. DESIGN Participants (n = 21 cochlear implant users) heard high- and low-context sentences that were played at the original speaking rate, as well as a slowed (1.4× duration) speaking rate, using uniform pitch-synchronous time warping. In addition to intelligibility measures, changes in pupil dilation were measured as a time-varying index of processing load or listening effort. Slope of pupil size recovery to baseline after the sentence was used as an index of resolution of perceptual ambiguity. RESULTS Speech intelligibility was better for high-context compared to low-context sentences and slightly better for slower compared to original-rate speech. Speech rate did not affect magnitude and latency of peak pupil dilation relative to sentence offset. However, baseline pupil size recovered more substantially for slower-rate sentences, suggesting easier processing in the moment after the sentence was over. The effect of slowing speech rate was comparable to changing a sentence from low context to high context. The effect of context on pupil dilation was not observed until after the sentence was over, and one of two analyses suggested that context had greater beneficial effects on listening effort when the speaking rate was slower. These patterns maintained even at perfect sentence intelligibility, suggesting that correct speech repetition does not guarantee efficient or effortless processing. With slower speaking rates, there was less variability in pupil dilation slopes following the sentence, implying mitigation of some of the difficulties shown by individual listeners who would otherwise demonstrate prolonged effort after a sentence is heard. 
CONCLUSIONS Slowed speaking rate provides release from listening effort when hearing an utterance, particularly relieving effort that would have lingered after a sentence is over. Context arguably provides even more release from listening effort when speaking rate is slower. The pattern of prolonged pupil dilation for faster speech is consistent with increased need to mentally correct errors, although that exact interpretation cannot be verified with intelligibility data alone or with pupil data alone. A pattern of needing to dwell on a sentence to disambiguate misperceptions likely contributes to difficulty in running conversation where there are few opportunities to pause and resolve recently heard utterances.
Collapse
|
99
|
Abstract
OBJECTIVES The objective of this study was to evaluate the sensitivity and reliability of one subjective (rating scale) and three objective (dual-task paradigm, pupillometry, and skin conductance response amplitude) measures of listening effort across multiple signal to noise ratios (SNRs). DESIGN Twenty adults with normal hearing attended two sessions and listened to sentences presented in quiet and in stationary noise at three different SNRs: 0, -3, and -5 dB. Listening effort was assessed by examining change in reaction time (dual-task paradigm), change in peak to peak pupil diameter (pupillometry), and change in mean skin conductance response amplitude; self-reported listening effort on a scale from 0 to 100 was also evaluated. Responses were averaged within each SNR and based on three word recognition ability categories (≤50%, 51% to 71%, and >71%) across all SNRs. Measures were considered reliable if there were no significant changes between sessions, and intraclass correlation coefficients were a minimum of 0.40. Effect sizes were calculated to compare the sensitivity of the measures. RESULTS Intraclass correlation coefficient values indicated fair-to-moderate reliability for all measures while individual measurement sensitivity was variable. Self-reports were sensitive to listening effort but were less reliable, given that subjective effort was greater during the dual task than either of the physiologic measures. The dual task was sensitive to a narrow range of word recognition abilities but was less reliable as it exhibited a global decrease in reaction time across sessions. Pupillometry was consistently sensitive and reliable to changes in listening effort. Skin conductance response amplitude was not sensitive or reliable while the participants listened to the sentences. 
Skin conductance response amplitude during the verbal response was sensitive to poor (≤50%) speech recognition abilities; however, it was less reliable as there was a significant change in amplitude across sessions. CONCLUSIONS In this study, pupillometry was the most sensitive and reliable objective measure of listening effort. Intersession variability significantly influenced the other objective measures of listening effort, which suggests challenges for cross-study comparability. Therefore, intraclass correlation coefficients combined with other statistical tests more fully describe the reliability of measures of listening effort across multiple difficulties. Minimizing intersession variability will increase measurement sensitivity. Further work toward standardized methods and analysis will strengthen our understanding of the reliability and sensitivity of measures of listening effort and better facilitate cross-modal and cross-study comparisons.
Collapse
|
100
|
Cho YS, Park SY, Seol HY, Lim JH, Cho YS, Hong SH, Moon IJ. Clinical Performance Evaluation of a Personal Sound Amplification Product vs a Basic Hearing Aid and a Premium Hearing Aid. JAMA Otolaryngol Head Neck Surg 2020; 145:516-522. [PMID: 31095263 DOI: 10.1001/jamaoto.2019.0667] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
Importance Hearing loss is a highly prevalent condition with multiple negative associated outcomes, yet few persons with hearing loss have hearing aids (HAs). Personal sound amplification products (PSAPs) could be an alternative low-cost solution to HAs, but data are lacking on the performance of PSAPs. Objective To evaluate the clinical efficacy of a PSAP by comparing its performance with that of a basic HA and a premium HA in participants with mild, moderate, and moderately severe hearing impairment. Design, Setting, and Participants A prospective, single-institution cohort study was performed with a total of 56 participants, including 19 with mild hearing loss, 20 with moderate hearing loss, and 17 with moderately severe hearing loss. All participants underwent 4 clinical hearing tests with each of the PSAP, basic HA, and premium HA, and all completed an evaluative questionnaire. Interventions All hearing devices (PSAP, basic HA, and premium HA) were applied by a clinician to prevent bias and order effects; participants were blinded to the device in use, and sequence of devices was randomized. Main Outcomes and Measures The study used the Korean version of the hearing in noise test, the speech intelligibility in noise test, listening effort measurement using a dual-task paradigm, pupillometry, and a self-rating questionnaire regarding sound quality and preference. These tests were administered under the following 4 hearing conditions: unaided hearing, use of PSAP, use of basic HA, and use of premium HA. Results The study included 56 participants with a mean age of 56 years (interquartile range, 48-59 years); 29 (52%) were women. In the mild and moderate hearing loss groups, there was no meaningful difference between PSAP, basic HA, and premium HA for speech perception (Cohen d = 0.06-1.05), sound quality (Cohen d = 0.06-0.71), listening effort (Cohen d = 0.10-0.92), and user preference (PSAP, 41%; basic HA, 28%; premium HA, 31%). 
However, for the patients with moderately severe hearing loss, the premium HA had better performance across most tests (Cohen d = 0.60-1.59), and 70% of participants preferred to use the premium HA. Conclusions and Relevance The results indicate that basic and premium HAs were not superior to the PSAP in patients with mild to moderate hearing impairment, which suggests that PSAPs might be used as an alternative to HAs in these patient populations. However, if hearing loss is more severe, then HAs, especially premium HAs, should be considered as an option to manage hearing loss.
Collapse
Affiliation(s)
- Young Sang Cho
- Department of Otorhinolaryngology-Head and Neck Surgery, Sungkyunkwan University School of Medicine, Samsung Medical Center, Seoul, South Korea.,Hearing Research Laboratory, Samsung Medical Center, Seoul, South Korea
| | - Su Yeon Park
- Hearing Research Laboratory, Samsung Medical Center, Seoul, South Korea
| | - Hye Yoon Seol
- Hearing Research Laboratory, Samsung Medical Center, Seoul, South Korea
| | - Ji Hyun Lim
- Center for Clinical Epidemiology, Samsung Medical Center, Seoul, South Korea
| | - Yang-Sun Cho
- Department of Otorhinolaryngology-Head and Neck Surgery, Sungkyunkwan University School of Medicine, Samsung Medical Center, Seoul, South Korea.,Hearing Research Laboratory, Samsung Medical Center, Seoul, South Korea
| | - Sung Hwa Hong
- Hearing Research Laboratory, Samsung Medical Center, Seoul, South Korea.,Department of Otorhinolaryngology-Head and Neck Surgery, Sungkyunkwan University School of Medicine, Samsung Changwon Hospital, Changwon, South Korea
| | - Il Joon Moon
- Department of Otorhinolaryngology-Head and Neck Surgery, Sungkyunkwan University School of Medicine, Samsung Medical Center, Seoul, South Korea.,Hearing Research Laboratory, Samsung Medical Center, Seoul, South Korea
| |
Collapse
|