51
Kuk F, Slugocki C, Ruperto N, Korhonen P. Performance of normal-hearing listeners on the Repeat-Recall test in different noise configurations. Int J Audiol 2020;60:35-43. PMID: 32820697. DOI: 10.1080/14992027.2020.1807626.
Abstract
OBJECTIVE This study measured the performance of normal-hearing listeners on the Repeat-Recall Test (RRT) in configurations crossing two noise types (2-talker babble [2TBN] and continuous speech-shaped noise [SSN]) with two noise azimuths (0° and 180°), at signal-to-noise ratios (SNRs) of 0, 5, 10, and 15 dB and in quiet. DESIGN Within-subject repeated measures. STUDY SAMPLE Twenty-one listeners with normal hearing who also passed cognitive screening were tested in the sound field with the speech stimulus presented from 0° at 75 dB SPL in the four noise configurations. The order of SNRs, noise configurations, and RRT topic conditions was counterbalanced across listeners. RESULTS Analysis revealed that repeat scores were significantly better for 2TBN, for noise at 180°, and for high-context (HC) sentences. Recall performance was significantly better for SSN and HC sentences. Listening effort ratings were higher for SSN and for the noise-front condition at SNR ≤ 10 dB. The 2TBN noise was tolerated longer than SSN. Performance on all measures improved with increasing SNR. CONCLUSIONS These data showed performance differences among noise configurations and provided a preliminary basis for comparison with hearing-impaired listeners' performance on the RRT.
Affiliation(s)
- Francis Kuk
- Widex Office of Research in Clinical Amplification (ORCA-USA), Lisle, IL, USA
- Neal Ruperto
- Widex Office of Research in Clinical Amplification (ORCA-USA), Lisle, IL, USA
- Petri Korhonen
- Widex Office of Research in Clinical Amplification (ORCA-USA), Lisle, IL, USA
52
Venezia JH, Leek MR, Lindeman MP. Suprathreshold Differences in Competing Speech Perception in Older Listeners With Normal and Impaired Hearing. J Speech Lang Hear Res 2020;63:2141-2161. PMID: 32603618. DOI: 10.1044/2020_jslhr-19-00324.
Abstract
Purpose Age-related declines in auditory temporal processing and cognition make older listeners vulnerable to interference from competing speech. This vulnerability may be increased in older listeners with sensorineural hearing loss due to additional effects of spectral distortion and accelerated cognitive decline. The goal of this study was to uncover differences between older hearing-impaired (OHI) listeners and older normal-hearing (ONH) listeners in the perceptual encoding of competing speech signals. Method Age-matched groups of 10 OHI and 10 ONH listeners performed the coordinate response measure task with a synthetic female target talker and a male competing talker at a target-to-masker ratio of +3 dB. Individualized gain was provided to OHI listeners. Each listener completed 50 baseline and 800 "bubbles" trials in which randomly selected segments of the speech modulation power spectrum (MPS) were retained on each trial while the remainder was filtered out. Average performance was fixed at 50% correct by adapting the number of segments retained. Multinomial regression was used to estimate weights showing the regions of the MPS associated with performance (a "classification image" or CImg). Results The CImg weights were significantly different between the groups in two MPS regions: a region encoding the shared phonetic content of the two talkers and a region encoding the competing (male) talker's voice. The OHI listeners demonstrated poorer encoding of the phonetic content and increased vulnerability to interference from the competing talker. Individual differences in CImg weights explained over 75% of the variance in baseline performance in the OHI listeners, whereas differences in high-frequency pure-tone thresholds explained only 10%. 
Conclusion Suprathreshold deficits in the encoding of low- to mid-frequency (~5-10 Hz) temporal modulations, which may reflect poorer "dip listening," and in auditory grouping at a perceptual and/or cognitive level are responsible for the relatively poor performance of OHI versus ONH listeners on a different-gender competing speech task. Supplemental Material https://doi.org/10.23641/asha.12568472.
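The study estimated classification-image weights with multinomial regression; the core idea can be illustrated more simply with reverse correlation. The sketch below uses entirely simulated trials and an invented "diagnostic" MPS bin (bin 10); none of the numbers are the study's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "bubbles" experiment: 800 trials, each retaining a random
# subset of 64 bins of a flattened modulation power spectrum (MPS).
n_trials, n_bins = 800, 64
masks = (rng.random((n_trials, n_bins)) < 0.3).astype(float)

# Pretend bin 10 is the diagnostic MPS region: trials that retain it are
# much more likely to be answered correctly.
signal = masks[:, 10] + rng.normal(0.0, 0.5, n_trials)
correct = signal > 0.15

# Reverse correlation: the classification image (CImg) is the difference
# between the mean mask on correct trials and the mean mask on error trials.
cimg = masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
```

Bins that drive performance stand out as large positive CImg weights; regression-based estimators refine this by controlling for overlap between masks.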
Affiliation(s)
- Jonathan H Venezia
- VA Loma Linda Healthcare System, CA
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Loma Linda University, CA
- Marjorie R Leek
- VA Loma Linda Healthcare System, CA
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Loma Linda University, CA
53
Micula A, Ning Ng EH, El-Azm F, Rönnberg J. The effects of task difficulty, background noise and noise reduction on recall. Int J Audiol 2020;59:792-800. PMID: 32564633. DOI: 10.1080/14992027.2020.1771441.
Abstract
OBJECTIVE In the present study, we investigated whether varying the task difficulty of the Sentence-Final Word Identification and Recall (SWIR) Test has an effect on the benefit of noise reduction, as well as whether task difficulty predictability affects recall. The relationship between working memory and recall was also examined. DESIGN Task difficulty was manipulated by varying the list length, with noise reduction on and off, in competing speech and speech-shaped noise. Half of the participants were informed about list length in advance. Working memory capacity was measured using the Reading Span test. STUDY SAMPLE Thirty-two experienced hearing aid users with moderate sensorineural hearing loss. RESULTS Task difficulty did not affect the noise reduction benefit, and task difficulty predictability did not affect recall. Participants may have employed a different recall strategy when task difficulty was unpredictable and noise reduction was off. Reading Span scores correlated positively with SWIR test performance. Noise reduction improved recall in competing speech. CONCLUSIONS The SWIR test with varying list length is suitable for detecting the benefit of noise reduction. The correlation with working memory suggests that the SWIR test could be modified to be adaptive to individual cognitive capacity. The results on noise and noise reduction replicate previous findings.
Affiliation(s)
- Andreea Micula
- Oticon A/S, Smørum, Denmark; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Elaine Hoi Ning Ng
- Oticon A/S, Smørum, Denmark; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
54
Kessler DM, Wolfe J, Blanchard M, Gifford RH. Clinical Application of Spectral Modulation Detection: Speech Recognition Benefit for Combining a Cochlear Implant and Contralateral Hearing Aid. J Speech Lang Hear Res 2020;63:1561-1571. PMID: 32379527. PMCID: PMC7842114. DOI: 10.1044/2020_jslhr-19-00304.
Abstract
Purpose The purpose of this study was to investigate the relationship between the speech recognition benefit derived from the addition of a hearing aid (HA) to the nonimplanted ear (i.e., bimodal benefit) and spectral modulation detection (SMD) performance in the nonimplanted ear in a large clinical sample. An additional purpose was to investigate the influence of the low-frequency pure-tone average (PTA) of the nonimplanted ear and age at implantation on the variance in bimodal benefit. Method Participants included 311 unilateral cochlear implant (CI) users who wore an HA in the nonimplanted ear. Participants completed speech recognition testing in quiet and in noise with the CI alone and in the bimodal condition (i.e., CI and contralateral HA), as well as SMD testing in the nonimplanted ear. Results SMD performance in the nonimplanted ear was significantly correlated with bimodal benefit in quiet and in noise. However, this relationship was much weaker than in previous reports with smaller samples. SMD, low-frequency PTA of the nonimplanted ear from 125 to 750 Hz, and age at implantation together accounted for, at most, 19.1% of the variance in bimodal benefit. Conclusions Taken together, SMD, low-frequency PTA, and age at implantation account for more of the variance in bimodal benefit than any variable alone. A large portion of the variance (~80%) in bimodal benefit is not explained by these variables. Supplemental Material https://doi.org/10.23641/asha.12185493.
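The "variance accounted for" figure can be read as the R² of a regression with the predictors entered jointly. A toy sketch with simulated data (predictor roles, effect sizes, and noise level are all invented, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated listeners: three standardized predictors standing in for SMD
# score, low-frequency PTA, and age at implantation (values invented).
n = 311
predictors = rng.normal(size=(n, 3))
benefit = (0.35 * predictors[:, 0] - 0.30 * predictors[:, 1]
           + rng.normal(0.0, 1.0, n))

# Ordinary least squares with an intercept; R^2 is the proportion of
# variance in bimodal benefit the predictors explain jointly.
design = np.column_stack([np.ones(n), predictors])
coef, *_ = np.linalg.lstsq(design, benefit, rcond=None)
residuals = benefit - design @ coef
r_squared = 1.0 - residuals.var() / benefit.var()
```

With predictors entered together, R² reflects their combined (not summed) explanatory power, which is why it can exceed what any single predictor achieves alone.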
Affiliation(s)
- David M Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN
55
Souza P, Arehart K, Schoof T, Anderson M, Strori D, Balmert L. Understanding Variability in Individual Response to Hearing Aid Signal Processing in Wearable Hearing Aids. Ear Hear 2020;40:1280-1292. PMID: 30998547. PMCID: PMC6786927. DOI: 10.1097/aud.0000000000000717.
Abstract
OBJECTIVES Previous work has suggested that individual characteristics, including amount of hearing loss, age, and working memory ability, may affect response to hearing aid signal processing. The present study aims to extend work using metrics to quantify cumulative signal modifications under simulated conditions to real hearing aids worn in everyday listening environments. Specifically, the goal was to determine whether individual factors such as working memory, age, and degree of hearing loss play a role in explaining how listeners respond to signal modifications caused by signal processing in real hearing aids, worn in the listener's everyday environment, over a period of time. DESIGN Participants were older adults (age range 54-90 years) with symmetrical mild-to-moderate sensorineural hearing loss. We contrasted two distinct hearing aid fittings: one designated as mild signal processing and one as strong signal processing. Forty-nine older adults were enrolled in the study and 35 participants had valid outcome data for both hearing aid fittings. The difference between the two settings related to the wide dynamic range compression and frequency compression features. Order of fittings was randomly assigned for each participant. Each fitting was worn in the listener's everyday environments for approximately 5 weeks before outcome measurements. The trial was double blind, with neither the participant nor the tester aware of the specific fitting at the time of the outcome testing. Baseline measures included a full audiometric evaluation as well as working memory and spectral and temporal resolution. The outcome was aided speech recognition in noise. RESULTS The two hearing aid fittings resulted in different amounts of signal modification, with significantly less modification for the mild signal processing fitting. The effect of signal processing on speech intelligibility depended on an individual's age, working memory capacity, and degree of hearing loss. 
Speech recognition with the strong signal processing fitting decreased with increasing age. Working memory interacted with signal processing: individuals with lower working memory demonstrated low speech intelligibility in noise with both processing conditions, whereas individuals with higher working memory demonstrated better speech intelligibility in noise with the mild signal processing fitting. Amount of hearing loss interacted with signal processing, but the effects were small. Individual spectral and temporal resolution did not contribute significantly to the variance in the speech intelligibility score. CONCLUSIONS When the consequences of a specific set of hearing aid signal processing characteristics were quantified in terms of overall signal modification, there was a relationship between participant characteristics and recognition of speech at different levels of signal modification. Because the hearing aid fittings used were constrained to specific fitting parameters that represent the extremes of the signal modification that might occur in clinical fittings, future work should focus on similar relationships with more diverse types of signal processing parameters.
Affiliation(s)
- Pamela Souza
- Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, Evanston, Illinois, USA
- Kathryn Arehart
- Department of Speech Language Hearing Sciences, University of Colorado at Boulder
- Tim Schoof
- Department of Speech, Hearing and Phonetic Sciences, Division of Psychology and Language Sciences, University College London
- Melinda Anderson
- Department of Otolaryngology, University of Colorado School of Medicine
- Dorina Strori
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, USA
- Department of Linguistics, Northwestern University, Evanston, Illinois, USA
- Lauren Balmert
- Department of Preventive Medicine, Biostatistics Collaboration Center, Feinberg School of Medicine, Northwestern University
56
Yumba WK. Selected Cognitive Factors Associated with Individual Variability in Clinical Measures of Speech Recognition in Noise Amplified by Fast-Acting Compression Among Hearing Aid Users. Noise Health 2020;21:7-16. PMID: 32098926. PMCID: PMC7050232. DOI: 10.4103/nah.nah_59_18.
Abstract
Objective: Previous work examining speech recognition in challenging listening environments has revealed large variability both among persons with normal hearing and among those with hearing impairment. Although this is clinically very important, no consensus has yet been reached about which factors best explain the individual variability in speech recognition ability among hearing aid users when the speech signal is degraded. This study aimed to examine differences in hearing sensitivity and cognitive ability between listeners with good and poor speech recognition abilities. Materials and Methods: A total of 195 experienced hearing aid users (33–80 years) were grouped by higher or lower speech recognition ability based on their performance on the Hagerman sentences task in multi-talker babble using a fast-acting compression algorithm. They completed a battery of cognitive tests, a hearing-in-noise test, and auditory threshold measurements. Results: The results showed that the two groups differed significantly overall on cognitive tests of working memory, cognitive processing speed, and attentional shifting, but not on the attentional inhibition test or the non-verbal intelligence test. Conclusions: Listeners with poor speech recognition abilities exhibit poorer cognitive abilities than those with better abilities, which places them at a disadvantage and/or makes them more susceptible to signal modifications (introduced by fast-acting compression signal processing), resulting in limited benefit from hearing aid processing strategies. The findings may have implications for the selection of hearing aid signal processing strategies in rehabilitation.
Affiliation(s)
- Wycliffe K Yumba
- Department of Behavioral Sciences and Learning, Linköping University, Linköping; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
57
Strand JF, Ray L, Dillman-Hasso NH, Villanueva J, Brown VA. Understanding Speech Amid the Jingle and Jangle: Recommendations for Improving Measurement Practices in Listening Effort Research. Audit Percept Cogn 2020;3:169-188. PMID: 34240011. DOI: 10.1080/25742442.2021.1903293.
Abstract
The latent constructs psychologists study are typically not directly accessible, so researchers must design measurement instruments that are intended to provide insights about those constructs. Construct validation, assessing whether instruments measure what they intend to, is therefore critical for ensuring that the conclusions we draw actually reflect the intended phenomena. Insufficient construct validation can lead to the jingle fallacy, falsely assuming two instruments measure the same construct because the instruments share a name (Thorndike, 1904), and the jangle fallacy, falsely assuming two instruments measure different constructs because the instruments have different names (Kelley, 1927). In this paper, we examine construct validation practices in research on listening effort and identify patterns that strongly suggest the presence of jingle and jangle in the literature. We argue that the lack of construct validation for listening effort measures has led to inconsistent findings and hindered our understanding of the construct. We also provide specific recommendations for improving construct validation of listening effort instruments, drawing on the framework laid out in a recent paper on improving measurement practices (Flake & Fried, 2020). Although this paper addresses listening effort, the issues raised and recommendations presented are widely applicable to tasks used in research on auditory perception and cognitive psychology.
Affiliation(s)
- Lucia Ray
- Carleton College, Department of Psychology
- Violet A Brown
- Washington University in St. Louis, Department of Psychological & Brain Sciences
58
Guijo LM, Horiuti MB, Cardoso ACV. Validação de conteúdo de um instrumento para mensuração do esforço auditivo [Content validation of an instrument for measuring listening effort]. Codas 2020;32:e20180272. DOI: 10.1590/2317-1782/20202018272.
Abstract
ABSTRACT Purpose: To validate the content of an instrument for measuring listening effort in individuals with hearing loss. Methods: This validation study was carried out in two phases: phase 1 comprised the planning and development of the first version of the instrument, and phase 2 the investigation of content-based validity evidence and the development of the final version of the listening-effort instrument. Ten professionals with expertise in audiology and more than five years of experience participated. The instrument to be validated consisted of three parts: I, "speech perception of nonsense syllables and listening effort"; II, "listening effort and working memory"; and III, "perception of nonsense sentences and working memory", presented monaurally in quiet and at signal-to-noise ratios of +5 dB, 0 dB, and -5 dB. A descriptive analysis was performed of the suggestions of the committee of speech-language pathologists and of the item-level and total content validity indices. Results: The results showed that parts I and III of the proposed instrument reached a total content validity index above 0.78, i.e., the items presented did not require modifications to their construct. Conclusion: The validity evidence studied allowed relevant modifications and made the instrument adequate to its construct.
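The content validity index (CVI) referenced above has a simple arithmetic definition: the proportion of expert judges rating an item as relevant (typically 3 or 4 on a 4-point scale), averaged across items for the scale-level index, with 0.78 the conventional cut-off for a panel of this size. A minimal sketch, with ratings invented for illustration (the study's actual ratings are not reproduced here):

```python
def item_cvi(ratings):
    """Item-level CVI: proportion of judges rating the item 3 or 4."""
    return sum(r >= 3 for r in ratings) / len(ratings)

def scale_cvi(all_item_ratings):
    """Scale-level CVI (averaging method): mean of the item-level CVIs."""
    return sum(item_cvi(r) for r in all_item_ratings) / len(all_item_ratings)

# Ten judges, as in the study; the three items below are hypothetical.
items = [
    [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    [4, 3, 3, 4, 2, 4, 3, 4, 4, 3],
    [2, 3, 4, 2, 3, 4, 2, 3, 4, 3],
]
per_item = [item_cvi(r) for r in items]   # 1.0, 0.9, 0.7
total = scale_cvi(items)                  # ~0.87, above the 0.78 cut-off
```

In this invented panel the third item (I-CVI = 0.7) would be flagged for revision even though the scale-level index clears the threshold.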
59
Taitelbaum-Swead R, Kozol Z, Fostick L. Listening Effort Among Adults With and Without Attention-Deficit/Hyperactivity Disorder. J Speech Lang Hear Res 2019;62:4554-4563. PMID: 31747524. DOI: 10.1044/2019_jslhr-h-19-0134.
Abstract
Purpose Few studies have assessed listening effort (LE), the cognitive resources required to perceive speech, among populations with intact hearing but reduced availability of cognitive resources. Attention-deficit/hyperactivity disorder (ADHD) is theorized to restrict attention span, possibly making speech perception in adverse conditions more challenging. This study examined the effect of ADHD on LE among adults using a behavioral dual-task paradigm (DTP). Method Thirty-nine normal-hearing adults (aged 21-27 years) participated: 19 with ADHD (ADHD group) and 20 without ADHD (control group). Baseline group differences were measured in visual and auditory attention as well as speech perception. LE using the DTP was assessed as the performance difference on a visual-motor task versus a simultaneous auditory and visual-motor task. Results Group differences in attention were confirmed by differences in visual attention (larger reaction times between congruent and incongruent conditions) and auditory attention (lower accuracy in the presence of distractors) in the ADHD group compared to the controls. LE was greater in the ADHD group than in the control group. Nevertheless, no group differences were found in speech perception. Conclusions LE is increased among those with ADHD. As a DTP assumes a limited cognitive capacity for allocating attentional resources, LE among those with ADHD may be increased because higher level cognitive processes are more taxed in this population. Studies on LE using a DTP should take into consideration mechanisms of selective and divided attention. Among young adults who need to continuously process great volumes of auditory and visual information, much more effort may be expended by those with ADHD than those without it. As a result, those with ADHD may be more prone to fatigue and irritability, similar to those who are engaged in more outwardly demanding tasks.
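In a behavioral dual-task paradigm, listening effort is commonly quantified as the decline in secondary-task performance when the listening task is added. A minimal sketch of that computation, with group scores invented for illustration (not the study's data):

```python
def dual_task_cost(single_score, dual_score):
    """Proportional decline in secondary-task performance under
    dual-tasking; larger values indicate greater listening effort."""
    return (single_score - dual_score) / single_score

# Hypothetical group means on the visual-motor task (arbitrary units).
control_cost = dual_task_cost(single_score=100.0, dual_score=92.0)
adhd_cost = dual_task_cost(single_score=100.0, dual_score=80.0)
```

Normalizing by the single-task baseline lets groups with different baseline speeds or accuracies be compared on the same effort scale.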
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Zvi Kozol
- Department of Physiotherapy, Ariel University, Israel
- Leah Fostick
- Department of Communication Disorders, Ariel University, Israel
60
Macpherson EA, Curca IA, Scollie S, Parsa V, Vansevenant K, Zimmerman K, Lewis-Teeter J, Allen P, Parnes L, Agrawal S. Effects of Bimodal and Bilateral Cochlear Implant Use on a Nonauditory Working Memory Task: Reading Span Tests Over 2 Years Following Cochlear Implantation. Am J Audiol 2019;28:947-963. PMID: 31829722. DOI: 10.1044/2019_aja-19-0030.
Abstract
Purpose A growing body of evidence indicates that treatment of hearing loss by provision of hearing aids leads to improvements in auditory and visual working memory. The purpose of this study was to assess whether similar working memory benefits are observed following provision of cochlear implants (CIs). Method Fifteen adults with postlingually acquired severe bilateral sensorineural hearing loss completed the prospective longitudinal study. Participants were candidates for bilateral cochlear implantation with some aidable hearing in each ear. Implantation surgeries were carried out sequentially, approximately 1 year apart. Working memory was measured with the visual Reading Span Test (Daneman & Carpenter, 1980) at 5 time points: pre-operatively following a 6-month bilateral hearing aid trial, after 6 and 12 months of bimodal (CI plus contralateral hearing aid) listening experience following the 1st CI surgery and activation, and again after 6 and 12 months of bilateral CI listening experience following the 2nd CI surgery and activation. Results Compared to the preoperative baseline, CI listening experience yielded significant improvements in participants' ability to recall test words in the correct serial order after 12 months in the bimodal condition. Individual performance outcomes were variable, but almost all participants showed increases in task performance over the course of the study. Conclusions These results suggest that, similar to appropriate interventions with hearing aids, treatment of hearing loss with CIs can yield working memory benefits. A likely mechanism is the freeing of cognitive resources previously devoted to effortful listening.
Affiliation(s)
- Ewan A. Macpherson
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- National Centre for Audiology, Western University, London, Ontario, Canada
- Ioan A. Curca
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- National Centre for Audiology, Western University, London, Ontario, Canada
- Susan Scollie
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- National Centre for Audiology, Western University, London, Ontario, Canada
- Vijay Parsa
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- National Centre for Audiology, Western University, London, Ontario, Canada
- Kim Zimmerman
- Cochlear Implant Program, London Health Sciences Centre, Ontario, Canada
- Jamie Lewis-Teeter
- Cochlear Implant Program, London Health Sciences Centre, Ontario, Canada
- Prudence Allen
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- National Centre for Audiology, Western University, London, Ontario, Canada
- Lorne Parnes
- Cochlear Implant Program, London Health Sciences Centre, Ontario, Canada
- Department of Otolaryngology—Head and Neck Surgery, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
- Sumit Agrawal
- Cochlear Implant Program, London Health Sciences Centre, Ontario, Canada
- Department of Otolaryngology—Head and Neck Surgery, Schulich School of Medicine & Dentistry, Western University, London, Ontario, Canada
| |
61
Keerstock S, Smiljanic R. Clear speech improves listeners' recall. J Acoust Soc Am 2019;146:4604. PMID: 31893679. DOI: 10.1121/1.5141372.
Abstract
The present study examined the effect of intelligibility-enhancing clear speech on listeners' recall. Native (n = 57) and non-native (n = 31) English listeners heard meaningful sentences produced in clear and conversational speech, and then completed a cued-recall task. Results showed that listeners recalled more words from clearly produced sentences. Sentence-level analysis revealed that listening to clear speech increased the odds of recalling whole sentences and decreased the odds of erroneous and omitted responses. This study showed that the clear speech benefit extends beyond word- and sentence-level recognition memory to include deeper linguistic encoding at the level of syntactic and semantic information.
Affiliation(s)
- Sandie Keerstock
- Department of Linguistics, University of Texas at Austin, 305 East 23rd Street STOP B5100, Austin, Texas 78712, USA
- Rajka Smiljanic
- Department of Linguistics, University of Texas at Austin, 305 East 23rd Street STOP B5100, Austin, Texas 78712, USA
62
Abstract
It is widely accepted that seeing a talker improves a listener's ability to understand what a talker is saying in background noise (e.g., Erber, 1969; Sumby & Pollack, 1954). The literature is mixed, however, regarding the influence of the visual modality on the listening effort required to recognize speech (e.g., Fraser, Gagné, Alepins, & Dubois, 2010; Sommers & Phelps, 2016). Here, we present data showing that even when the visual modality robustly benefits recognition, processing audiovisual speech can still result in greater cognitive load than processing speech in the auditory modality alone. Using a dual-task paradigm, we show that the costs associated with audiovisual speech processing are more pronounced in easy listening conditions, in which speech can be recognized at high rates in the auditory modality alone; indeed, effort did not differ between audiovisual and audio-only conditions when the background noise was presented at a more difficult level. Further, we show that though these effects replicate with different stimuli and participants, they do not emerge when effort is assessed with a recall paradigm rather than a dual-task paradigm. Together, these results suggest that the widely cited audiovisual recognition benefit may come at a cost under more favorable listening conditions, and add to the growing body of research suggesting that various measures of effort may not be tapping into the same underlying construct (Strand et al., 2018).
63
Basavanahalli Jagadeesh A, Kumar U A. Effect of informational masking on auditory working memory: role of linguistic information in the maskers. Hear Balance Commun 2019. DOI: 10.1080/21695717.2019.1630980.
Affiliation(s)
- Anoop Basavanahalli Jagadeesh
- Facility for Advanced Auditory Research (FAAR), Department of Audiology, All India Institute of Speech and Hearing, Mysuru, India
- Ajith Kumar U
- Department of Audiology, All India Institute of Speech and Hearing, Mysuru, India
64
Megha, Maruthy S. Auditory and Cognitive Attributes of Hearing Aid Acclimatization in Individuals With Sensorineural Hearing Loss. Am J Audiol 2019;28:460-470. PMID: 31461327. DOI: 10.1044/2018_aja-ind50-18-0100.
Abstract
Purpose The study aimed to investigate the underlying mechanisms of the perceived benefit of hearing aid acclimatization. Specifically, measures in the auditory and cognitive domains were used to investigate their relationship with the perceived benefit. Method Twenty-six individuals with sensorineural hearing loss served as participants. The perceived benefit of hearing aid use was assessed using the Speech, Spatial and Qualities of Hearing Scale (SSQ; Gatehouse & Noble, 2004). Signal-to-noise ratio-50 (SNR-50) and acceptable noise levels were the measures in the auditory domain, whereas working memory and listening effort (LE) were the measures in the cognitive domain. All the measures were tracked over a span of 2 months of hearing aid use to determine the benefits of hearing aid acclimatization. Results The SSQ showed improvements from baseline to the 2nd month of hearing aid use. The mean improvement in SNR-50 was 3.19 dB from baseline. Acceptable noise levels and working memory did not change with hearing aid use. LE showed improvements in quiet but not in noise. The improvements in the SSQ were found to relate to the improvements in SNR-50. Conclusions The study indicated a significant perceived benefit with hearing aid acclimatization, and the underlying mechanism appears to be the gain in signal-to-noise ratio. The LE findings indicated reduced effort, suggesting a lower cognitive load with hearing aid acclimatization. In addition, the individuals who performed more poorly at baseline showed greater perceived benefit with hearing aid acclimatization. Supplemental Material https://doi.org/10.23641/asha.9253175.
Collapse
Affiliation(s)
- Megha
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka, India
| | - Sandeep Maruthy
- Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysuru, Karnataka, India
| |
Collapse
|
65
|
Moradi S, Lidestam B, Ng EHN, Danielsson H, Rönnberg J. Perceptual Doping: An Audiovisual Facilitation Effect on Auditory Speech Processing, From Phonetic Feature Extraction to Sentence Identification in Noise. Ear Hear 2019; 40:312-327. [PMID: 29870521 PMCID: PMC6400397 DOI: 10.1097/aud.0000000000000616] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2017] [Accepted: 04/15/2018] [Indexed: 11/25/2022]
Abstract
OBJECTIVE We have previously shown that the gain provided by prior audiovisual (AV) speech exposure for subsequent auditory (A) sentence identification in noise is relatively larger than that provided by prior A speech exposure. We have called this effect "perceptual doping." Specifically, prior AV speech processing dopes (recalibrates) the phonological and lexical maps in the mental lexicon, which facilitates subsequent phonological and lexical access in the A modality, separately from other learning and priming effects. In this article, we use data from the n200 study and aim to replicate and extend the perceptual doping effect using two different A and two different AV speech tasks and a larger sample than in our previous studies. DESIGN The participants were 200 hearing aid users with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Four speech tasks in the n200 study were presented in both A and AV modalities (gated consonants, gated vowels, vowel duration discrimination, and sentence identification in noise). The modality order of speech presentation was counterbalanced across participants: half of the participants completed the A modality first and the AV modality second (A1-AV2), and the other half completed the AV modality first and then the A modality (AV1-A2). Based on the perceptual doping hypothesis, which assumes that the gain of prior AV exposure is relatively larger than that of prior A exposure for subsequent processing of speech stimuli, we predicted that the mean A scores in the AV1-A2 modality order would be better than those in the A1-AV2 modality order, and we therefore expected a significant difference in the identification of A speech stimuli between the two modality orders (A1 versus A2). Because prior A exposure provides a smaller gain than AV exposure, we also predicted that the difference in AV speech scores between the two modality orders (AV1 versus AV2) would not be statistically significant. RESULTS In the gated consonant and vowel tasks and the vowel duration discrimination task, there were significant differences in A performance between the two modality orders: the participants' mean A performance was better in the AV1-A2 than in the A1-AV2 modality order (i.e., after AV processing). In terms of mean AV performance, no significant difference was observed between the two orders. In the sentence identification in noise task, a significant difference in A identification between the two orders was observed (A1 versus A2), as was a significant difference in AV identification (AV1 versus AV2). This finding was most likely due to a procedural learning effect arising from the greater complexity of the sentence materials, or a combination of procedural and perceptual learning due to the presentation of sentential materials in noisy conditions. CONCLUSIONS The findings of the present study support the perceptual doping hypothesis, as prior AV relative to A speech exposure resulted in a larger gain for the subsequent processing of speech stimuli. For complex speech stimuli presented in degraded listening conditions, a procedural learning effect (or a combination of procedural and perceptual learning effects) also facilitated identification, irrespective of whether the prior modality was A or AV.
Collapse
Affiliation(s)
- Shahram Moradi
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Björn Lidestam
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Elaine Hoi Ning Ng
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Oticon A/S, Smørum, Denmark
| | - Henrik Danielsson
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| | - Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
| |
Collapse
|
66
|
Francis AL, Love J. Listening effort: Are we measuring cognition or affect, or both? WILEY INTERDISCIPLINARY REVIEWS. COGNITIVE SCIENCE 2019; 11:e1514. [PMID: 31381275 DOI: 10.1002/wcs.1514] [Citation(s) in RCA: 46] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/07/2019] [Revised: 07/07/2019] [Accepted: 07/10/2019] [Indexed: 12/14/2022]
Abstract
Listening effort is increasingly recognized as a factor in communication, particularly for and with nonnative speakers, for the elderly, for individuals with hearing impairment and/or for those working in noise. However, as highlighted by McGarrigle et al., International Journal of Audiology, 2014, 53, 433-445, the term "listening effort" encompasses a wide variety of concepts, including the engagement and control of multiple possibly distinct neural systems for information processing, and the affective response to the expenditure of those resources in a given context. Thus, experimental or clinical methods intended to objectively quantify listening effort may ultimately reflect a complex interaction between the operations of one or more of those information processing systems, and/or the affective and motivational response to the demand on those systems. Here we examine theoretical, behavioral, and psychophysiological factors related to resolving the question of what we are measuring, and why, when we measure "listening effort." This article is categorized under: Linguistics > Language in Mind and Brain Psychology > Theory and Methods Psychology > Attention Psychology > Emotion and Motivation.
Collapse
Affiliation(s)
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
| | - Jordan Love
- Department of Speech, Language and Hearing Sciences, Purdue University, West Lafayette, Indiana
| |
Collapse
|
67
|
Speech Perception in Noise and Listening Effort of Older Adults With Nonlinear Frequency Compression Hearing Aids. Ear Hear 2019; 39:215-225. [PMID: 28806193 DOI: 10.1097/aud.0000000000000481] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The purpose of this laboratory-based study was to compare the efficacy of two hearing aid fittings with and without nonlinear frequency compression, implemented within commercially available hearing aids. Previous research regarding the utility of nonlinear frequency compression has revealed conflicting results for speech recognition, marked by high individual variability. Individual differences in auditory function and cognitive abilities, specifically hearing loss slope and working memory, may contribute to aided performance. The first aim of the study was to determine the effect of nonlinear frequency compression on aided speech recognition in noise and listening effort using a dual-task test paradigm. The hypothesis, based on the Ease of Language Understanding model, was that nonlinear frequency compression would improve speech recognition in noise and decrease listening effort. The second aim of the study was to determine if listener variables of hearing loss slope, working memory capacity, and age would predict performance with nonlinear frequency compression. DESIGN A total of 17 adults (age, 57-85 years) with symmetrical sensorineural hearing loss were tested in the sound field using hearing aids fit to target (NAL-NL2). Participants were recruited with a range of hearing loss severities and slopes. A within-subjects, single-blinded design was used to compare performance with and without nonlinear frequency compression. Speech recognition in noise and listening effort were measured by adapting the Revised Speech in Noise Test into a dual-task paradigm. Participants were required trial-by-trial to repeat the last word of each sentence presented in speech babble and then recall the sentence-ending words after every block of six sentences. Half of the sentences were rich in context for the recognition of the final word of each sentence, and half were neutral in context. 
Extrinsic factors of sentence context and nonlinear frequency compression were manipulated, and intrinsic factors of hearing loss slope, working memory capacity, and age were measured to determine which participant factors were associated with benefit from nonlinear frequency compression. RESULTS On average, speech recognition in noise improved significantly with the use of nonlinear frequency compression. Individuals with steeply sloping hearing loss received more recognition benefit. Recall performance also improved significantly at the group level with nonlinear frequency compression, indicating reduced listening effort. Older participants within the study cohort received less recall benefit than younger participants. The benefits of nonlinear frequency compression for speech recognition and listening effort did not correlate with each other, suggesting separable sources of benefit for these outcome measures. CONCLUSIONS Improvements in speech recognition in noise and reduced listening effort indicate that adult hearing aid users can benefit from nonlinear frequency compression in a noisy environment, with the amount of benefit varying across individuals and across outcome measures. The evidence supports individualized selection of nonlinear frequency compression, with results suggesting benefits in speech recognition for individuals with steeply sloping hearing losses and in listening effort for younger individuals. Future research with a larger data set is indicated on the dual-task paradigm as a potential cognitive outcome measure.
Collapse
|
68
|
Huber R, Rählmann S, Bisitz T, Meis M, Steinhauser S, Meister H. Influence of working memory and attention on sound-quality ratings. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:1283. [PMID: 31067927 DOI: 10.1121/1.5092808] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Accepted: 02/14/2019] [Indexed: 06/09/2023]
Abstract
This study investigated the potential influence of cognitive factors on subjective sound-quality ratings. To this end, 34 older subjects (ages 61-79) with near-normal hearing thresholds rated the perceived sound quality of speech and music stimuli that had been distorted by linear filtering, non-linear processing, and multiband dynamic compression. In addition, all subjects performed the Reading Span Test (RST) to assess working memory capacity (WMC), and the test d2-R (a visual test of letter and symbol identification) was used to assess the subjects' selective and sustained attention. The quality-rating scores, which reflected the susceptibility to signal distortions, were characterized by large interindividual variances. Linear mixed modelling with age, high-frequency pure tone threshold, RST, and d2-R results as independent variables showed that individual speech-quality ratings were significantly related to age and attention. Music-quality ratings were significantly related to WMC. Taking these factors into account might lead to improved sound-quality prediction models. Future studies should, however, address the question of whether these effects are due to procedural mechanisms or actually do show that cognitive abilities mediate sensitivity to sound-quality modifications.
Collapse
Affiliation(s)
- Rainer Huber
- HörTech gGmbH and Cluster of Excellence Hearing4All, Marie-Curie-Straße 2, 26129 Oldenburg, Germany
| | - Sebastian Rählmann
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Geibelstraße 29-31, 50931 Cologne, Germany
| | - Thomas Bisitz
- HörTech gGmbH and Cluster of Excellence Hearing4All, Marie-Curie-Straße 2, 26129 Oldenburg, Germany
| | - Markus Meis
- Hörzentrum Oldenburg GmbH and Cluster of Excellence Hearing4All, Marie-Curie-Straße 2, 26129 Oldenburg, Germany
| | - Susanne Steinhauser
- Institute of Medical Statistics and Computational Biology, University Hospital of Cologne, Cologne, Germany
| | - Hartmut Meister
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Geibelstraße 29-31, 50931 Cologne, Germany
| |
Collapse
|
69
|
Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019; 58:247-261. [DOI: 10.1080/14992027.2018.1551631] [Citation(s) in RCA: 52] [Impact Index Per Article: 10.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023]
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Emil Holmer
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
70
|
Guijo LM, Horiuti MB, Nardez TMB, Cardoso ACV. Listening effort and working memory capacity in hearing impaired individuals: an integrative literature review. REVISTA CEFAC 2018. [DOI: 10.1590/1982-021620182066618] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
ABSTRACT Purpose: to review the literature on behavioral methods of listening effort assessment and the working memory capacity measures recommended for hearing-impaired individuals. Methods: this review was developed through a search of articles in national and international journals, in English and Portuguese, available in PubMed/Medline, the Cochrane Library, Biblioteca Virtual em Saúde - Literatura Latino-Americana e do Caribe em Ciências da Saúde (LILACS), and the Scientific Electronic Library Online, published between 2007 and 2017. Articles were selected based on the inclusion criteria: studies that used behavioral methods to assess listening effort in hearing-impaired adults, involving the measurement of working memory and its relationship with listening effort, published in the last 10 years. Results: twelve articles in which behavioral measures were used to assess listening effort and working memory capacity in hearing-impaired individuals were reviewed. Their main findings concern the purpose(s) of the research, the participants, the behavioral method composed of a primary task (speech perception) and a secondary task (memorization), and the results of the studies. Conclusion: the findings of this review indicate that the dual-task paradigm is sensitive for measuring listening effort, considering the different instruments used and the populations assessed.
Collapse
|
71
|
Di Stadio A, Dipietro L, Toffano R, Burgio F, De Lucia A, Ippolito V, Garofalo S, Ricci G, Martines F, Trabalzini F, Della Volpe A. Working Memory Function in Children with Single Side Deafness Using a Bone-Anchored Hearing Implant: A Case-Control Study. Audiol Neurootol 2018; 23:238-244. [PMID: 30439708 DOI: 10.1159/000493722] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2018] [Accepted: 09/10/2018] [Indexed: 11/19/2022] Open
Abstract
The importance of good hearing function for preserving memory and cognitive abilities has been shown in the adult population, but studies in the pediatric population are currently lacking. This study aimed to evaluate the effects of a bone-anchored hearing implant (BAHI) on speech perception, speech processing, and memory abilities in children with single side deafness (SSD). We enrolled n = 25 children with SSD and assessed them prior to BAHI implantation and at 1-month and 3-month follow-ups after implantation, using tests of perception in silence and in phonemic confusion, dictation in silence and in noise, and working memory and short-term memory function in silence and in noise. We also enrolled and evaluated n = 15 children with normal hearing. We found a statistically significant difference in performance between healthy children and children with SSD before BAHI implantation in the scores of all tests. Three months after BAHI implantation, the performance of children with SSD was comparable to that of healthy subjects as assessed by tests of speech perception, working memory, and short-term memory function in silence, while differences persisted in the scores of the dictation test (in both silence and noise) and of the working memory test in noise. Our data suggest that in children with SSD, BAHI improves speech perception and memory. Speech rehabilitation may be necessary to further improve speech processing.
Collapse
Affiliation(s)
- Arianna Di Stadio
- Neurology and Neuropsychology Unit, IRCCS, San Camillo Hospital, Venice, Italy
| | | | - Roberta Toffano
- Neurology and Neuropsychology Unit, IRCCS, San Camillo Hospital, Venice, Italy
| | - Francesca Burgio
- Neurology and Neuropsychology Unit, IRCCS, San Camillo Hospital, Venice, Italy
| | - Antonietta De Lucia
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
| | - Valentina Ippolito
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
| | - Sabina Garofalo
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
| | - Giampietro Ricci
- Otolaryngology Department, University of Perugia, Perugia, Italy
| | | | | | - Antonio Della Volpe
- Cochlear Implant Unit, Children Hospital Santobono-Pausilipon, Naples, Italy
| |
Collapse
|
72
|
Meister H, Rählmann S, Walger M. Low background noise increases cognitive load in older adults listening to competing speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 144:EL417. [PMID: 30522293 DOI: 10.1121/1.5078953] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/25/2018] [Accepted: 10/28/2018] [Indexed: 06/09/2023]
Abstract
This letter describes a dual-task paradigm sensitive to noise masking at favorable signal-to-noise ratios (SNRs). Two competing sentences differing in voice and context cues were presented against noise at SNRs of +2 and +6 dB. Listeners were asked to repeat back words from both competing sentences while prioritizing one of them. Recognition of the high-priority sentences was high and did not depend on the SNR. In contrast, recognition of the low-priority sentences was low and showed a significant SNR effect that was related to the listener's working memory capacity. This suggests that even subtle noise masking causes cognitive load in competing-talker situations.
Collapse
Affiliation(s)
- Hartmut Meister
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Geibelstrasse 29-31, D-50931 Cologne, Germany
| | - Sebastian Rählmann
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Geibelstrasse 29-31, D-50931 Cologne, Germany
| | - Martin Walger
- Clinic of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Kerpenerstr. 62, 50924 Cologne, Germany
| |
Collapse
|
73
|
Social Connectedness and Perceived Listening Effort in Adult Cochlear Implant Users: A Grounded Theory to Establish Content Validity for a New Patient-Reported Outcome Measure. Ear Hear 2018; 39:922-934. [DOI: 10.1097/aud.0000000000000553] [Citation(s) in RCA: 54] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
74
|
The effect of reward on listening effort as reflected by the pupil dilation response. Hear Res 2018; 367:106-112. [DOI: 10.1016/j.heares.2018.07.011] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/02/2018] [Revised: 07/19/2018] [Accepted: 07/25/2018] [Indexed: 11/22/2022]
|
75
|
Strand JF, Brown VA, Merchant MB, Brown HE, Smith J. Measuring Listening Effort: Convergent Validity, Sensitivity, and Links With Cognitive and Personality Measures. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:1463-1486. [PMID: 29800081 DOI: 10.1044/2018_jslhr-h-17-0257] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/07/2017] [Accepted: 02/06/2018] [Indexed: 06/08/2023]
Abstract
PURPOSE Listening effort (LE) describes the attentional or cognitive requirements for successful listening. Despite substantial theoretical and clinical interest in LE, inconsistent operationalization makes it difficult to make generalizations across studies. The aims of this large-scale validation study were to evaluate the convergent validity and sensitivity of commonly used measures of LE and assess how scores on those tasks relate to cognitive and personality variables. METHOD Young adults with normal hearing (N = 111) completed 7 tasks designed to measure LE, 5 tests of cognitive ability, and 2 personality measures. RESULTS Scores on some behavioral LE tasks were moderately intercorrelated but were generally not correlated with subjective and physiological measures of LE, suggesting that these tasks may not be tapping into the same underlying construct. LE measures differed in their sensitivity to changes in signal-to-noise ratio and the extent to which they correlated with cognitive and personality variables. CONCLUSIONS Given that LE measures do not show consistent, strong intercorrelations and differ in their relationships with cognitive and personality predictors, these findings suggest caution in generalizing across studies that use different measures of LE. The results also indicate that people with greater cognitive ability appear to use their resources more efficiently, thereby diminishing the detrimental effects associated with increased background noise during language processing.
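The convergent-validity question above rests on intercorrelations between task scores, i.e., Pearson correlations. As a reminder of the quantity being computed, here is a minimal pure-Python sketch (illustrative only; the study's actual analysis pipeline is not specified here):

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length score lists,
    e.g. scores from two listening-effort tasks."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Covariance numerator and the two standard-deviation terms.
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Perfectly linear scores correlate at ~1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))
```

Tasks tapping the same construct would be expected to show strong positive values of this statistic; the study's finding of only moderate intercorrelations is what motivates its caution about generalizing across measures.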
Collapse
Affiliation(s)
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN
| | - Violet A Brown
- Department of Psychology, Carleton College, Northfield, MN
| | | | - Hunter E Brown
- Department of Psychology, Carleton College, Northfield, MN
| | - Julia Smith
- Department of Psychology, Carleton College, Northfield, MN
| |
Collapse
|
76
|
Ahmadi R, Jalilvand H, Mahdavi ME, Ahmadi F, Baghban ARA. The Effects of Hearing Aid Digital Noise Reduction and Directionality on Acceptable Noise Level. Clin Exp Otorhinolaryngol 2018; 11:267-274. [PMID: 29902915 PMCID: PMC6222189 DOI: 10.21053/ceo.2018.00052] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2018] [Accepted: 04/17/2018] [Indexed: 11/22/2022] Open
Abstract
OBJECTIVES Two main digital signal processing technologies inside modern hearing aids that provide the best conditions for hearing aid users are directionality (DIR) and digital noise reduction (DNR) algorithms. There are various possible settings for these algorithms. The present study evaluates the effects of various DIR and DNR conditions (both separately and in combination) on listening comfort among hearing aid users. METHODS In 18 participants who regularly received hearing aid fitting services from the Rehabilitation School of Shahid Beheshti University of Medical Sciences, we used the acceptable noise level (ANL) as our subjective measure of listening comfort. ANL was evaluated under six different hearing aid conditions: omnidirectional-baseline, omnidirectional-broadband DNR, omnidirectional-multichannel DNR, directional, directional-broadband DNR, and directional-multichannel DNR. RESULTS The ANL results ranged from -3 dB to 14 dB across all conditions. The results show that, among all conditions, the omnidirectional-baseline and omnidirectional-broadband DNR conditions were the worst for listening in noise. DIR always reduced the amount of noise that participants received during testing, whereas the DNR algorithm did not significantly improve listening in noise compared with DIR. Although both DNR and DIR yielded lower ANLs, DIR was more effective than DNR. CONCLUSION The DIR and DNR technologies provide listening comfort in the presence of noise; thus, user benefit depends on how the digital signal processing settings inside the hearing aid are adjusted.
Collapse
Affiliation(s)
- Roghayeh Ahmadi
- Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Hamid Jalilvand
- Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Mohammad Ebrahim Mahdavi
- Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| | - Fatemeh Ahmadi
- School of Economic, Allameh Tabataba'i University, Tehran, Iran
| | - Ali Reza Akbarzade Baghban
- Department of Audiology, School of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
| |
Collapse
|
77
|
Ohlenforst B, Wendt D, Kramer SE, Naylor G, Zekveld AA, Lunner T. Impact of SNR, masker type and noise reduction processing on sentence recognition performance and listening effort as indicated by the pupil dilation response. Hear Res 2018; 365:90-99. [PMID: 29779607 DOI: 10.1016/j.heares.2018.05.003] [Citation(s) in RCA: 56] [Impact Index Per Article: 9.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/19/2017] [Revised: 05/01/2018] [Accepted: 05/03/2018] [Indexed: 11/26/2022]
Abstract
Recent studies have shown that activating the noise reduction scheme in hearing aids results in a smaller peak pupil dilation (PPD), indicating reduced listening effort, at 50% and 95% correct sentence recognition with a 4-talker masker. The objective of this study was to measure the effect of the noise reduction scheme (on or off) on PPD and sentence recognition across a wide range of signal-to-noise ratios (SNRs), from +16 dB to -12 dB, and two masker types (4-talker babble and stationary noise). Relatively low PPDs were observed at very low (-12 dB) and very high (+16 dB to +8 dB) SNRs, presumably due to 'giving up' and 'easy listening', respectively. The maximum PPD was observed at SNRs corresponding to approximately 50% correct sentence recognition. Sentence recognition with both masker types was significantly improved by the noise reduction scheme, corresponding to a shift of the performance-versus-SNR function by approximately 5 dB toward lower SNRs. This intelligibility effect was accompanied by a corresponding effect on the PPD, shifting its peak by approximately 4 dB toward a lower SNR. In addition, with the 4-talker masker, the PPD was smaller overall when the noise reduction scheme was active than when it was inactive. We conclude that with the 4-talker masker, noise reduction processing provides a listening effort benefit in addition to any effect associated with improved intelligibility. Thus, the effect of the noise reduction scheme on listening effort incorporates more than can be explained by intelligibility alone, emphasizing the potential importance of measuring listening effort in addition to traditional speech reception measures.
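Peak pupil dilation is conventionally computed as the maximum baseline-corrected pupil size within a trial. A minimal sketch of that computation (an illustration only; the study's actual preprocessing, blink handling, and analysis window are not described here):

```python
def peak_pupil_dilation(trace, baseline_samples):
    """Baseline-correct a pupil trace and return its peak dilation.

    trace: pupil diameter samples for one trial.
    baseline_samples: number of leading samples forming the
    pre-stimulus baseline.
    """
    baseline = sum(trace[:baseline_samples]) / baseline_samples
    # Peak dilation is the largest deviation above baseline after
    # the baseline window.
    return max(x - baseline for x in trace[baseline_samples:])

# Baseline mean of the first two samples is 3.0; the largest
# post-baseline sample is 4.0, so the peak dilation is 1.0.
print(peak_pupil_dilation([3.0, 3.0, 3.25, 4.0, 3.5], 2))  # 1.0
```

Plotting this quantity against SNR, as the study does, traces out an inverted-U shape whose maximum sits near the 50%-correct point.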
Collapse
Affiliation(s)
- Barbara Ohlenforst
- Section Ear & Hearing, Dept. of Otolaryngology-Head and Neck Surgery, VU University Medical Center and Amsterdam Public Health Research Institute, Amsterdam, The Netherlands; Eriksholm Research Center, Oticon A/S, Denmark.
| | - Dorothea Wendt
- Eriksholm Research Center, Oticon A/S, Denmark; Department of Electrical Engineering, Technical University of Denmark, Denmark
| | - Sophia E Kramer
- Section Ear & Hearing, Dept. of Otolaryngology-Head and Neck Surgery, VU University Medical Center and Amsterdam Public Health Research Institute, Amsterdam, The Netherlands
| | - Graham Naylor
- MRC/CSO Institute of Hearing Research, Scottish Section, Glasgow, United Kingdom, part of the University of Nottingham
| | - Adriana A Zekveld
- Section Ear & Hearing, Dept. of Otolaryngology-Head and Neck Surgery, VU University Medical Center and Amsterdam Public Health Research Institute, Amsterdam, The Netherlands; Department of Behavioral Sciences and Learning, Linköping University, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Sweden
| | - Thomas Lunner
- Department of Behavioral Sciences and Learning, Linköping University, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping and Örebro Universities, Sweden; Eriksholm Research Center, Oticon A/S, Denmark; Department of Electrical Engineering, Technical University of Denmark, Denmark
| |
Collapse
|
78
|
Van Engen KJ, McLaughlin DJ. Eyes and ears: Using eye tracking and pupillometry to understand challenges to speech recognition. Hear Res 2018; 369:56-66. [PMID: 29801981 DOI: 10.1016/j.heares.2018.04.013] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/03/2017] [Revised: 04/12/2018] [Accepted: 04/25/2018] [Indexed: 11/16/2022]
Abstract
Although human speech recognition is often experienced as relatively effortless, a number of common challenges can render the task more difficult. Such challenges may originate in talkers (e.g., unfamiliar accents, varying speech styles), the environment (e.g., noise), or in listeners themselves (e.g., hearing loss, aging, different native language backgrounds). Each of these challenges can reduce the intelligibility of spoken language, but even when intelligibility remains high, they can place greater processing demands on listeners. Noisy conditions, for example, can lead to poorer recall for speech, even when it has been correctly understood. Speech intelligibility measures, memory tasks, and subjective reports of listener difficulty all provide critical information about the effects of such challenges on speech recognition. Eye tracking and pupillometry complement these methods by providing objective physiological measures of online cognitive processing during listening. Eye tracking records the moment-to-moment direction of listeners' visual attention, which is closely time-locked to unfolding speech signals, and pupillometry measures the moment-to-moment size of listeners' pupils, which dilate in response to increased cognitive load. In this paper, we review the uses of these two methods for studying challenges to speech recognition.
79
van den Tillaart-Haverkate M, de Ronde-Brons I, Dreschler WA, Houben R. The Influence of Noise Reduction on Speech Intelligibility, Response Times to Speech, and Perceived Listening Effort in Normal-Hearing Listeners. Trends Hear 2018; 21:2331216517716844. PMID: 28656807. PMCID: PMC5495507. DOI: 10.1177/2331216517716844.
Abstract
Single-microphone noise reduction leads to subjective benefit, but not to objective improvements in speech intelligibility. We investigated whether response times (RTs) provide an objective measure of the benefit of noise reduction and whether the effect of noise reduction is reflected in rated listening effort. Twelve normal-hearing participants listened to digit triplets that were either unprocessed or processed with one of two noise-reduction algorithms: an ideal binary mask (IBM) and a more realistic minimum mean square error estimator (MMSE). For each of these three processing conditions, we measured (a) speech intelligibility, (b) RTs on two different tasks (identification of the last digit and arithmetic summation of the first and last digit), and (c) subjective listening effort ratings. All measurements were performed at four signal-to-noise ratios (SNRs): −5, 0, +5, and +∞ dB. Speech intelligibility was high (>97% correct) for all conditions. A significant decrease in response time, relative to the unprocessed condition, was found for both IBM and MMSE for the arithmetic but not the identification task. Listening effort ratings were significantly lower for IBM than for MMSE and unprocessed speech in noise. We conclude that RT for an arithmetic task can provide an objective measure of the benefit of noise reduction. For young normal-hearing listeners, both ideal and realistic noise reduction can reduce RTs at SNRs where speech intelligibility is close to 100%. Ideal noise reduction can also reduce perceived listening effort.
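The ideal binary mask (IBM) condition in this study relies on oracle knowledge of the separate speech and noise signals: a time-frequency cell is kept when the local SNR exceeds a criterion, and zeroed otherwise. A minimal sketch of that idea, assuming precomputed magnitude spectrograms and an illustrative 0 dB local criterion (function name and toy data are hypothetical, not the authors' implementation):

```python
import numpy as np

def ideal_binary_mask(speech_spec, noise_spec, lc_db=0.0):
    """Oracle ideal binary mask: keep a time-frequency cell when the
    local SNR (speech power over noise power, in dB) exceeds lc_db."""
    eps = 1e-12  # avoid log of zero
    local_snr_db = 10.0 * np.log10((speech_spec**2 + eps) / (noise_spec**2 + eps))
    return (local_snr_db > lc_db).astype(float)

# Toy 2x3 magnitude spectrograms: speech dominates some cells, noise others.
speech = np.array([[1.0, 0.1, 2.0],
                   [0.2, 3.0, 0.1]])
noise = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.5, 1.0]])
mask = ideal_binary_mask(speech, noise)
denoised = mask * (speech + noise)  # apply mask to the noisy mixture
```

Realistic estimators such as the MMSE algorithm tested here must instead infer the noise statistics from the mixture alone, which is why the IBM serves as an upper-bound reference condition.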
Affiliation(s)
- Maj van den Tillaart-Haverkate: Clinical and Experimental Audiology, Academic Medical Center, Amsterdam, The Netherlands; Pento Audiological Center, Amersfoort, The Netherlands
- Inge de Ronde-Brons: Clinical and Experimental Audiology, Academic Medical Center, Amsterdam, The Netherlands
- Wouter A Dreschler: Clinical and Experimental Audiology, Academic Medical Center, Amsterdam, The Netherlands
- Rolph Houben: Clinical and Experimental Audiology, Academic Medical Center, Amsterdam, The Netherlands; Pento Audiological Center, Amersfoort, The Netherlands
80
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018; 61:740-751. PMID: 29450493. PMCID: PMC5963044. DOI: 10.1044/2017_jslhr-h-17-0077.
Abstract
PURPOSE The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. METHOD We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. RESULTS Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. CONCLUSIONS Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5848059.
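Recognition memory in studies like this one is indexed by d', the signal-detection sensitivity measure computed from the hit rate on previously heard sentences and the false-alarm rate on new ones. A minimal sketch of that standard computation, using hypothetical rates rather than this study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    """Signal-detection sensitivity: z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical listener: 85% hits on old sentences, 20% false alarms on new ones.
sensitivity = d_prime(0.85, 0.20)
```

In practice, rates of exactly 0 or 1 are first adjusted (e.g., with a loglinear correction), since the inverse normal CDF is undefined at those extremes.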
Affiliation(s)
- Margaret A Koeritzer: Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
- Chad S Rogers: Department of Otolaryngology, Washington University in St. Louis, MO
- Kristin J Van Engen: Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
- Jonathan E Peelle: Department of Otolaryngology, Washington University in St. Louis, MO
81
Zekveld AA, Pronk M, Danielsson H, Rönnberg J. Reading Behind the Lines: The Factors Affecting the Text Reception Threshold in Hearing Aid Users. J Speech Lang Hear Res 2018; 61:762-775. PMID: 29450534. DOI: 10.1044/2017_jslhr-h-17-0196.
Abstract
PURPOSE The visual Text Reception Threshold (TRT) test (Zekveld et al., 2007) has been designed to assess modality-general factors relevant for speech perception in noise. In the last decade, the test has been adopted in audiology labs worldwide. The 1st aim of this study was to examine which factors best predict interindividual differences in the TRT. Second, we aimed to assess the relationships between the TRT and the speech reception thresholds (SRTs) estimated in various conditions. METHOD First, we reviewed studies reporting relationships between the TRT and the auditory and/or cognitive factors and formulated specific hypotheses regarding the TRT predictors. These hypotheses were tested using a prediction model applied to a rich data set of 180 hearing aid users. In separate association models, we tested the relationships between the TRT and the various SRTs and subjective hearing difficulties, while taking into account potential confounding variables. RESULTS The results of the prediction model indicate that the TRT is predicted by the ability to fill in missing words in incomplete sentences, by lexical access speed, and by working memory capacity. Furthermore, in line with previous studies, a moderate association between higher age, poorer pure-tone hearing acuity, and poorer TRTs was observed. Better TRTs were associated with better SRTs for the correct perception of 50% of Hagerman matrix sentences in a 4-talker babble, as well as with better subjective ratings of speech perception. Age and pure-tone hearing thresholds significantly confounded these associations. The associations of the TRT with SRTs estimated in other conditions and with subjective qualities of hearing were not statistically significant when adjusting for age and pure-tone average. 
CONCLUSIONS We conclude that the abilities tapped into by the TRT test include processes relevant for speeded lexical decision making when completing partly masked sentences and that these processes require working memory capacity. Furthermore, the TRT is associated with the SRT of hearing aid users as estimated in a challenging condition that includes informational masking and with experienced difficulties with speech perception in daily-life conditions. The current results underline the value of using the TRT test in studies involving speech perception and aid in the interpretation of findings acquired using the test.
Affiliation(s)
- Adriana A Zekveld: Department of Behavioural Sciences and Learning, Linköping University, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden; Section Ear & Hearing, Department of Otolaryngology/Head & Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Marieke Pronk: Section Ear & Hearing, Department of Otolaryngology/Head & Neck Surgery, Amsterdam Public Health Research Institute, VU University Medical Center, the Netherlands
- Henrik Danielsson: Department of Behavioural Sciences and Learning, Linköping University, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
- Jerker Rönnberg: Department of Behavioural Sciences and Learning, Linköping University, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden
82
Zhang M, Pratt SR, Doyle PJ, McNeil MR, Durrant JD, Roxberg J, Ortmann A. Audiological Assessment of Word Recognition Skills in Persons With Aphasia. Am J Audiol 2018; 27:1-18. PMID: 29222555. DOI: 10.1044/2017_aja-17-0041.
Abstract
PURPOSE The purpose of this study was to evaluate the ability of persons with aphasia, with and without hearing loss, to complete a commonly used open-set word recognition test that requires a verbal response. Furthermore, phonotactic probabilities and neighborhood densities of word recognition errors were assessed to explore potential underlying linguistic complexities that might differentially influence performance among groups. METHOD Four groups of adult participants were tested: participants with no brain injury with normal hearing, participants with no brain injury with hearing loss, participants with brain injury with aphasia and normal hearing, and participants with brain injury with aphasia and hearing loss. The Northwestern University Auditory Test No. 6 (NU-6; Tillman & Carhart, 1966) was administered. Those participants who were unable to respond orally (repeating words as heard) were assessed with the Picture Identification Task (Wilson & Antablin, 1980), permitting a picture-pointing response instead. Error patterns from the NU-6 were assessed to determine whether phonotactic probability influenced performance. RESULTS All participants with no brain injury and 72.7% of the participants with aphasia (24 out of 33) completed the NU-6. Furthermore, all participants who were unable to complete the NU-6 were able to complete the Picture Identification Task. There were significant group differences on NU-6 performance. The 2 groups with normal hearing had significantly higher scores than the 2 groups with hearing loss, but the 2 groups with normal hearing and the 2 groups with hearing loss did not differ from one another, implying that their performance was largely determined by hearing loss rather than by brain injury or aphasia. The neighborhood density, but not phonotactic probabilities, of the participants' errors differed across groups with and without aphasia. 
CONCLUSIONS Because the vast majority of the participants with aphasia examined here could be tested readily using an instrument such as the NU-6, clinicians should not hesitate to use this test when patients are able to repeat single words, but routine use of alternative tests is encouraged for populations of people with brain injuries.
Affiliation(s)
- Min Zhang: Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA; Department of Communication Science and Disorders, University of Pittsburgh, PA
- Sheila R. Pratt: Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA; Department of Communication Science and Disorders, University of Pittsburgh, PA
- Patrick J. Doyle: Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA; Department of Communication Science and Disorders, University of Pittsburgh, PA
- Malcolm R. McNeil: Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA; Department of Communication Science and Disorders, University of Pittsburgh, PA
- John D. Durrant: Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA; Department of Communication Science and Disorders, University of Pittsburgh, PA
- Jillyn Roxberg: Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA
- Amanda Ortmann: Geriatric Research, Education, and Clinical Center, VA Pittsburgh Healthcare System, PA; Department of Communication Science and Disorders, University of Pittsburgh, PA
83
Autonomic Nervous System Reactivity During Speech Repetition Tasks: Heart Rate Variability and Skin Conductance. Ear Hear 2018; 37 Suppl 1:118S-25S. PMID: 27355761. DOI: 10.1097/aud.0000000000000305.
Abstract
Cognitive and emotional challenges may elicit a physiological stress response that can include arousal of the sympathetic nervous system (fight or flight response) and withdrawal of the parasympathetic nervous system (responsible for recovery and rest). This article reviews studies that have used measures of electrodermal activity (skin conductance) and heart rate variability (HRV) to index sympathetic and parasympathetic activity during auditory tasks. In addition, the authors present results from a new study with normal-hearing listeners examining the effects of speaking rate on changes in skin conductance and high-frequency HRV (HF-HRV). Sentence repetition accuracy for normal and fast speaking rates was measured in noise using signal to noise ratios that were adjusted to approximate 80% accuracy (+3 dB fast rate; 0 dB normal rate) while monitoring skin conductance and HF-HRV activity. A significant increase in skin conductance level (reflecting sympathetic nervous system arousal) and a decrease in HF-HRV (reflecting parasympathetic nervous system withdrawal) were observed with an increase in speaking rate indicating sensitivity of both measures to increased task demand. Changes in psychophysiological reactivity with increased auditory task demand may reflect differences in listening effort, but other person-related factors such as motivation and stress may also play a role. Further research is needed to understand how psychophysiological activity during listening tasks is influenced by the acoustic characteristics of stimuli, task demands, and by the characteristics and emotional responses of the individual.
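The HF-HRV index described above is conventionally quantified as the spectral power of the R-R (beat-to-beat) interval series in the 0.15-0.40 Hz respiratory band. A rough sketch of that computation on synthetic data (function name, resampling rate, and toy signal are illustrative assumptions, not the authors' analysis pipeline):

```python
import numpy as np

def hf_hrv_power(rr_ms, fs=4.0, band=(0.15, 0.40)):
    """High-frequency HRV: power of the R-R interval series in the
    0.15-0.40 Hz band, a common index of parasympathetic activity.
    rr_ms: successive R-R intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    beat_times = np.cumsum(rr) / 1000.0             # seconds at each beat
    grid = np.arange(beat_times[0], beat_times[-1], 1.0 / fs)
    rr_even = np.interp(grid, beat_times, rr)       # evenly resampled series
    rr_even -= rr_even.mean()                       # remove the DC component
    spectrum = np.abs(np.fft.rfft(rr_even)) ** 2 / len(rr_even)
    freqs = np.fft.rfftfreq(len(rr_even), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs < band[1])
    return spectrum[in_band].sum()

# Synthetic R-R series: ~800 ms baseline with a 0.25 Hz (respiratory) oscillation.
beat_clock = np.cumsum(np.full(300, 0.8))           # approximate beat times, s
rr = 800 + 50 * np.sin(2 * np.pi * 0.25 * beat_clock)
hf = hf_hrv_power(rr)
```

Parasympathetic withdrawal under increased task demand would show up as a drop in this band power between conditions.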
84
Neher T, Wagener KC, Fischer RL. Hearing aid noise suppression and working memory function. Int J Audiol 2018; 57:335-344. DOI: 10.1080/14992027.2017.1423118.
Affiliation(s)
- Tobias Neher: Medizinische Physik and Cluster of Excellence “Hearing4all”, Carl-von-Ossietzky University, Oldenburg, Germany; Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
85
Personalizing the Fitting of Hearing Aids by Learning Contextual Preferences From Internet of Things Data. Computers 2017. DOI: 10.3390/computers7010001.
86
Alicea CCM, Doherty KA. Motivation to Address Self-Reported Hearing Problems in Adults With Normal Hearing Thresholds. J Speech Lang Hear Res 2017; 60:3642-3655. PMID: 29222566. DOI: 10.1044/2017_jslhr-h-17-0110.
Abstract
PURPOSE The purpose of this study was to compare the motivation to change in relation to hearing problems between adults with normal hearing thresholds who report hearing problems and adults with a mild-to-moderate sensorineural hearing loss. Factors related to their motivation were also assessed. METHOD The motivation to change in relation to self-reported hearing problems was measured using the University of Rhode Island Change Assessment (McConnaughy, Prochaska, & Velicer, 1983). The relationship between objective and subjective measures and an adult's motivation was examined. RESULTS The level of hearing handicap did not differ significantly between adults with normal hearing who reported problems hearing in background noise and adults who had a mild-to-moderate sensorineural hearing loss. Hearing handicap, personal distress, and minimization of hearing loss were factors significantly related to motivation. Age, degree of hearing loss, speech-in-noise scores, working memory, and extended high-frequency average thresholds were not significantly related to their motivation. CONCLUSIONS Adults with normal hearing thresholds but self-reported hearing problems had the same level of hearing handicap and were equally motivated to take action for their hearing problems as age-matched adults with a mild-to-moderate sensorineural hearing loss. Hearing handicap, personal distress, and minimization of hearing loss were most strongly correlated with an individual's motivation to change.
Affiliation(s)
- Carly C M Alicea: Department of Communication Sciences and Disorders, Syracuse University, NY
- Karen A Doherty: Department of Communication Sciences and Disorders, Syracuse University, NY
87
Chong FY, Jenstad LM. A critical review of hearing-aid single-microphone noise-reduction studies in adults and children. Disabil Rehabil Assist Technol 2017; 13:600-608. PMID: 29072542. DOI: 10.1080/17483107.2017.1392619.
Abstract
PURPOSE Single-microphone noise reduction (SMNR) is implemented in hearing aids to suppress background noise. The purpose of this article was to provide a critical review of peer-reviewed studies in adults and children with sensorineural hearing loss who were fitted with hearing aids incorporating SMNR. METHOD Articles published between 2000 and 2016 were searched in PUBMED and EBSCO databases. RESULTS Thirty-two articles were included in the final review. Most studies with adult participants showed that SMNR has no effect on speech intelligibility. Positive results were reported for acceptance of background noise, preference, and listening effort. Studies of school-aged children were consistent with the findings of adult studies. No study with infants or young children under 5 years of age was found. Recent studies on noise-reduction systems not yet available in wearable hearing aids have documented benefits of noise reduction on memory for speech processing in older adults. CONCLUSIONS This evidence supports the use of SMNR for adults and school-aged children when the aim is to improve listening comfort or reduce listening effort. Future research should test SMNR with infants and children who are younger than 5 years of age. Further development, testing, and clinical trials should be carried out on algorithms not yet available in wearable hearing aids. Testing higher-level cognitive processing of speech and the learning of novel sounds or words could reveal benefits of advanced signal processing features. These approaches should be expanded to other populations, such as children and younger adults. Implications for rehabilitation: The review provides a quick reference for students and clinicians regarding the efficacy and effectiveness of SMNR in wearable hearing aids. This information is useful during counseling sessions to build realistic expectations among hearing aid users.
Most studies in the adult population suggest that SMNR may provide some benefits to adult listeners in terms of listening comfort, acceptance of background noise, and release of cognitive load in complex listening conditions. However, it does not improve speech intelligibility. Studies that examined SMNR in the paediatric population suggest that SMNR may benefit older school-aged children, aged between 10 and 12 years. The evidence supports the use of SMNR for adults and school-aged children when the aim is to improve listening comfort or reduce listening effort.
Affiliation(s)
- Foong Yen Chong: School of Rehabilitation Sciences, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia; School of Audiology & Speech Sciences, University of British Columbia, Vancouver, British Columbia, Canada
- Lorienne M Jenstad: School of Audiology & Speech Sciences, University of British Columbia, Vancouver, British Columbia, Canada
88
Helfer KS, Merchant GR, Wasiuk PA. Age-Related Changes in Objective and Subjective Speech Perception in Complex Listening Environments. J Speech Lang Hear Res 2017; 60:3009-3018. PMID: 29049601. PMCID: PMC5945070. DOI: 10.1044/2017_jslhr-h-17-0030.
Abstract
PURPOSE A frequent complaint by older adults is difficulty communicating in challenging acoustic environments. The purpose of this work was to review and summarize information about how speech perception in complex listening situations changes across the adult age range. METHOD This article provides a review of age-related changes in speech understanding in complex listening environments and summarizes results from several studies conducted in our laboratory. RESULTS Both degree of high-frequency hearing loss and cognitive test performance limit individuals' ability to understand speech in difficult listening situations as they age. The performance of middle-aged adults is similar to that of younger adults in the presence of noise maskers, but they experience substantially more difficulty when the masker is 1 or 2 competing speech messages. For the most part, middle-aged participants in studies conducted in our laboratory reported as many self-perceived hearing problems as did older adult participants. CONCLUSIONS Research supports the multifactorial nature of listening in real-world environments. Current audiologic assessment practices are often insufficient to identify the true speech understanding struggles that individuals experience in these situations. This points to the importance of giving weight to patients' self-reported difficulties. PRESENTATION VIDEO http://cred.pubs.asha.org/article.aspx?articleid=2601619.
Affiliation(s)
- Karen S. Helfer: Department of Communication Disorders, University of Massachusetts Amherst
- Peter A. Wasiuk: Department of Communication Disorders, University of Massachusetts Amherst
89
Nakeva von Mentzer C, Sundström M, Enqvist K, Hällgren M. Assessing speech perception in Swedish school-aged children: preliminary data on the Listen–Say test. Logop Phoniatr Voco 2017; 43:106-119. DOI: 10.1080/14015439.2017.1380076.
Affiliation(s)
- Martina Sundström: Department of Neuroscience, Unit for Speech Language Pathology, Uppsala University, Uppsala, Sweden
- Karin Enqvist: Department of Neuroscience, Unit for Speech Language Pathology, Uppsala University, Uppsala, Sweden
- Mathias Hällgren: Department of Otorhinolaryngology/Section of Audiology, Linköping University Hospital, Linköping, Sweden
90
Hua H, Johansson B, Magnusson L, Lyxell B, Ellis RJ. Speech Recognition and Cognitive Skills in Bimodal Cochlear Implant Users. J Speech Lang Hear Res 2017; 60:2752-2763. PMID: 28885638. DOI: 10.1044/2017_jslhr-h-16-0276.
Abstract
PURPOSE To examine the relation between speech recognition and cognitive skills in bimodal cochlear implant (CI) and hearing aid users. METHOD Seventeen bimodal CI users (28-74 years) were recruited to the study. Speech recognition tests were carried out in quiet and in noise. The cognitive tests employed included the Reading Span Test and the Trail Making Test (Daneman & Carpenter, 1980; Reitan, 1958, 1992), measuring working memory capacity and processing speed and executive functioning, respectively. Data were analyzed using paired-sample t tests, Pearson correlations, and partial correlations controlling for age. RESULTS The results indicate that performance on some cognitive tests predicts speech recognition and that bimodal listening generates a significant improvement in speech in quiet compared to unilateral CI listening. However, the current results also suggest that bimodal listening requires different cognitive skills than does unimodal CI listening. This is likely to relate to the relative difficulty of having to integrate 2 different signals and then map the integrated signal to representations stored in long-term memory. CONCLUSIONS Even though participants obtained speech recognition benefit from bimodal listening, the results suggest that processing bimodal stimuli involves different cognitive skills than do unimodal conditions in quiet. Thus, clinically, it is important to consider this when assessing treatment outcomes.
Affiliation(s)
- Håkan Hua: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Björn Johansson: Department of Audiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Lennart Magnusson: Department of Audiology, Sahlgrenska University Hospital, Gothenburg, Sweden
- Björn Lyxell: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Rachel J Ellis: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping, Sweden; Department of Behavioural Sciences and Learning, Linköping University, Sweden
91
Koelewijn T, Versfeld NJ, Kramer SE. Effects of attention on the speech reception threshold and pupil response of people with impaired and normal hearing. Hear Res 2017; 354:56-63. PMID: 28869841. DOI: 10.1016/j.heares.2017.08.006.
Abstract
For people with hearing difficulties, following a conversation in a noisy environment requires substantial cognitive processing, which is often perceived as effortful. Recent studies with normal hearing (NH) listeners showed that the pupil dilation response, a measure of cognitive processing load, is affected by 'attention related' processes. How these processes affect the pupil dilation response for hearing impaired (HI) listeners remains unknown. Therefore, the current study investigated the effect of auditory attention on various pupil response parameters for 15 NH adults (median age 51 yrs.) and 15 adults with mild to moderate sensorineural hearing loss (median age 52 yrs.). Both groups listened to two different sentences presented simultaneously, one to each ear and partially masked by stationary noise. Participants had to repeat either both sentences or only one, for which they had to divide or focus attention, respectively. When repeating one sentence, the target sentence location (left or right) was either randomized or blocked across trials, which in the latter case allowed for a better spatial focus of attention. The speech-to-noise ratio was adjusted to yield about 50% sentences correct for each task and condition. NH participants had lower ('better') speech reception thresholds (SRT) than HI participants. The pupil measures showed no between-group effects, with the exception of a shorter peak latency for HI participants, which indicated a shorter processing time. Both groups showed higher SRTs and a larger pupil dilation response when two sentences were processed instead of one. Additionally, SRTs were higher and dilation responses were larger for both groups when the target location was randomized instead of fixed. We conclude that although HI participants could cope with less noise than the NH group, their ability to focus attention on a single talker, thereby improving SRTs and lowering cognitive processing load, was preserved. 
Shorter peak latencies could indicate that HI listeners adapt their listening strategy by not processing some information, which reduces processing time and thereby listening effort.
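Adjusting the speech-to-noise ratio to yield about 50% sentences correct, as in this study, is typically done with an adaptive up-down track. A minimal one-up/one-down sketch (illustrative function name and step size, not the authors' exact protocol):

```python
def srt_staircase(trial_correct, start_snr_db=0.0, step_db=2.0):
    """One-up/one-down adaptive track: lower the SNR after a correct
    trial, raise it after an error; converges near 50% correct.
    trial_correct: sequence of booleans, one per trial."""
    snr = start_snr_db
    track = [snr]
    for correct in trial_correct:
        snr += -step_db if correct else step_db
        track.append(snr)
    return track

# Five hypothetical trials: correct responses push the SNR down, errors push it up.
track = srt_staircase([True, True, False, True, False])
```

The speech reception threshold is then usually estimated as the mean SNR over the later reversals of the track.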
Affiliation(s)
- Thomas Koelewijn: Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health Research Institute, VU University Medical Center, Amsterdam, The Netherlands
- Niek J Versfeld: Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health Research Institute, VU University Medical Center, Amsterdam, The Netherlands
- Sophia E Kramer: Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and Amsterdam Public Health Research Institute, VU University Medical Center, Amsterdam, The Netherlands
92
Miller CW, Stewart EK, Wu YH, Bishop C, Bentler RA, Tremblay K. Working Memory and Speech Recognition in Noise Under Ecologically Relevant Listening Conditions: Effects of Visual Cues and Noise Type Among Adults With Hearing Loss. J Speech Lang Hear Res 2017; 60:2310-2320. PMID: 28744550. PMCID: PMC5829805. DOI: 10.1044/2017_jslhr-h-16-0284.
Abstract
PURPOSE This study evaluated the relationship between working memory (WM) and speech recognition in noise with different noise types as well as in the presence of visual cues. METHOD Seventy-six adults with bilateral, mild to moderately severe sensorineural hearing loss (mean age: 69 years) participated. Using a cross-sectional design, two measures of WM were taken: a reading span measure and the Word Auditory Recognition and Recall Measure (Smith, Pichora-Fuller, & Alexander, 2016). Speech recognition was measured with the Multi-Modal Lexical Sentence Test for Adults (Kirk et al., 2012) in steady-state noise and 4-talker babble, with and without visual cues. Testing was under unaided conditions. RESULTS A linear mixed model revealed visual cues and pure-tone average as the only significant predictors of Multi-Modal Lexical Sentence Test outcomes. Neither WM measure nor noise type showed a significant effect. CONCLUSION The contribution of WM in explaining unaided speech recognition in noise was negligible and not influenced by noise type or visual cues. We anticipate that with audibility partially restored by hearing aids, the effects of WM will increase. For clinical practice to be affected, larger effect sizes are needed.
Collapse
Affiliation(s)
- Christi W. Miller
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| | - Erin K. Stewart
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| | - Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
| | - Christopher Bishop
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| | - Ruth A. Bentler
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
| | - Kelly Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle
| |
Collapse
|
93
|
Yumba WK. Cognitive Processing Speed, Working Memory, and the Intelligibility of Hearing Aid-Processed Speech in Persons with Hearing Impairment. Front Psychol 2017; 8:1308. [PMID: 28861009 PMCID: PMC5559705 DOI: 10.3389/fpsyg.2017.01308] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2016] [Accepted: 07/17/2017] [Indexed: 11/13/2022] Open
Abstract
Previous studies have demonstrated that successful listening with advanced signal processing in digital hearing aids is associated with individual cognitive capacity, particularly working memory capacity (WMC). This study aimed to examine the relationship between cognitive abilities (cognitive processing speed and WMC) and individual listeners’ responses to digital signal processing settings in adverse listening conditions. A total of 194 native Swedish speakers (83 women and 111 men), aged 33–80 years (mean = 60.75 years, SD = 8.89), with bilateral, symmetrical mild to moderate sensorineural hearing loss who had completed a lexical decision speed test (measuring cognitive processing speed) and semantic word-pair span test (SWPST, capturing WMC) participated in this study. The Hagerman test (capturing speech recognition in noise) was conducted using an experimental hearing aid with three digital signal processing settings: (1) linear amplification without noise reduction (NoP), (2) linear amplification with noise reduction (NR), and (3) non-linear amplification without NR (“fast-acting compression”). The results showed that cognitive processing speed was a better predictor of speech intelligibility in noise, regardless of the types of signal processing algorithms used. That is, there was a stronger association between cognitive processing speed and NR outcomes and fast-acting compression outcomes (in steady-state noise). We observed a weaker relationship between working memory and NR, but WMC did not relate to fast-acting compression. WMC was a relatively weaker predictor of speech intelligibility in noise. These findings might have been different if the participants had been provided with training and/or allowed to acclimatize to binary masking noise reduction or fast-acting compression.
Collapse
Affiliation(s)
- Wycliffe Kabaywe Yumba
- Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
94
|
Perrone-Bertolotti M, Tassin M, Meunier F. Speech-in-speech perception and executive function involvement. PLoS One 2017; 12:e0180084. [PMID: 28708830 PMCID: PMC5510830 DOI: 10.1371/journal.pone.0180084] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2016] [Accepted: 06/11/2017] [Indexed: 11/24/2022] Open
Abstract
The present study investigated the link between speech-in-speech perception capacities and four executive function components: response suppression, inhibitory control, switching and working memory. We constructed a cross-modal semantic priming paradigm using a written target word and a spoken prime word, implemented in one of two concurrent auditory sentences (cocktail party situation). The prime and target were semantically related or unrelated. Participants had to perform a lexical decision task on visual target words and simultaneously listen to only one of two pronounced sentences. The attention of the participant was manipulated: the prime was in the pronounced sentence listened to by the participant or in the ignored one. In addition, we evaluated the executive function abilities of participants (switching cost, inhibitory-control cost and response-suppression cost) and their working memory span. Correlation analyses were performed between the executive and priming measurements. Our results showed a significant interaction effect between attention and semantic priming. We observed a significant priming effect in the attended but not in the ignored condition. Only priming effects obtained in the ignored condition were significantly correlated with some of the executive measurements. However, no correlation between priming effects and working memory capacity was found. Overall, these results confirm, first, the role of attention in the semantic priming effect and, second, the involvement of executive functions in speech-in-noise understanding capacities.
Collapse
Affiliation(s)
| | - Maxime Tassin
- Univ. Claude Bernard Lyon I, CNRS, L2C2, Lyon, France
| | - Fanny Meunier
- Univ. Claude Bernard Lyon I, CNRS, L2C2, Lyon, France
- Univ. Côte d’Azur, CNRS, BCL, Nice, France
| |
Collapse
|
95
|
Rosemann S, Gießing C, Özyurt J, Carroll R, Puschmann S, Thiel CM. The Contribution of Cognitive Factors to Individual Differences in Understanding Noise-Vocoded Speech in Young and Older Adults. Front Hum Neurosci 2017. [PMID: 28638329 PMCID: PMC5461255 DOI: 10.3389/fnhum.2017.00294] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022] Open
Abstract
Noise-vocoded speech is commonly used to simulate the sensation after cochlear implantation as it consists of spectrally degraded speech. High individual variability exists in learning to understand both noise-vocoded speech and speech perceived through a cochlear implant (CI). This variability is partly ascribed to differing cognitive abilities like working memory, verbal skills or attention. Although clinically highly relevant, up to now, no consensus has been achieved about which cognitive factors exactly predict the intelligibility of speech in noise-vocoded situations in healthy subjects or in patients after cochlear implantation. We aimed to establish a test battery that can be used to predict speech understanding in patients prior to receiving a CI. Young and old healthy listeners completed a noise-vocoded speech test in addition to cognitive tests tapping into verbal memory, working memory, lexicon and retrieval skills as well as cognitive flexibility and attention. Partial-least-squares analysis revealed that six variables were important to significantly predict vocoded-speech performance: the ability to perceive visually degraded speech, tested by the Text Reception Threshold; vocabulary size, assessed with the Multiple Choice Word Test; working memory, gauged with the Operation Span Test; verbal learning and recall, measured with the Verbal Learning and Retention Test; and task-switching abilities, tested by the Comprehensive Trail-Making Test. Thus, these cognitive abilities explain individual differences in noise-vocoded speech understanding and should be considered when aiming to predict hearing-aid outcome.
Collapse
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Carsten Gießing
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Jale Özyurt
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Rebecca Carroll
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Institute of Dutch Studies, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Sebastian Puschmann
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Christiane M Thiel
- Biological Psychology, Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
- Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| |
Collapse
|
96
|
Purdy SC, Welch D, Giles E, Morgan CLA, Tenhagen R, Kuruvilla-Mathew A. Impact of cognition and noise reduction on speech perception in adults with unilateral cochlear implants. Cochlear Implants Int 2017; 18:162-170. [PMID: 28335695 DOI: 10.1080/14670100.2017.1299393] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVES The purpose of this study was to investigate the impact of cognition and noise reduction (NR) technology in cochlear implants (CIs) on speech perception and listening effort. METHODS Thirteen adults fitted with unilateral CIs (Nucleus® 6, CP900) participated in this study. As part of the three-phase experimental study, participants performed (I) cognitive tests of working memory and processing speed, (II) speech perception in noise tests, and (III) an auditory-visual dual-task paradigm to quantify listening effort. Both the participants and the tester performing the outcome measures were blinded to the NR settings (ON/OFF) of the CI for phases II and III. RESULTS Speech intelligibility significantly improved with NR activated, but was independent of individual differences in cognitive abilities. Listening effort did not significantly change with NR setting; however, there was a trend for participants with good working memory to have better speech perception scores with NR activated during the effortful listening task (dual-task paradigm). CONCLUSION Future studies are warranted to explore the interaction between cognition and CI NR algorithms during an effortful listening task.
Collapse
Affiliation(s)
- Suzanne Carolyn Purdy
- Speech Science, Faculty of Science, University of Auckland, Auckland, New Zealand
- Eisdell Moore Centre, Hearing and Balance Research, New Zealand
| | - David Welch
- Eisdell Moore Centre, Hearing and Balance Research, New Zealand
- Audiology, Faculty of Medical & Health Sciences, University of Auckland, Auckland, New Zealand
| | - Ellen Giles
- Eisdell Moore Centre, Hearing and Balance Research, New Zealand
- Audiology, Faculty of Medical & Health Sciences, University of Auckland, Auckland, New Zealand
| | | | - Renique Tenhagen
- Speech Science, Faculty of Science, University of Auckland, Auckland, New Zealand
| | - Abin Kuruvilla-Mathew
- Speech Science, Faculty of Science, University of Auckland, Auckland, New Zealand
- Eisdell Moore Centre, Hearing and Balance Research, New Zealand
| |
Collapse
|
97
|
Ward CM, Rogers CS, Van Engen KJ, Peelle JE. Effects of Age, Acoustic Challenge, and Verbal Working Memory on Recall of Narrative Speech. Exp Aging Res 2016; 42:97-111. [PMID: 26683044 DOI: 10.1080/0361073x.2016.1108785] [Citation(s) in RCA: 35] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
BACKGROUND/STUDY CONTEXT A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects. METHODS The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail. RESULTS Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not. CONCLUSION The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.
Collapse
Affiliation(s)
- Caitlin M Ward
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Kristin J Van Engen
- Department of Psychology, Washington University in St. Louis, St. Louis, Missouri, USA
| | - Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
| |
Collapse
|
98
|
Rönnberg J, Lunner T, Ng EHN, Lidestam B, Zekveld AA, Sörqvist P, Lyxell B, Träff U, Yumba W, Classon E, Hällgren M, Larsby B, Signoret C, Pichora-Fuller MK, Rudner M, Danielsson H, Stenfelt S. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study. Int J Audiol 2016; 55:623-42. [PMID: 27589015 PMCID: PMC5044772 DOI: 10.1080/14992027.2016.1219775] [Citation(s) in RCA: 63] [Impact Index Per Article: 7.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2016] [Revised: 07/29/2016] [Accepted: 07/29/2016] [Indexed: 02/08/2023]
Abstract
OBJECTIVE The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were females and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables in one COGNITION factor only; and OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor to a stronger extent than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently, especially to the NO CONTEXT outcome scores (R² = 0.40). CONCLUSIONS All LEVEL 2 factors are important theoretically as well as for clinical assessment.
Collapse
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Thomas Lunner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, 3070 Snekkersten, Denmark
| | - Elaine Hoi Ning Ng
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Björn Lidestam
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Adriana Agatha Zekveld
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Section Ear & Hearing, Dept. of Otolaryngology-Head and Neck Surgery and EMGO Institute, VU University Medical Center, Amsterdam, The Netherlands
| | - Patrik Sörqvist
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
| | - Björn Lyxell
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Ulf Träff
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | - Wycliffe Yumba
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Elisabet Classon
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Mathias Hällgren
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
| | - Birgitta Larsby
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
| | - Carine Signoret
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - M. Kathleen Pichora-Fuller
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Psychology, University of Toronto, Toronto, Ontario, Canada
- The Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
- The Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
| | - Mary Rudner
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Henrik Danielsson
- Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Stefan Stenfelt
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
| |
Collapse
|
99
|
Development of the Word Auditory Recognition and Recall Measure: A Working Memory Test for Use in Rehabilitative Audiology. Ear Hear 2016; 37:e360-e376. [DOI: 10.1097/aud.0000000000000329] [Citation(s) in RCA: 30] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
|
100
|
Schepker H, Haeder K, Rennies J, Holube I. Perceived listening effort and speech intelligibility in reverberation and noise for hearing-impaired listeners. Int J Audiol 2016; 55:738-747. [DOI: 10.1080/14992027.2016.1219774] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Affiliation(s)
- Henning Schepker
- Signal Processing Group, Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4All”, Oldenburg, Germany
| | - Kristina Haeder
- Cluster of Excellence “Hearing4All”, Oldenburg, Germany
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
| | - Jan Rennies
- Cluster of Excellence “Hearing4All”, Oldenburg, Germany
- Project Group Hearing, Speech and Audio Technology, Fraunhofer Institute for Digital Media Technology IDMT, Oldenburg, Germany
| | - Inga Holube
- Cluster of Excellence “Hearing4All”, Oldenburg, Germany
- Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
| |
Collapse
|