1
Kondaurova MV, Smith A, Mishra R, Zheng Q, Kondaurova I, Francis AL, Sallee E. Empatica E4 Assessment of Child Physiological Measures of Listening Effort During Remote and In-Person Communication. Am J Audiol 2024:1-10. PMID: 39374495. DOI: 10.1044/2024_aja-24-00078.
Abstract
PURPOSE Telepractice is a growing service model that delivers aural rehabilitation to deaf and hard-of-hearing children via telecommunications technology. Despite the known benefits of telepractice, this delivery approach may increase patients' listening effort (LE), characterized as the allocation of cognitive resources toward an auditory task. The study tested techniques for collecting physiological measures of LE in normal-hearing (NH) children during remote (referred to as tele-) and in-person communication using the wearable Empatica E4 wristband. METHOD Participants were 10 children (age range: 9-12 years) who attended two tele- and two in-person weekly sessions, order counterbalanced. During each session, the children heard a short passage read by the clinical provider, completed an auditory passage comprehension task, and self-rated their effort as part of a larger study. Measures of electrodermal activity and blood volume pulse amplitude were collected from the child's E4 wristband. RESULTS No differences in child subjective or physiological measures of LE, or in passage comprehension scores, were found between in-person sessions and telesessions. However, an effect of treatment duration on subjective and physiological measures of LE was identified. Children self-reported a significant increase in LE over time, whereas their physiological measures demonstrated a trend indicating a decrease in LE. A significant association between subjective measures and the passage comprehension task was found, suggesting that children who reported more effort demonstrated a higher proportion of correct responses. CONCLUSIONS The study demonstrated the feasibility of collecting physiological measures of LE in NH children during remote and in-person communication using the E4 wristband. The results suggest that measures of LE are multidimensional and may reflect different sources of, or cognitive responses to, increased listening demand. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.27122064.
Affiliation(s)
- Maria V Kondaurova, Department of Psychological and Brain Sciences, University of Louisville, KY
- Alan Smith, Department of Otolaryngology Head and Neck Surgery and Communicative Disorders, University of Louisville, KY
- Ruchik Mishra, Department of Electrical & Computer Engineering, J.B. Speed School of Engineering, University of Louisville, KY
- Qi Zheng, Department of Bioinformatics and Biostatistics, School of Public Health & Information Sciences, University of Louisville, KY
- Irina Kondaurova, Department of Bioinformatics and Biostatistics, School of Public Health & Information Sciences, University of Louisville, KY
- Alexander L Francis, Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN
- Emily Sallee, Department of Otolaryngology Head and Neck Surgery and Communicative Disorders, University of Louisville, KY
2
Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024; 36:2184-2207. PMID: 39023366. DOI: 10.1162/jocn_a_02224.
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine whether sublexical SPiN performance can be enhanced by applying transcranial magnetic stimulation (TMS) to three regions involved in processing speech: the left posterior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas this association was null in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliation(s)
- Valérie Brisson, Université Laval, School of Rehabilitation Sciences, Québec, Canada; Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay, Université Laval, School of Rehabilitation Sciences, Québec, Canada; Centre de recherche CERVO, Québec, Canada
3
Fernandez LB, Pickering MJ, Naylor G, Hadley LV. Uses of Linguistic Context in Speech Listening: Does Acquired Hearing Loss Lead to Reduced Engagement of Prediction? Ear Hear 2024; 45:1107-1114. PMID: 38880953. PMCID: PMC11325976. DOI: 10.1097/aud.0000000000001515.
Abstract
Research investigating the complex interplay of cognitive mechanisms involved in speech listening for people with hearing loss has been gaining prominence. In particular, linguistic context allows the use of several cognitive mechanisms that are not well distinguished in hearing science, namely those relating to "postdiction", "integration", and "prediction". We offer the perspective that an unacknowledged impact of hearing loss is the differential use of predictive mechanisms relative to age-matched individuals with normal hearing. As evidence, we first review how degraded auditory input leads to reduced prediction in people with normal hearing, then consider the literature exploring context use in people with acquired postlingual hearing loss. We argue that no research on hearing loss has directly assessed prediction. Because current interventions for hearing loss do not fully alleviate difficulty in conversation, and avoidance of spoken social interaction may be a mediator between hearing loss and cognitive decline, this perspective could lead to a greater understanding of the cognitive effects of hearing loss and provide insight regarding new targets for intervention.
Affiliation(s)
- Leigh B. Fernandez, Department of Social Sciences, Psycholinguistics Group, University of Kaiserslautern-Landau, Kaiserslautern, Germany
- Martin J. Pickering, Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Graham Naylor, Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Lauren V. Hadley, Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
4
Nittrouer S. How Hearing Loss and Cochlear Implantation Affect Verbal Working Memory: Evidence From Adolescents. J Speech Lang Hear Res 2024; 67:1850-1867. PMID: 38713817. PMCID: PMC11192562. DOI: 10.1044/2024_jslhr-23-00446.
Abstract
PURPOSE Verbal working memory is poorer for children with hearing loss than for peers with normal hearing (NH), even with cochlear implantation and early intervention. Poor verbal working memory can affect academic performance, especially in higher grades, making this deficit a significant problem. This study examined the stability of verbal working memory across middle childhood, tested working memory in adolescents with NH or cochlear implants (CIs), explored whether signal enhancement can improve verbal working memory, and tested two hypotheses proposed to explain the poor verbal working memory of children with hearing loss: (a) diminished auditory experience directly affects executive functions, including working memory; (b) degraded auditory inputs inhibit children's abilities to recover the phonological structure needed for encoding verbal material into storage. DESIGN Fourteen-year-olds served as subjects: 55 with NH and 52 with CIs. Immediate serial recall tasks were used to assess working memory. Stimuli consisted of nonverbal spatial stimuli and four kinds of verbal acoustic stimuli: nonrhyming words, rhyming words, and nonrhyming words with two kinds of signal enhancement (audiovisual and indexical). Analyses examined (a) the stability of verbal working memory across middle childhood, (b) differences in verbal and nonverbal working memory, (c) effects of signal enhancement on recall, (d) phonological processing abilities, and (e) the source of the diminished verbal working memory in adolescents with cochlear implants. RESULTS Verbal working memory remained stable across middle childhood. Adolescents across groups performed similarly for nonverbal stimuli, but those with CIs displayed poorer recall accuracy for verbal stimuli; signal enhancement did not improve recall. Poor phonological sensitivity largely accounted for the group effect. CONCLUSIONS The central executive for working memory is not affected by hearing loss or cochlear implantation. Instead, the phonological deficit faced by adolescents with CIs degrades the representation in storage, and augmenting the signal does not help.
Affiliation(s)
- Susan Nittrouer, Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
5
Ben-David BM, Chebat DR, Icht M. "Love looks not with the eyes": supranormal processing of emotional speech in individuals with late-blindness versus preserved processing in individuals with congenital-blindness. Cogn Emot 2024:1-14. PMID: 38785380. DOI: 10.1080/02699931.2024.2357656.
Abstract
Processing of emotional speech in the absence of visual information relies on two auditory channels: semantics and prosody. No study to date has investigated how blindness impacts this process. Two theories, Perceptual Deficit and Sensory Compensation, yield different expectations about the role of visual experience (or its absence) in the processing of emotional speech. To test the effect of vision and early visual experience on the processing of emotional speech, we compared individuals with congenital blindness (CB, n = 17), individuals with late blindness (LB, n = 15), and sighted controls (SC, n = 21) on identification and selective attention of semantic and prosodic spoken emotions. Results showed that individuals with blindness performed at least as well as SC, supporting Sensory Compensation and the role of cortical reorganisation. Individuals with LB outperformed individuals with CB, in accordance with Perceptual Deficit, supporting the role of early visual experience. The LB advantage was moderated by executive functions (working memory); namely, the advantage was erased for individuals with CB who showed higher levels of executive functions. Results suggest that vision is not necessary for the processing of emotional speech, but early visual experience could improve it. The findings support a combination of the two aforementioned theories and reject a dichotomous view of the deficiencies/enhancements of blindness.
Affiliation(s)
- Boaz M Ben-David, Communication, Aging, and Neuropsychology Lab (CANlab), Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Department of Speech-Language Pathology, University of Toronto, Toronto, Canada; KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, Canada
- Daniel-Robert Chebat, Visual and Cognitive Neuroscience Laboratory (VCN Lab), The Department of Psychology, Ariel University, Ariel, Israel; Navigation and Accessibility Research Center (NARCA), Ariel University, Ariel, Israel
- Michal Icht, Department of Communication Disorders, Ariel University, Ariel, Israel
6
Bosen AK, Doria GM. Identifying Links Between Latent Memory and Speech Recognition Factors. Ear Hear 2024; 45:351-369. PMID: 37882100. PMCID: PMC10922378. DOI: 10.1097/aud.0000000000001430.
Abstract
OBJECTIVES The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but interpretation of such correlations critically depends on assumptions about how these measures map onto underlying factors of interest. The present work presents an alternative approach, wherein latent factor models are fit to trial-level data from multiple tasks to directly test hypotheses about the underlying structure of memory and the extent to which latent memory factors are associated with individual differences in speech recognition accuracy. Latent factor models with different numbers of factors were fit to the data and compared to one another to select the structures which best explained vocoded sentence recognition in a two-talker masker across a range of target-to-masker ratios, performance on three memory tasks, and the link between sentence recognition and memory. DESIGN Young adults with normal hearing (N = 52 for the memory tasks, of which 21 participants also completed the sentence recognition task) completed three memory tasks and one sentence recognition task: reading span, auditory digit span, visual free recall of words, and recognition of 16-channel vocoded Perceptually Robust English Sentence Test Open-set sentences in the presence of a two-talker masker at target-to-masker ratios between +10 and 0 dB. Correlations between summary measures of memory task performance and sentence recognition accuracy were calculated for comparison to prior work, and latent factor models were fit to trial-level data and compared against one another to identify the number of latent factors which best explains the data. Models with one or two latent factors were fit to the sentence recognition data and models with one, two, or three latent factors were fit to the memory task data. Based on findings with these models, full models that linked one speech factor to one, two, or three memory factors were fit to the full data set. Models were compared via Expected Log pointwise Predictive Density and post hoc inspection of model parameters. RESULTS Summary measures were positively correlated across memory tasks and sentence recognition. Latent factor models revealed that sentence recognition accuracy was best explained by a single factor that varied across participants. Memory task performance was best explained by two latent factors, of which one was generally associated with performance on all three tasks and the other was specific to digit span recall accuracy at lists of six digits or more. When these models were combined, the general memory factor was closely related to the sentence recognition factor, whereas the factor specific to digit span had no apparent association with sentence recognition. CONCLUSIONS Comparison of latent factor models enables testing hypotheses about the underlying structure linking cognition and speech recognition. This approach showed that multiple memory tasks assess a common latent factor that is related to individual differences in sentence recognition, although performance on some tasks was associated with multiple factors. Thus, while these tasks provide some convergent assessment of common latent factors, caution is needed when interpreting what they tell us about speech recognition.
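For readers less familiar with this analysis style, the following is a minimal sketch of the model-comparison logic only (not the authors' Bayesian trial-level models), assuming scikit-learn: candidate factor structures are scored by held-out log-likelihood and the structure that generalizes best is retained. All data and dimensions here are illustrative.

```python
# Sketch: choosing the number of latent memory factors by cross-validated
# log-likelihood, loosely mirroring the paper's model-comparison approach.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
scores = rng.normal(size=(52, 9))  # 52 participants x 9 illustrative task measures

for n_factors in (1, 2, 3):
    fa = FactorAnalysis(n_components=n_factors)
    # FactorAnalysis.score returns the mean log-likelihood of held-out data,
    # so cross_val_score yields a generalization measure per fold.
    ll = cross_val_score(fa, scores, cv=5).mean()
    print(f"{n_factors} factor(s): mean held-out log-likelihood = {ll:.2f}")
```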
7
Oliveira GMGF, de Melo DC, Serra LSM, Granjeiro RC, Sampaio ALL. Dysphonia Interference in Schoolteachers' Speech Intelligibility in the Classroom. J Voice 2024; 38:316-324. PMID: 34772594. DOI: 10.1016/j.jvoice.2021.09.004.
Abstract
"Among the most common occupations, schooteachers are the ones who experience the most changes throughout their career. Considering this, the present study aims to verify whether dysphonia in three different degrees may compromise the speech intelligibility of schoolteachers in the classroom. METHOD Overall, 39 students, average age 10 years, randomly selected from a public school in the Federal District, Brazil (Distrito Federal, Brasil) performed a transcription task of 20 sentences spoken by four distinct female voices in a classroom, one with a control voice (normal), another with mild dysphonia, 1 with moderate dysphonia and another with severe dysphonia. None of the voices in the study presented changes, neither in fluency nor articulation nor neurological changes. The sentences were previously recorded in an acoustically treated booth, with a microphone on a pedestal 5 cm away from the speaker's mouth. For each sentence to be recorded, the speech model was provided by the speech therapist and then repeated by the speaker according to the model. Each voice recorded 5 different sentences, phonetically balanced and with equivalent number of words. The students included in the study underwent auditory, auditory processing, sequential memory for verbal sounds and sound source location tests, fulfilling the normality criteria. They also did not have neurological or motor disorders or learning, speech or language disorders. Academic success was also taken into account. For the experiment, a speaker was placed in front of the classroom, 1 m from the wall and 1 m from the floor, and students were randomly assigned to the classroom seats. After listening to each sentence, some time was assigned for its transcription by each student. RESULTS The occurrence of errors was higher in voices with moderate and severe dysphonia, in which a significant difference was found (P ≤0.003) showing that voices with moderate and severe dysphonia were less intelligible than the normal voice (control voice). No difference was found between the normal voice and the mild dysphonic voice. Binary logistic regression analysis also showed that students had a 2.55 times higher chance of making mistakes with moderate dysphonic voice (P ≤0.011), and that this chance was 3.06 times greater for severe dysphonic voice (P ≤0.002) when compared to the normal voice (control voice). CONCLUSION Moderate and severe dysphonia in the voices of schoolteachers interferes with the intelligibility of students, and the greater the degree of dysphonia of the teacher, the greater the chance that the student will make intelligibility errors."
8
Plain B, Pielage H, Kramer SE, Richter M, Saunders GH, Versfeld NJ, Zekveld AA, Bhuiyan TA. Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening. Trends Hear 2024; 28:23312165241232551. PMID: 38549351. PMCID: PMC10981225. DOI: 10.1177/23312165241232551.
Abstract
In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean age = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. The k-fold cross-validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
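As a rough illustration of this kind of pipeline (a sketch under assumed data shapes, not the authors' implementation), assuming scikit-learn; the features, labels, and trial counts are simulated:

```python
# Sketch: k-NN classification of task demand from seven pupil/cardiovascular
# features, contrasting group-level k-fold CV with per-participant classifiers.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_subj, n_trials = 29, 40
X = rng.normal(size=(n_subj * n_trials, 7))     # e.g., baseline pupil, PPD, IBI, ...
y = rng.integers(0, 2, size=n_subj * n_trials)  # e.g., low vs. high task demand
participant = np.repeat(np.arange(n_subj), n_trials)

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))

# Group-level model: k-fold cross-validation pooling all participants.
print("group-level accuracy:", cross_val_score(knn, X, y, cv=5).mean())

# Individually calibrated models: one classifier per participant.
acc = [cross_val_score(knn, X[participant == p], y[participant == p], cv=5).mean()
       for p in range(n_subj)]
print("mean individual accuracy:", np.mean(acc))
```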
Affiliation(s)
- Bethany Plain, Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands; Eriksholm Research Centre, Snekkersten, Denmark
- Hidde Pielage, Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands; Eriksholm Research Centre, Snekkersten, Denmark
- Sophia E. Kramer, Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
- Michael Richter, School of Psychology, Liverpool John Moores University, Liverpool, UK
- Gabrielle H. Saunders, Manchester Centre for Audiology and Deafness (ManCAD), University of Manchester, Manchester, UK
- Niek J. Versfeld, Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
- Adriana A. Zekveld, Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
9
Van Wilderode M, Van Humbeeck N, Krampe R, van Wieringen A. Speech-Identification During Standing as a Multitasking Challenge for Young, Middle-Aged and Older Adults. Trends Hear 2024; 28:23312165241260621. PMID: 39053897. PMCID: PMC11282555. DOI: 10.1177/23312165241260621.
Abstract
While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. Listening and postural control are known to be associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise or a large memory load. This study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task), and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty levels of the listening and postural control tasks were manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that interaction with postural control occurs even when cognitive control demands for listening are minimal. Correlational analysis revealed that hearing loss was a better predictor of speech identification and postural control than age.
Affiliation(s)
- Mira Van Wilderode, Department of Neurosciences, Research Group Experimental ORL, KU Leuven, Leuven, Belgium
- Ralf Krampe, Brain & Cognition Group, University of Leuven (KU Leuven), Leuven, Belgium
- Astrid van Wieringen, Department of Neurosciences, Research Group Experimental ORL, KU Leuven, Leuven, Belgium; Department of Special Needs Education, University of Oslo, Oslo, Norway
10
Slugocki C, Kuk F, Korhonen P. Alpha-Band Dynamics of Hearing Aid Wearers Performing the Repeat-Recall Test (RRT). Trends Hear 2024; 28:23312165231222098. PMID: 38549287. PMCID: PMC10981257. DOI: 10.1177/23312165231222098.
Abstract
This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT), an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high-context sentences, alpha power was significantly higher in conditions where listeners had to recall the sentence materials than in conditions where the recall requirement was omitted. When tested with low-context sentences, alpha power was relatively high irrespective of the memory component. Within subjects, alpha power was related to the listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.
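For context, alpha-band power is conventionally estimated from the EEG power spectrum; a minimal sketch follows (synthetic signal, assuming NumPy/SciPy, not the authors' processing chain):

```python
# Sketch: estimating alpha-band (8-12 Hz) power from one EEG channel
# via Welch's power spectral density.
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate in Hz (illustrative)
t = np.arange(0, 10, 1 / fs)              # 10 s of data
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.normal(size=t.size)  # 10 Hz + noise

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)   # 2-s windows -> 0.5 Hz bins
band = (freqs >= 8) & (freqs <= 12)
alpha_power = psd[band].mean()                   # mean PSD within the alpha band
print(f"mean alpha-band power: {alpha_power:.3f} (a.u.)")
```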
Affiliation(s)
- Christopher Slugocki, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Francis Kuk, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Petri Korhonen, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
11
Shin J, Noh S, Park J, Sung JE. Syntactic complexity differentially affects auditory sentence comprehension performance for individuals with age-related hearing loss. Front Psychol 2023; 14:1264994. PMID: 37965654. PMCID: PMC10641445. DOI: 10.3389/fpsyg.2023.1264994.
Abstract
Objectives This study examined whether older adults with hearing loss (HL) experience greater difficulties in auditory sentence comprehension than those with typical hearing (TH) when the linguistic burden of syntactic complexity is systematically manipulated by varying either sentence type (active vs. passive) or sentence length (3- vs. 4-phrase). Methods A total of 22 individuals with HL and 24 controls participated in the study, completing a sentence comprehension test (SCT), standardized memory assessments, and pure-tone audiometry. Generalized linear mixed-effects models were employed to compare the effects of sentence type and length on SCT accuracy, and Pearson correlations were computed to explore the relationships between SCT accuracy and other factors. Additionally, stepwise regression analyses were employed to identify memory-related predictors of sentence comprehension ability. Results Older adults with HL exhibited poorer performance on passive sentences than on active sentences compared to controls, with sentence length controlled. Greater difficulty with passive sentences was linked to working memory capacity, which emerged as the most significant predictor of passive-sentence comprehension among participants with HL. Conclusion Our findings contribute to the understanding of the linguistic-cognitive deficits linked to age-related hearing loss by demonstrating its detrimental impact on the processing of passive sentences. Cognitively healthy adults with hearing difficulties may face challenges in comprehending syntactically more complex sentences that impose higher computational demands, particularly on working memory allocation.
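As a simplified sketch of this type of accuracy model (assuming statsmodels and simulated trial-level data; random participant effects, which a full generalized linear mixed-effects model would include, are omitted here for brevity):

```python
# Sketch: logistic regression on trial-level comprehension accuracy with a
# sentence type x length interaction plus hearing group.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 46 * 40  # 46 participants x 40 illustrative trials
df = pd.DataFrame({
    "correct": rng.integers(0, 2, size=n),                 # 1 = correct response
    "stype": rng.choice(["active", "passive"], size=n),    # sentence type
    "length": rng.choice(["3phrase", "4phrase"], size=n),  # sentence length
    "group": rng.choice(["HL", "TH"], size=n),             # hearing status
})
fit = smf.logit("correct ~ stype * length + group", data=df).fit(disp=False)
print(fit.summary())
```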
Affiliation(s)
- Jee Eun Sung, Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea
12
Shetty HN, Raju S, Singh S S. The relationship between age, acceptable noise level, and listening effort in middle-aged and older-aged individuals. J Otol 2023; 18:220-229. PMID: 37877073. PMCID: PMC10593579. DOI: 10.1016/j.joto.2023.09.004.
Abstract
Objective The purpose of the study was to evaluate listening effort in adults who experience varied annoyance toward noise. Materials and methods Fifty native Kannada-speaking adults aged 41-68 years participated. We evaluated each participant's acceptable noise level (ANL) while listening to speech. Further, a sentence-final word-identification and recall test at 0 dB SNR (a less favorable condition) and 4 dB SNR (a relatively favorable condition) was used to assess listening effort. Repeat and recall scores were obtained for each condition. Results The regression model revealed that listening effort increased by 0.6% at 0 dB SNR and by 0.5% at 4 dB SNR with every one-year advancement in age, and by 0.9% at 0 dB SNR and by 0.7% at 4 dB SNR with every one dB increase in ANL. At 0 dB SNR and 4 dB SNR, moderate and mild negative correlations, respectively, were noted between listening effort and annoyance toward noise when age was controlled. Conclusion Listening effort increases with age, and the effect is greater in less favorable than in relatively favorable conditions. However, when annoyance toward noise was controlled, the impact of age on listening effort was reduced. Listening effort correlated with the level of annoyance once the age effect was controlled. Furthermore, listening effort was predicted from the ANL to a moderate degree.
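To make the reported slopes concrete, a small worked example applying the percentages quoted above (combining the two predictors into a single additive change is an illustrative simplification):

```python
# Sketch: applying the reported listening-effort (LE) slopes from the abstract:
# +0.6%/year of age at 0 dB SNR, +0.5%/year at 4 dB SNR;
# +0.9%/dB of ANL at 0 dB SNR, +0.7%/dB at 4 dB SNR.
def le_change(years: float, anl_db: float, snr_db: int) -> float:
    age_slope = {0: 0.6, 4: 0.5}[snr_db]   # percent per year
    anl_slope = {0: 0.9, 4: 0.7}[snr_db]   # percent per dB ANL
    return years * age_slope + anl_db * anl_slope

# A listener 10 years older with a 5 dB higher ANL, tested at 0 dB SNR:
print(f"{le_change(10, 5, 0):.1f}% more listening effort")  # 10.5%
```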
Affiliation(s)
- Suma Raju, Department of Speech-Language Pathology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
- Sanjana Singh S, Department of Audiology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
13
Wagner L, Werle ALA, Hoffmann A, Rahne T, Fengler A. Is there an influence of perceptual or cognitive impairment on complex sentence processing in hearing aid users? PLoS One 2023; 18:e0291832. PMID: 37768903. PMCID: PMC10538791. DOI: 10.1371/journal.pone.0291832.
Abstract
BACKGROUND Hearing-impaired listeners often have difficulty understanding complex sentences. It is not clear whether perceptual or cognitive deficits have more impact on reduced language processing abilities, and how a hearing aid might compensate for that. METHODS In a prospective study with five hearing aid users and five normal-hearing, age-matched participants, the processing of complex sentences was investigated. Audiometric and working memory tests were performed. Subject- and object-initial sentences from the Oldenburg Corpus of Linguistically and Audiologically Controlled Sentences (OLACS) were presented to the participants while an electroencephalogram (EEG) was recorded. RESULTS The perceptual difference between object- and subject-initial sentences did not lead to processing changes, whereas the ambiguity in object-initial sentences with feminine or neuter articles evoked a P600 potential. For hearing aid users, this P600 had a longer latency than for normal-hearing subjects. CONCLUSION The EEG is a suitable method for investigating differences in complex speech processing in hearing aid users. Longer P600 latencies indicate higher cognitive effort for processing complex sentences in hearing aid users.
Affiliation(s)
- Luise Wagner, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Anna-Leoni A. Werle, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Antonia Hoffmann, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Torsten Rahne, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Anja Fengler, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Leipzig, Leipzig, Germany
14
Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. PMID: 37448716. PMCID: PMC10338176. DOI: 10.3389/fpsyg.2023.1183303.
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns that reflect listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. Better understanding of the relationships between head movement, full body kinetics, and hearing health, should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication with the goal of expanding the field of ecologically-specific listener behavior.
Affiliation(s)
- Nathan C. Higgins, Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo, Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States; School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral, Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins, Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
15
Harris MS, Hamel BL, Wichert K, Kozlowski K, Mleziva S, Ray C, Pisoni DB, Kronenberger WG, Moberly AC. Contribution of Verbal Learning & Memory and Spectro-Temporal Discrimination to Speech Recognition in Cochlear Implant Users. Laryngoscope 2023; 133:661-669. PMID: 35567421. PMCID: PMC9659673. DOI: 10.1002/lary.30210.
Abstract
OBJECTIVES Existing cochlear implant (CI) outcomes research demonstrates a high degree of variability in device effectiveness among experienced CI users. Increasing evidence suggests that verbal learning and memory (VL&M) may have an influence on speech recognition with CIs. This study examined the relations between visual measures of VL&M and speech recognition in CI users, in a series of models that also incorporated spectro-temporal discrimination. Predictions were that (1) speech recognition would be associated with VL&M abilities and (2) VL&M would contribute to speech recognition outcomes above and beyond spectro-temporal discrimination in multivariable models of speech recognition. METHODS This cross-sectional study included 30 adult postlingually deaf experienced CI users who completed a nonauditory visual version of the California Verbal Learning Test-Second Edition (v-CVLT-II) to assess VL&M, and the Spectral-Temporally Modulated Ripple Test (SMRT), an auditory measure of spectro-temporal processing. Participants also completed a battery of word and sentence recognition tasks. RESULTS CI users showed significant correlations between some v-CVLT-II measures (short-delay free- and cued-recall, retroactive interference, and "subjective" organizational recall strategies) and speech recognition measures. Performance on the SMRT was correlated with all speech recognition measures. Hierarchical multivariable linear regression analyses showed that SMRT performance accounted for a significant degree of speech recognition outcome variance. Moreover, for all speech recognition measures, VL&M scores contributed independently in addition to SMRT. CONCLUSION Measures of spectro-temporal discrimination and VL&M were associated with speech recognition in CI users. After accounting for spectro-temporal discrimination, VL&M contributed independently to performance on measures of speech recognition for words and sentences produced by single and multiple talkers. LEVEL OF EVIDENCE 3.
Affiliation(s)
- Michael S. Harris, Department of Otolaryngology & Communication Sciences, Medical College of Wisconsin, Milwaukee, WI; Department of Neurosurgery, Medical College of Wisconsin, Milwaukee, WI
- Kristin Wichert, Department of Communication Sciences & Disorders, University of Wisconsin - Eau Claire, Eau Claire, WI
- Kristin Kozlowski, Department of Otolaryngology & Communication Sciences, Medical College of Wisconsin, Milwaukee, WI
- Sarah Mleziva, Department of Otolaryngology & Communication Sciences, Medical College of Wisconsin, Milwaukee, WI
- Christin Ray, Department of Otolaryngology – Head & Neck Surgery, The Ohio State Wexner Medical Center, Columbus, OH
- David B. Pisoni, Speech Research Laboratory, Department of Psychology, Indiana University, Bloomington, IN
- Aaron C. Moberly, Department of Otolaryngology – Head & Neck Surgery, The Ohio State Wexner Medical Center, Columbus, OH
16
Beckers L, Tromp N, Philips B, Mylanus E, Huinck W. Exploring neurocognitive factors and brain activation in adult cochlear implant recipients associated with speech perception outcomes: a scoping review. Front Neurosci 2023; 17:1046669. PMID: 36816114. PMCID: PMC9932917. DOI: 10.3389/fnins.2023.1046669.
Abstract
Background Cochlear implants (CIs) are considered an effective treatment for severe-to-profound sensorineural hearing loss. However, speech perception outcomes are highly variable among adult CI recipients. Top-down neurocognitive factors have been hypothesized to contribute to this variation, which is currently only partly explained by biological and audiological factors. Studies investigating this use varying methods and observe varying outcomes, and their relevance had yet to be evaluated in a review. Gathering and structuring this evidence in this scoping review provides a clear overview of where this research line currently stands, with the aim of guiding future research. Objective To understand to what extent different neurocognitive factors influence speech perception in adult CI users with a postlingual onset of hearing loss, by systematically reviewing the literature. Methods A systematic scoping review was performed according to the PRISMA guidelines. Studies investigating the influence of one or more neurocognitive factors on speech perception post-implantation were included. Word and sentence perception in quiet and in noise were included as speech perception outcome metrics, and six key neurocognitive domains, as defined by the DSM-5, were covered during the literature search (protocol in open science registries: 10.17605/OSF.IO/Z3G7W; searches in June 2020 and April 2022). Results From 5,668 retrieved articles, 54 were included and grouped into three categories by the measures used to relate to speech perception outcomes: (1) nineteen studies investigating brain activation, (2) thirty-one investigating performance on cognitive tests, and (3) eighteen investigating linguistic skills. Conclusion The use of cognitive functions (recruiting the frontal cortex), the use of visual cues (recruiting the occipital cortex), and a temporal cortex still available for language processing are beneficial for adult CI users. Cognitive assessments indicate that performance on non-verbal intelligence tasks correlated positively with speech perception outcomes. Performance on auditory or visual working memory, learning, memory, and vocabulary tasks was unrelated to speech perception outcomes, and performance on the Stroop task was unrelated to word perception in quiet. However, there are still many uncertainties regarding the explanation of inconsistent results between papers, and more comprehensive studies are needed, e.g., including different assessment times or combining neuroimaging and behavioral measures. Systematic review registration https://doi.org/10.17605/OSF.IO/Z3G7W.
Affiliation(s)
- Loes Beckers, Cochlear Ltd., Mechelen, Belgium; Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Nikki Tromp, Cochlear Ltd., Mechelen, Belgium; Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Emmanuel Mylanus, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
- Wendy Huinck, Department of Otorhinolaryngology, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Center, Nijmegen, Netherlands
17
Lemel R, Shalev L, Nitsan G, Ben-David BM. Listen up! ADHD slows spoken-word processing in adverse listening conditions: Evidence from eye movements. Res Dev Disabil 2023; 133:104401. PMID: 36577332. DOI: 10.1016/j.ridd.2022.104401.
Abstract
BACKGROUND Cognitive skills such as sustained attention, inhibition, and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition against a multi-talker babble (MTB) background for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD. AIMS Gauging the effects of ADHD on segregating the spoken target word from its sound-sharing competitor, in MTB and under working-memory (WM) load. METHODS AND PROCEDURES Twenty-four yaADHD and 22 matched controls that differ in sustained attention (SA) but not in WM were asked to follow spoken instructions presented in MTB to touch a named object, while retaining one (low-load) or four (high-load) digits for later recall. Their eye fixations were tracked. OUTCOMES AND RESULTS In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD. CONCLUSIONS AND IMPLICATIONS ADHD slows speech processing in adverse listening conditions and increases hesitation as speech unfolds in time. These effects, detected only by online eyetracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD. WHAT THIS PAPER ADDS: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment, and social interactions. It involves several executive functions, such as inhibition and sustained attention. Impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD. The current study is the first to investigate online speech processing as the word unfolds in time using eyetracking for young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared to matched controls. The performance of yaADHD indicated increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations, at the single-word level, could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications for quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on standardized WM tests, WM load appears to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eyetracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion.
Affiliation(s)
- Rony Lemel, Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Lilach Shalev, Constantiner School of Education and Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel
- Gal Nitsan, Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Boaz M Ben-David, Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada; Toronto Rehabilitation Institute, University Health Networks (UHN), ON, Canada
18
Moberly AC, Varadarajan VV, Tamati TN. Noise-Vocoded Sentence Recognition and the Use of Context in Older and Younger Adult Listeners. J Speech Lang Hear Res 2023; 66:365-381. PMID: 36475738. PMCID: PMC10023188. DOI: 10.1044/2022_jslhr-22-00184.
Abstract
PURPOSE When listening to speech under adverse conditions, older adults, even with "age-normal" hearing, face challenges that may lead to poorer speech recognition than their younger peers. Older listeners generally demonstrate poorer suprathreshold auditory processing along with aging-related declines in neurocognitive functioning that may impair their ability to compensate using "top-down" cognitive-linguistic functions. This study explored top-down processing in older and younger adult listeners, specifically the use of semantic context during noise-vocoded sentence recognition. METHOD Eighty-four adults with age-normal hearing (45 young normal-hearing [YNH] and 39 older normal-hearing [ONH] adults) participated. Participants were tested for recognition accuracy for two sets of noise-vocoded sentence materials: one that was semantically meaningful and the other that was syntactically appropriate but semantically anomalous. Participants were also tested for hearing ability and for neurocognitive functioning to assess working memory capacity, speed of lexical access, inhibitory control, and nonverbal fluid reasoning, as well as vocabulary knowledge. RESULTS The ONH and YNH listeners made use of semantic context to a similar extent. Nonverbal reasoning predicted recognition of both meaningful and anomalous sentences, whereas pure-tone average contributed additionally to anomalous sentence recognition. None of the hearing, neurocognitive, or language measures significantly predicted the amount of context gain, computed as the difference score between meaningful and anomalous sentence recognition. However, exploratory cluster analyses demonstrated four listener profiles and suggested that individuals may vary in the strategies used to recognize speech under adverse listening conditions. CONCLUSIONS Older and younger listeners made use of sentence context to similar degrees. Nonverbal reasoning was found to be a contributor to noise-vocoded sentence recognition. However, different listeners may approach the problem of recognizing meaningful speech under adverse conditions using different strategies based on their hearing, neurocognitive, and language profiles. These findings provide support for the complexity of bottom-up and top-down interactions during speech recognition under adverse listening conditions.
Affiliation(s)
- Aaron C. Moberly, Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Terrin N. Tamati, Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
19
Lansford KL, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Young Adult Listeners. J Speech Lang Hear Res 2023; 66:30-47. PMID: 36480697. PMCID: PMC10023189. DOI: 10.1044/2022_jslhr-22-00391.
Abstract
PURPOSE Although recruitment of cognitive-linguistic resources to support dysarthric speech perception and adaptation is presumed by theoretical accounts of effortful listening and supported by cross-disciplinary empirical findings, prospective relationships have received limited attention in the disordered speech literature. This study aimed to examine the predictive relationships between cognitive-linguistic parameters and intelligibility outcomes associated with familiarization with dysarthric speech in young adult listeners. METHOD A cohort of 156 listener participants between the ages of 18 and 50 years completed a three-phase perceptual training protocol (pretest, training, and posttest) with one of three speakers with dysarthria. Additionally, listeners completed the National Institutes of Health Toolbox Cognition Battery to obtain measures of the following cognitive-linguistic constructs: working memory, inhibitory control of attention, cognitive flexibility, processing speed, and vocabulary knowledge. RESULTS Elastic net regression models revealed that select cognitive-linguistic measures and their two-way interactions predicted both initial intelligibility and intelligibility improvement of dysarthric speech. While some consistency across models was shown, unique constellations of select cognitive factors and their interactions predicted initial intelligibility and intelligibility improvement of the three different speakers with dysarthria. CONCLUSIONS Current findings extend empirical support for theoretical models of speech perception in adverse listening conditions to dysarthric speech signals. Although predictive relationships were complex, vocabulary knowledge, working memory, and cognitive flexibility often emerged as important variables across the models.
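A minimal sketch of an elastic net model with two-way interactions, in the spirit of the analysis above (assuming scikit-learn; predictors, outcome, and hyperparameter grid are illustrative, not the authors'):

```python
# Sketch: elastic net regression predicting intelligibility improvement from
# five cognitive-linguistic measures and their two-way interactions.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(4)
X = rng.normal(size=(156, 5))  # e.g., WM, inhibition, flexibility, speed, vocabulary
y = rng.normal(size=156)       # e.g., posttest minus pretest intelligibility

model = make_pipeline(
    PolynomialFeatures(degree=2, interaction_only=True, include_bias=False),
    StandardScaler(),
    ElasticNetCV(l1_ratio=[0.1, 0.5, 0.9], cv=5),  # tunes penalty mix and strength
)
model.fit(X, y)
print("selected alpha:", model[-1].alpha_, "l1_ratio:", model[-1].l1_ratio_)
```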
Affiliation(s)
- Kaitlin L. Lansford, School of Communication Science & Disorders, Florida State University, Tallahassee
- Stephanie A. Borrie, Department of Communicative Disorders and Deaf Education, Utah State University, Logan
20
Homman L, Danielsson H, Rönnberg J. A structural equation mediation model captures the predictions amongst the parameters of the ease of language understanding model. Front Psychol 2023; 14:1015227. PMID: 36936006. PMCID: PMC10020708. DOI: 10.3389/fpsyg.2023.1015227.
Abstract
Objective The aim of the present study was to assess the validity of the Ease of Language Understanding (ELU) model through a statistical assessment of the relationships among its main parameters: processing speed, phonology, working memory (WM), and dB speech-to-noise ratio (SNR) for a given Speech Recognition Threshold (SRT) in a sample of hearing aid users from the n200 database. Methods Hearing aid users were assessed on several hearing and cognitive tests. Latent Structural Equation Models (SEMs) were applied to investigate the relationships between the main parameters of the ELU model while controlling for age and pure-tone average (PTA). Several competing models were assessed. Results Analyses indicated that a mediating SEM was the best fit for the data. The results showed that (i) phonology independently predicted the speech recognition threshold in both easy and adverse listening conditions; (ii) WM was not predictive of dB SNR for a given SRT in the easier listening conditions; and (iii) processing speed was predictive of dB SNR for a given SRT in the more adverse conditions, mediated via WM. Conclusion The results were in line with the predictions of the ELU model: (i) phonology contributed to dB SNR for a given SRT in all listening conditions, (ii) WM is only invoked when listening conditions are adverse, (iii) better WM capacity aids the understanding of what has been said in adverse listening conditions, and finally (iv) the results highlight the importance of optimizing processing speed when listening conditions are adverse and WM is activated.
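The mediation structure described above can be written in lavaan-style path syntax; the sketch below assumes the Python semopy package and placeholder variable names, so treat it as illustrative rather than the authors' analysis.

```python
import pandas as pd
from semopy import Model  # assumed SEM library with lavaan-style syntax

# Mediation: processing speed -> WM -> SRT, with phonology as a direct
# predictor, controlling for age and PTA. Variable names are hypothetical.
desc = """
WM  ~ speed + age + PTA
SRT ~ phonology + WM + age + PTA
"""
df = pd.read_csv("n200_subset.csv")  # hypothetical data file
model = Model(desc)
model.fit(df)
print(model.inspect())  # path estimates, including the mediated effect
```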
Collapse
Affiliation(s)
- Lina Homman
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- *Correspondence: Lina Homman,
| | - Henrik Danielsson
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Jerker Rönnberg
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
21
|
Li M, Chen X, Zhu J, Chen F. Audiovisual Mandarin Lexical Tone Perception in Quiet and Noisy Contexts: The Influence of Visual Cues and Speech Rate. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:4385-4403. [PMID: 36269618 DOI: 10.1044/2022_jslhr-22-00024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
PURPOSE Drawing on the theory of embodied cognition, which proposes tight interactions among perception, motor action, and cognition, this study tested the hypothesis that speech rate-altered Mandarin lexical tone perception in quiet and noisy environments could be affected by dynamic bodily cross-modal information. METHOD Fifty-three adult listeners completed a Mandarin tone perception task with 720 tone stimuli in auditory-only (AO), auditory-facial (AF), and auditory-facial-plus-gestural (AFG) modalities, at fast, normal, and slow speech rates under quiet and noisy conditions. In the AF and AFG modalities, both congruent and incongruent audiovisual information were designed and presented. Generalized linear mixed-effects models were constructed to analyze the accuracy of tone perception across different conditions. RESULTS In Mandarin tone perception, the magnitude of enhancement from AF and AFG cues across the three speech rates was significantly higher than that from the AO cue in the adverse context of noise, yet additional metaphoric gestures did not show significant differences from the facial information alone. Furthermore, the performance of auditory tone perception at the fast speech rate was significantly better than that at the normal speech rate when the inputs were incongruent between auditory and visual channels in quiet. CONCLUSIONS This study provided compelling evidence that integrated audiovisual information plays a vital role not only in improving lexical tone perception in noise but also in modulating the effects of speech rate on Mandarin tone perception in quiet for native listeners. Our findings, which support the theory of embodied cognition, have implications for speech and hearing rehabilitation in both younger and older clinical populations.
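As a rough sketch of the analysis family named above (a by-subject random-intercept logistic GLMM for trial-level accuracy), one option in Python is the variational-Bayes binomial mixed GLM in statsmodels; the formula and column names are assumptions, not the study's specification.

```python
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

df = pd.read_csv("tone_trials.csv")  # hypothetical trial-level data; correct in {0, 1}
model = BinomialBayesMixedGLM.from_formula(
    "correct ~ C(modality) * C(rate) * C(noise)",  # fixed effects
    {"subject": "0 + C(subject)"},                 # random intercept per listener
    df,
)
result = model.fit_vb()  # variational Bayes estimation
print(result.summary())
```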
Collapse
Affiliation(s)
- Manhong Li
- School of Foreign Languages, Hunan University, Changsha, China
- School of Foreign Languages, Hunan First Normal University, Changsha, China
| | - Xiaoxiang Chen
- School of Foreign Languages, Hunan University, Changsha, China
| | - Jiaqiang Zhu
- Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
| | - Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
| |
Collapse
|
22
|
Carter BL, Apoux F, Healy EW. The Influence of Noise Type and Semantic Predictability on Word Recall in Older Listeners and Listeners With Hearing Impairment. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:3548-3565. [PMID: 35973100 PMCID: PMC9913215 DOI: 10.1044/2022_jslhr-22-00075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2022] [Revised: 05/01/2022] [Accepted: 05/11/2022] [Indexed: 06/15/2023]
Abstract
PURPOSE A dual-task paradigm was implemented to investigate how noise type and sentence context may interact with age and hearing loss to impact word recall during speech recognition. METHOD Three noise types with varying degrees of temporal/spectrotemporal modulation were used: speech-shaped noise, speech-modulated noise, and three-talker babble. Participant groups included younger listeners with normal hearing (NH), older listeners with near-normal hearing, and older listeners with sensorineural hearing loss. An adaptive measure was used to establish the signal-to-noise ratio approximating 70% sentence recognition for each participant in each noise type. A word-recall task was then implemented while matching speech-recognition performance across noise types and participant groups. Random-intercept linear mixed-effects models were used to determine the effects of and interactions between noise type, sentence context, and participant group on word recall. RESULTS The results suggest that noise type does not significantly impact word recall when word-recognition performance is controlled. When data from noise types were pooled and compared with quiet, and recall was assessed: older listeners with near-normal hearing performed well when either quiet backgrounds or high sentence context (or both) were present, but older listeners with hearing loss performed well only when both quiet backgrounds and high sentence context were present. Younger listeners with NH were robust to the detrimental effects of noise and low context. CONCLUSIONS The general presence of noise has the potential to decrease word recall, but type of noise does not appear to significantly impact this observation when overall task difficulty is controlled. The presence of noise as well as deficits related to age and/or hearing loss appear to limit the availability of cognitive processing resources available for working memory during conversation in difficult listening environments. The conversation environments that impact these resources appear to differ depending on age and/or hearing status.
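For readers unfamiliar with the model family, a random-intercept linear mixed-effects model of recall can be sketched with statsmodels as follows; the column names are illustrative, not the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("recall_long.csv")  # hypothetical long-format data
m = smf.mixedlm(
    "recall ~ C(noise_type) * C(context) * C(group)",  # fixed effects
    data=df,
    groups=df["participant"],  # random intercept for each listener
).fit()
print(m.summary())
```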
Collapse
Affiliation(s)
- Brittney L. Carter
- Department of Speech and Hearing Science, The Ohio State University, Columbus
| | - Frédéric Apoux
- Department of Otolaryngology—Head & Neck Surgery, The Ohio State University, Columbus
| | - Eric W. Healy
- Department of Speech and Hearing Science, The Ohio State University, Columbus
| |
Collapse
|
23
|
Impact of Effortful Word Recognition on Supportive Neural Systems Measured by Alpha and Theta Power. Ear Hear 2022; 43:1549-1562. [DOI: 10.1097/aud.0000000000001211] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
24
|
Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022; 13:967260. [PMID: 36118435 PMCID: PMC9477118 DOI: 10.3389/fpsyg.2022.967260] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Accepted: 08/08/2022] [Indexed: 11/13/2022] Open
Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal-hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and LTM is at the center of the review; these representations are integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
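The match/mismatch logic at the heart of the model can be caricatured in a few lines; this toy function is an editorial illustration of the two processing routes and their time scales, not the authors' computational model.

```python
# Toy ELU routing: a RAMBPHO match yields fast, implicit lexical access;
# a mismatch recruits explicit WM repair on a much slower time scale.
def elu_route(rambpho_input, sltm_lexicon):
    if rambpho_input in sltm_lexicon:
        return ("implicit lexical access", "~100-400 ms")
    return ("explicit WM repair via SLTM/ELTM", "seconds")

print(elu_route("cat", {"cat", "hat"}))   # match -> rapid, implicit
print(elu_route("ca?", {"cat", "hat"}))   # mismatch -> slow, explicit
```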
Collapse
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | | | | | | |
Collapse
|
25
|
Cowan T, Paroby C, Leibold LJ, Buss E, Rodriguez B, Calandruccio L. Masked-Speech Recognition for Linguistically Diverse Populations: A Focused Review and Suggestions for the Future. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:3195-3216. [PMID: 35917458 PMCID: PMC9911100 DOI: 10.1044/2022_jslhr-22-00011] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 04/12/2022] [Accepted: 05/04/2022] [Indexed: 06/15/2023]
Abstract
PURPOSE Twenty years ago, von Hapsburg and Peña (2002) wrote a tutorial that reviewed the literature on speech audiometry and bilingualism and outlined valuable recommendations to increase the rigor of the evidence base. This review article returns to that seminal tutorial to reflect on how that advice was applied over the last 20 years and to provide updated recommendations for future inquiry. METHOD We conducted a focused review of the literature on masked-speech recognition for bilingual children and adults. First, we evaluated how studies published since 2002 described bilingual participants. Second, we reviewed the literature on native language masked-speech recognition. Third, we discussed theoretically motivated experimental work. Fourth, we outlined how recent research in bilingual speech recognition can be used to improve clinical practice. RESULTS Research conducted since 2002 commonly describes bilingual samples in terms of their language status, competency, and history. Bilingualism was not consistently associated with poor masked-speech recognition. For example, bilinguals who were exposed to English prior to age 7 years and who were dominant in English performed comparably to monolinguals for masked-sentence recognition tasks. To the best of our knowledge, there are no data to document the masked-speech recognition ability of these bilinguals in their other language compared to a second monolingual group, which is an important next step. Nonetheless, individual factors that commonly vary within bilingual populations were associated with masked-speech recognition and included language dominance, competency, and age of acquisition. We identified methodological issues in sampling strategies that could, in part, be responsible for inconsistent findings between studies. For instance, disparities in socioeconomic status (SES) between recruited bilingual and monolingual groups could cause confounding bias within the research design. CONCLUSIONS Dimensions of the bilingual linguistic profile should be considered in clinical practice to inform counseling and (re)habilitation strategies since susceptibility to masking is elevated in at least one language for most bilinguals. Future research should continue to report language status, competency, and history but should also report language stability and demand for use data. In addition, potential confounds (e.g., SES, educational attainment) when making group comparisons between monolinguals and bilinguals must be considered.
Collapse
Affiliation(s)
- Tiana Cowan
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
| | - Caroline Paroby
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
| | - Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
| | - Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
| | - Barbara Rodriguez
- Department of Speech and Hearing Sciences, The University of New Mexico, Albuquerque
| | - Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
| |
Collapse
|
26
|
Feldman A, Patou F, Baumann M, Stockmarr A, Waldemar G, Maier AM, Vogel A. Listen Carefully protocol: an exploratory case-control study of the association between listening effort and cognitive function. BMJ Open 2022; 12:e051109. [PMID: 35264340 PMCID: PMC8915370 DOI: 10.1136/bmjopen-2021-051109] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
INTRODUCTION A growing body of evidence suggests that hearing loss is a significant and potentially modifiable risk factor for cognitive impairment. Although the mechanisms underlying the associations between cognitive decline and hearing loss are unclear, listening effort has been posited as one of the mechanisms involved in cognitive decline in older age. To date, there has been a lack of research investigating this association, particularly among adults with mild cognitive impairment (MCI). METHODS AND ANALYSIS Fifteen to 25 cognitively healthy participants and 15-25 patients with MCI (age 40-85 years) will be recruited to participate in an exploratory study investigating the association between cognitive functioning and listening effort. Both behavioural and objective measures of listening effort will be investigated. The sentence-final word identification and recall (SWIR) test will be administered with unintelligible single-talker background speech while pupil dilation is monitored. Evaluation of cognitive function will be carried out in a clinical setting using a battery of neuropsychological tests. This study is exploratory and proof of concept; its findings will help determine whether larger-scale trials are warranted. ETHICS AND DISSEMINATION A written exemption from ethical approval was obtained from the Scientific Ethics Committee of the Capital Region of Denmark (De Videnskabsetiske Komiteer i Region Hovedstaden), reference 19042404, and the project is registered pre-results at clinicaltrials.gov, reference NCT04593290, Protocol ID 19042404. Study results will be disseminated in peer-reviewed journals and at conferences.
Collapse
Affiliation(s)
- Alix Feldman
- Engineering Systems Design, Department of Technology Management and Economics, Technical University of Denmark, Kongens Lyngby, Denmark
| | - François Patou
- Engineering Systems Design, Department of Technology Management and Economics, Technical University of Denmark, Kongens Lyngby, Denmark
- Research and Technology Group, Oticon Medical, Smørum, Denmark
| | - Monika Baumann
- Centre for Applied Audiology Research, Oticon, Smørum, Denmark
| | - Anders Stockmarr
- Statistics and Data Analysis, Department of Mathematics, Technical University of Denmark, Kongens Lyngby, Denmark
| | - Gunhild Waldemar
- Danish Dementia Research Centre, Department of Neurology, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
| | - Anja M Maier
- Engineering Systems Design, Department of Technology Management and Economics, Technical University of Denmark, Kongens Lyngby, Denmark
- Department of Design, Manufacturing and Engineering Management, Faculty of Engineering, University of Strathclyde, Glasgow, UK
| | - Asmus Vogel
- Danish Dementia Research Centre, Department of Neurology, Rigshospitalet, Copenhagen, Denmark
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
27
|
Bsharat-Maalouf D, Karawani H. Learning and bilingualism in challenging listening conditions: How challenging can it be? Cognition 2022; 222:105018. [PMID: 35032867 DOI: 10.1016/j.cognition.2022.105018] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/03/2020] [Revised: 12/14/2021] [Accepted: 01/05/2022] [Indexed: 11/19/2022]
Abstract
When speech is presented in their second language (L2), bilinguals have more difficulty with speech perception in noise than monolinguals do. However, how noise affects speech perception by bilinguals in their first language (L1) is still unclear. In addition, it is not clear whether bilinguals' speech perception in challenging listening conditions is specific to the type of degradation, or whether there is a shared mechanism for bilingual speech processing under complex listening conditions. Therefore, the current study examined the speech perception of 60 Arabic-Hebrew bilinguals and a control group of native Hebrew speakers during degraded (speech in noise, vocoded speech) and quiet listening conditions. Between-participant comparisons (native Hebrew speakers versus bilinguals' perceptual performance in L1) and within-participant comparisons (bilinguals' perceptual performance in L1 versus L2) were conducted. The findings showed that bilinguals listening in L1 had more difficulty in noisy conditions than their control counterparts did, even though they performed like controls under favorable listening conditions. However, bilingualism did not hinder language learning mechanisms. Bilinguals in L1 outperformed native Hebrew speakers in the perception of vocoded speech, demonstrating more extended learning processes. Bilinguals' perceptual performance in L1 versus L2 varied by task complexity. Correlation analyses revealed that bilinguals who coped better with noise degradation were more successful in perceiving the vocoding distortion. Together, these results provide insights into the mechanisms that contribute to speech perceptual performance in challenging listening conditions and suggest that bilinguals' language proficiency and age of language acquisition are not the only factors that affect performance. Rather, duration of exposure to the languages, co-activation, and the ability to benefit from exposure to novel stimuli appear to affect the perceptual performance of bilinguals, even when they are operating in their dominant language. Our findings suggest that bilinguals use a shared mechanism for speech processing under challenging listening conditions.
Collapse
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
| | - Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel.
| |
Collapse
|
28
|
Dingemanse G, Goedegebure A. Listening Effort in Cochlear Implant Users: The Effect of Speech Intelligibility, Noise Reduction Processing, and Working Memory Capacity on the Pupil Dilation Response. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:392-404. [PMID: 34898265 DOI: 10.1044/2021_jslhr-21-00230] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
PURPOSE This study aimed to evaluate the effect of speech recognition performance, working memory capacity (WMC), and a noise reduction algorithm (NRA) on listening effort as measured with pupillometry in cochlear implant (CI) users while listening to speech in noise. METHOD Speech recognition and pupil responses (peak dilation, peak latency, and release of dilation) were measured during a speech recognition task at three speech-to-noise ratios (SNRs) with an NRA in both on and off conditions. WMC was measured with a reading span task. Twenty experienced CI users participated in this study. RESULTS With increasing SNR and speech recognition performance, (a) the peak pupil dilation decreased by only a small amount, (b) the peak latency decreased, and (c) the release of dilation after the sentences increased. The NRA had no effect on speech recognition in noise or on the peak or latency values of the pupil response but caused less release of dilation after the end of the sentences. A lower reading span score was associated with higher peak pupil dilation but was not associated with peak latency, release of dilation, or speech recognition in noise. CONCLUSIONS In CI users, speech perception is effortful, even at higher speech recognition scores and high SNRs, indicating that CI users are in a chronic state of increased effort in communication situations. The application of a clinically used NRA did not improve speech perception, nor did it reduce listening effort. Participants with a relatively low WMC exerted relatively more listening effort but did not have better speech reception thresholds in noise.
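To clarify the three pupil measures named above, here is a minimal sketch of how they might be extracted from one baseline-corrected trial trace; the definitions and sampling rate are assumptions, as studies differ in their exact conventions.

```python
import numpy as np

def pupil_metrics(trace, fs):
    peak_idx = int(np.argmax(trace))
    peak_dilation = float(trace[peak_idx])      # maximum baseline-corrected size
    peak_latency = peak_idx / fs                # seconds from stimulus onset
    release = peak_dilation - float(trace[-1])  # decline from peak by trial end
    return peak_dilation, peak_latency, release

fs = 60.0                                       # assumed eye-tracker rate (Hz)
t = np.arange(0, 6, 1 / fs)
trace = 0.4 * np.exp(-((t - 2.5) ** 2) / 0.8)   # synthetic pupil response
print(pupil_metrics(trace, fs))
```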
Collapse
Affiliation(s)
- Gertjan Dingemanse
- Department of Otorhinolaryngology, Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
| | - André Goedegebure
- Department of Otorhinolaryngology, Head and Neck Surgery, Erasmus University Medical Center, Rotterdam, the Netherlands
| |
Collapse
|
29
|
Nittrouer S, Lowenstein JH. Beyond Recognition: Visual Contributions to Verbal Working Memory. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:253-273. [PMID: 34788554 PMCID: PMC9150746 DOI: 10.1044/2021_jslhr-21-00177] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/26/2021] [Revised: 07/02/2021] [Accepted: 08/26/2021] [Indexed: 06/13/2023]
Abstract
PURPOSE It is well recognized that adding the visual to the acoustic speech signal improves recognition when the acoustic signal is degraded, but how that visual signal affects postrecognition processes is not so well understood. This study was designed to further elucidate the relationships among auditory and visual codes in working memory, a postrecognition process. DESIGN In a main experiment, 80 young adults with normal hearing were tested using an immediate serial recall paradigm. Three types of signals were presented (unprocessed speech, vocoded speech, and environmental sounds) in three conditions (audio-only, audio-video with dynamic visual signals, and audio-picture with static visual signals). Three dependent measures were analyzed: (a) magnitude of the recency effect, (b) overall recall accuracy, and (c) response times, to assess cognitive effort. In a follow-up experiment, 30 young adults with normal hearing were tested largely using the same procedures, but with a slight change in order of stimulus presentation. RESULTS The main experiment produced three major findings: (a) unprocessed speech evoked a recency effect of consistent magnitude across conditions; vocoded speech evoked a recency effect of similar magnitude to unprocessed speech only with dynamic visual (lipread) signals; environmental sounds never showed a recency effect. (b) Dynamic and static visual signals enhanced overall recall accuracy to a similar extent, and this enhancement was greater for vocoded speech and environmental sounds than for unprocessed speech. (c) All visual signals reduced cognitive load, except for dynamic visual signals with environmental sounds. The follow-up experiment revealed that dynamic visual (lipread) signals exerted their effect on the vocoded stimuli by enhancing phonological quality. CONCLUSIONS Acoustic and visual signals can combine to enhance working memory operations, but the source of these effects differs for phonological and nonphonological signals. Nonetheless, visual information can support better postrecognition processes for patients with hearing loss.
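As a concrete reading of "magnitude of the recency effect," the sketch below contrasts recall accuracy at the final serial positions with the mid-list positions; the window sizes and numbers are invented for illustration.

```python
import numpy as np

# Hypothetical recall accuracy by serial position in a six-item list.
serial_accuracy = np.array([0.92, 0.85, 0.70, 0.62, 0.68, 0.81])

# Recency effect: final positions relative to the mid-list baseline.
recency = serial_accuracy[-2:].mean() - serial_accuracy[2:-2].mean()
print(f"recency effect: {recency:+.2f}")
```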
Collapse
Affiliation(s)
- Susan Nittrouer
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
| | - Joanna H. Lowenstein
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
| |
Collapse
|
30
|
Lewis JH, Castellanos I, Moberly AC. The Impact of Neurocognitive Skills on Recognition of Spectrally Degraded Sentences. J Am Acad Audiol 2021; 32:528-536. [PMID: 34965599 DOI: 10.1055/s-0041-1732438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
BACKGROUND Recent models theorize that neurocognitive resources are deployed differently during speech recognition depending on task demands, such as the severity of degradation of the signal or its modality (auditory vs. audiovisual [AV]). This concept is particularly relevant to the adult cochlear implant (CI) population, considering the large amount of variability among CI users in their spectro-temporal processing abilities. However, disentangling the effects of individual differences in spectro-temporal processing and neurocognitive skills on speech recognition in clinical populations of adult CI users is challenging. Thus, this study investigated the relationship between neurocognitive functions and recognition of spectrally degraded speech in a group of young adult normal-hearing (NH) listeners. PURPOSE The aim of this study was to manipulate the degree of spectral degradation and the modality of speech presented to young adult NH listeners to determine whether the deployment of neurocognitive skills would be affected. RESEARCH DESIGN Correlational study design. STUDY SAMPLE Twenty-one NH college students. DATA COLLECTION AND ANALYSIS Participants listened to sentences in three spectral-degradation conditions: no degradation (clear sentences); moderate degradation (8-channel noise-vocoded); and high degradation (4-channel noise-vocoded). Thirty sentences were presented in an auditory-only (A-only) modality and in an AV modality. Visual assessments from the National Institutes of Health Toolbox Cognition Battery were completed to evaluate working memory, inhibition-concentration, cognitive flexibility, and processing speed. Analyses of variance compared speech recognition performance across spectral-degradation conditions and modalities. Bivariate correlation analyses were performed between speech recognition performance and the neurocognitive skills in the various test conditions. RESULTS Main effects on sentence recognition were found for degree of degradation (p < 0.001) and modality (p < 0.001). Inhibition-concentration skills correlated moderately (r = 0.45, p = 0.02) with recognition scores for sentences that were moderately degraded in the A-only condition. No correlations were found between neurocognitive scores and AV speech recognition scores. CONCLUSIONS Inhibition-concentration skills are deployed differentially during sentence recognition, depending on the level of signal degradation. Additional studies will be required to examine these relations in actual clinical populations such as adult CI users.
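Since noise-vocoded sentences appear throughout this literature, a minimal channel vocoder sketch may help: the signal is split into log-spaced bands, each band's envelope is extracted and used to modulate band-limited noise, and the channels are summed. Filter orders, band edges, and the envelope cutoff are illustrative, not the study's settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def noise_vocode(x, fs, n_channels, lo=80.0, hi=6000.0):
    edges = np.geomspace(lo, hi, n_channels + 1)     # log-spaced band edges
    env_b, env_a = butter(2, 30.0 / (fs / 2))        # 30 Hz envelope smoother
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros_like(x)
    for k in range(n_channels):
        b, a = butter(3, edges[k:k + 2] / (fs / 2), btype="bandpass")
        band = filtfilt(b, a, x)                     # analysis band
        env = filtfilt(env_b, env_a, np.abs(band))   # rectify + low-pass
        out += filtfilt(b, a, noise) * np.clip(env, 0.0, None)
    return out / (np.max(np.abs(out)) + 1e-9)

fs = 16000
x = np.random.default_rng(1).standard_normal(fs)     # stand-in for 1 s of speech
eight_ch = noise_vocode(x, fs, n_channels=8)         # moderate degradation
four_ch = noise_vocode(x, fs, n_channels=4)          # high degradation
```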
Collapse
Affiliation(s)
- Jessica H Lewis
- Department of Otolaryngology - Head and Neck Surgery; The Ohio State University Wexner Medical Center, Columbus, Ohio.,Department of Speech and Hearing Science; The Ohio State University, Columbus, Ohio
| | - Irina Castellanos
- Department of Otolaryngology - Head and Neck Surgery; The Ohio State University Wexner Medical Center, Columbus, Ohio
| | - Aaron C Moberly
- Department of Otolaryngology - Head and Neck Surgery; The Ohio State University Wexner Medical Center, Columbus, Ohio
| |
Collapse
|
31
|
Keerstock S, Smiljanic R. Reading aloud in clear speech reduces sentence recognition memory and recall for native and non-native talkers. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:3387. [PMID: 34852619 DOI: 10.1121/10.0006732] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Accepted: 09/23/2021] [Indexed: 06/13/2023]
Abstract
Speaking style variation plays a role in how listeners remember speech. Compared to conversational sentences, clearly spoken sentences were better recalled and identified as previously heard by native and non-native listeners. The present study investigated whether speaking style variation also plays a role in how talkers remember speech that they produce. Although distinctive forms of production (e.g., singing, speaking loudly) can enhance memory, the cognitive and articulatory efforts required to plan and produce listener-oriented, hyper-articulated clear speech could detrimentally affect encoding and subsequent retrieval. Native and non-native English talkers' memories for sentences that they read aloud in clear and conversational speaking styles were assessed through a sentence recognition memory task (experiment 1; N = 90) and a recall task (experiment 2; N = 75). The results showed enhanced recognition memory and recall for sentences read aloud conversationally rather than clearly for both talker groups. In line with the "effortfulness" hypothesis, producing clear speech may increase the processing load, diverting resources from memory encoding. Implications for the relationship between speech perception and production are discussed.
Collapse
Affiliation(s)
- Sandie Keerstock
- Department of Psychological Sciences, University of Missouri, 124 Psychology Building, 200 South 7th Street, Columbia, Missouri 65211, USA
| | - Rajka Smiljanic
- Department of Linguistics, University of Texas at Austin, 305 East 23rd Street STOP B5100, Austin, Texas 78712, USA
| |
Collapse
|
32
|
DeRoy Milvae K, Kuchinsky SE, Stakhovskaya OA, Goupell MJ. Dichotic listening performance and effort as a function of spectral resolution and interaural symmetry. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:920. [PMID: 34470337 PMCID: PMC8346288 DOI: 10.1121/10.0005653] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 06/30/2021] [Accepted: 06/30/2021] [Indexed: 06/13/2023]
Abstract
One potential benefit of bilateral cochlear implants is reduced listening effort in speech-on-speech masking situations. However, the symmetry of the input across ears, possibly related to spectral resolution, could impact binaural benefits. Fifteen young adults with normal hearing performed digit recall with target and interfering digits presented to separate ears and attention directed to the target ear. Recall accuracy and pupil size over time (used as an index of listening effort) were measured for unprocessed, 16-channel vocoded, and 4-channel vocoded digits. Recall accuracy was significantly lower for dichotic (with interfering digits) than for monotic listening. Dichotic recall accuracy was highest when the target was less degraded and the interferer was more degraded. With matched target and interferer spectral resolution, pupil dilation was lower with more degradation. Pupil dilation grew more shallowly over time when the interferer had more degradation. Overall, interferer spectral resolution more strongly affected listening effort than target spectral resolution. These results suggest that interfering speech both lowers performance and increases listening effort, and that the relative spectral resolution of target and interferer affect the listening experience. Ignoring a clearer interferer is more effortful.
Collapse
Affiliation(s)
- Kristina DeRoy Milvae
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
| | - Olga A Stakhovskaya
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| | - Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
| |
Collapse
|
33
|
Johnson BW. Re-wiring the brain: Attention and cognition in the age of artificial hearing. Clin Neurophysiol 2021; 132:2257-2258. [PMID: 34238676 DOI: 10.1016/j.clinph.2021.06.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/03/2021] [Revised: 06/09/2021] [Accepted: 06/10/2021] [Indexed: 11/29/2022]
|
34
|
Perkins E, Dietrich MS, Manzoor N, O'Malley M, Bennett M, Rivas A, Haynes D, Labadie R, Gifford R. Further Evidence for the Expansion of Adult Cochlear Implant Candidacy Criteria. Otol Neurotol 2021; 42:815-823. [PMID: 33606469 PMCID: PMC8627184 DOI: 10.1097/mao.0000000000003068] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
OBJECTIVE 1) To complete a follow-up investigation of postoperative outcomes for adult cochlear implant (CI) recipients scoring ≥30% Consonant-Nucleus-Consonant (CNC) preoperatively, and 2) to describe the postoperative performance trajectory for this group of higher performing patients. STUDY DESIGN Retrospective chart review. SETTING Tertiary referral center. PATIENTS One hundred four (105 ears) postlingually deafened adults who scored ≥30% CNC word recognition in the ear to be implanted preoperatively. INTERVENTIONS One hundred four subjects underwent cochlear implantation. MAIN OUTCOME MEASURES Pre- and postoperative CNC word scores and AzBio sentences in quiet and noise in the ear to be implanted as well as the bilateral-aided condition pre-CI and at 1, 3, 6, and 12 months post-CI. RESULTS Statistically significant improvement was demonstrated for CNC and AzBio sentences in quiet and noise for the CI alone and bilateral listening conditions. Most improvement was demonstrated by 6-months postoperatively (p < 0.001) with the exception of AzBio sentences in noise demonstrating improvement within 3 months (p < 0.001). For patients with preop CNC scores up to 40% (n = 57), all recipients demonstrated either equivocal (n = 17) or statistically significant improvement (n = 40) for CNC word recognition in the CI-alone condition and none demonstrated a significant decrement in the bilateral condition. For patients with preop CNC scores >40% (n = 47, 48 ears), 89.3% (42 patients) demonstrated either equivocal (n = 24, 50%) or statistically significant improvement (n = 19, 39.6%) for CNC word recognition in the CI-only condition and none demonstrated a significant decrement in the bilateral condition. CONCLUSIONS CI candidates with preoperative CNC word scores higher than conventional CI recipients derive statistically significant benefit from cochlear implantation for both the CI ear and best-aided condition. These data provide further support for the expansion of adult CI candidacy up to at least 40% CNC word recognition preoperatively with consideration given to further expansion possibly up to 60%.
Collapse
Affiliation(s)
- Elizabeth Perkins
- Department of Otolaryngology/Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
| | | | | | | | | | | | | | | | | |
Collapse
|
35
|
Lunner T, Alickovic E, Graversen C, Ng EHN, Wendt D, Keidser G. Three New Outcome Measures That Tap Into Cognitive Processes Required for Real-Life Communication. Ear Hear 2021; 41 Suppl 1:39S-47S. [PMID: 33105258 PMCID: PMC7676869 DOI: 10.1097/aud.0000000000000941] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Accepted: 07/11/2020] [Indexed: 11/29/2022]
Abstract
To increase the ecological validity of outcomes from laboratory evaluations of hearing and hearing devices, it is desirable to introduce more realistic outcome measures in the laboratory. This article presents and discusses three outcome measures that have been designed to go beyond traditional speech-in-noise measures to better reflect realistic everyday challenges. The outcome measures reviewed are: the Sentence-final Word Identification and Recall (SWIR) test that measures working memory performance while listening to speech in noise at ceiling performance; a neural tracking method that produces a quantitative measure of selective speech attention in noise; and pupillometry that measures changes in pupil dilation to assess listening effort while listening to speech in noise. According to evaluation data, the SWIR test provides a sensitive measure in situations where speech perception performance might be unaffected. Similarly, pupil dilation has also shown sensitivity in situations where traditional speech-in-noise measures are insensitive. Changes in working memory capacity and effort mobilization were found at positive signal-to-noise ratios (SNR), that is, at SNRs that might reflect everyday situations. Using stimulus reconstruction, it has been demonstrated that neural tracking is a robust method at determining to what degree a listener is attending to a specific talker in a typical cocktail party situation. Using both established and commercially available noise reduction schemes, data have further shown that all three measures are sensitive to variation in SNR. In summary, the new outcome measures seem suitable for testing hearing and hearing devices under more realistic and demanding everyday conditions than traditional speech-in-noise tests.
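The stimulus-reconstruction idea mentioned above can be sketched as a ridge-regularized backward model: time-lagged EEG is mapped to the speech envelope, and the attended talker is taken to be the one whose envelope correlates best with the reconstruction. Shapes, lags, and the penalty are assumptions; for brevity the decoder is trained and scored in-sample, whereas real pipelines use held-out data.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel as regressors."""
    T, C = eeg.shape
    X = np.zeros((T, C * n_lags))
    for L in range(n_lags):
        X[L:, L * C:(L + 1) * C] = eeg[:T - L]
    return X

rng = np.random.default_rng(0)
eeg = rng.standard_normal((5000, 32))      # samples x channels (synthetic)
env_a = rng.standard_normal(5000)          # attended talker's envelope
env_b = rng.standard_normal(5000)          # ignored talker's envelope

X = lagged(eeg, n_lags=16)
lam = 1e2                                  # ridge penalty (assumed)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ env_a)
recon = X @ w                              # reconstructed envelope
for name, env in (("A", env_a), ("B", env_b)):
    print(name, np.corrcoef(recon, env)[0, 1])  # higher r -> attended talker
```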
Collapse
Affiliation(s)
- Thomas Lunner
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Department of Electrical Engineering, Division Automatic Control, Linköping University, Linköping, Sweden
- Department of Health Technology, Hearing Systems, Technical University of Denmark, Lyngby, Denmark
| | - Emina Alickovic
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Electrical Engineering, Division Automatic Control, Linköping University, Linköping, Sweden
| | | | - Elaine Hoi Ning Ng
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Oticon A/S, Kongebakken, Denmark
| | - Dorothea Wendt
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Health Technology, Hearing Systems, Technical University of Denmark, Lyngby, Denmark
| | - Gitte Keidser
- Eriksholm Research Centre, Oticon A/S, Snekkersten, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
| |
Collapse
|
36
|
Hülsmeier D, Buhl M, Wardenga N, Warzybok A, Schädler MR, Kollmeier B. Inference of the distortion component of hearing impairment from speech recognition by predicting the effect of the attenuation component. Int J Audiol 2021; 61:205-219. [PMID: 34081564 DOI: 10.1080/14992027.2021.1929515] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
OBJECTIVE A model-based determination of the average supra-threshold ("distortion") component of hearing impairment which limits the benefit of hearing aid amplification. DESIGN Published speech recognition thresholds (SRTs) were predicted with the framework for auditory discrimination experiments (FADE), which simulates recognition processes, the speech intelligibility index (SII), which exploits frequency-dependent signal-to-noise ratios (SNR), and a modified SII with a hearing-loss-dependent band importance function (PAV). Their attenuation-component-based prediction errors were interpreted as estimates of the distortion component. STUDY SAMPLE Unaided SRTs of 315 hearing-impaired ears measured with the German matrix sentence test in stationary noise. RESULTS Overall, the models showed root-mean-square errors (RMSEs) of 7 dB, but for steeply sloping hearing loss FADE and PAV were more accurate (RMSE = 9 dB) than the SII (RMSE = 23 dB). Prediction errors of FADE and PAV increased linearly with the average hearing loss. The consideration of the distortion component estimate significantly improved the accuracy of FADE's and PAV's predictions. CONCLUSIONS The supra-threshold distortion component-estimated by prediction errors of FADE and PAV-seems to increase with the average hearing loss. Accounting for a distortion component improves the model predictions and implies a need for effective compensation strategies for supra-threshold processing deficits with increasing audibility loss.
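For orientation, the SII named above is, in simplified form, a band-importance-weighted audibility sum: each band's SNR is clipped to plus or minus 15 dB and mapped to [0, 1], then weighted. The sketch below uses a flat importance function and invented SNRs; it omits the level-distortion and threshold corrections of the full ANSI S3.5 procedure.

```python
import numpy as np

def sii(snr_db, bif):
    # Band audibility: SNR clipped to [-15, +15] dB, mapped to [0, 1].
    audibility = np.clip((np.asarray(snr_db, float) + 15.0) / 30.0, 0.0, 1.0)
    return float(np.sum(np.asarray(bif) * audibility))

bif = np.full(6, 1 / 6)              # flat band-importance function (assumed)
snr_db = [-20, -5, 0, 5, 12, 20]     # hypothetical per-band SNRs
print(f"SII = {sii(snr_db, bif):.2f}")
```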
Collapse
Affiliation(s)
- David Hülsmeier
- Medical Physics, CvO University Oldenburg, Oldenburg, Germany.,Cluster of Excellence Hearing4all, Oldenburg, Germany
| | - Mareike Buhl
- Medical Physics, CvO University Oldenburg, Oldenburg, Germany.,Cluster of Excellence Hearing4all, Oldenburg, Germany
| | - Nina Wardenga
- Cluster of Excellence Hearing4all, Oldenburg, Germany.,Department of Otolaryngology, Hannover Medical School, Hannover, Germany
| | - Anna Warzybok
- Medical Physics, CvO University Oldenburg, Oldenburg, Germany.,Cluster of Excellence Hearing4all, Oldenburg, Germany
| | - Marc René Schädler
- Medical Physics, CvO University Oldenburg, Oldenburg, Germany.,Cluster of Excellence Hearing4all, Oldenburg, Germany
| | - Birger Kollmeier
- Medical Physics, CvO University Oldenburg, Oldenburg, Germany.,Cluster of Excellence Hearing4all, Oldenburg, Germany
| |
Collapse
|
37
|
Xu J, Cox RM. Interactions between Cognition and Hearing Aid Compression Release Time: Effects of Linguistic Context of Speech Test Materials on Speech-in-Noise Performance. Audiol Res 2021; 11:129-149. [PMID: 33918202 PMCID: PMC8167752 DOI: 10.3390/audiolres11020013] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2020] [Revised: 02/22/2021] [Accepted: 03/30/2021] [Indexed: 11/28/2022] Open
Abstract
Recent research has established a connection between hearing aid (HA) users' cognition and speech recognition performance with short and long compression release times (RTs). Contradictory findings prevent researchers from using cognition to predict RT prescription. We hypothesized that the linguistic context of speech recognition test materials was one of the factors that accounted for the inconsistency. The present study was designed to examine the relationship between HA users' cognition and their aided speech recognition performance with short and long RTs using materials with various linguistic contexts. Thirty-four older HA users' cognitive abilities were quantified using a reading span test. They were fitted with behind-the-ear style HAs with adjustable RT settings. Three speech recognition tests were used: the Words-in-Noise (WIN) test, the American Four Alternative Auditory Feature (AFAAF) test, and the Bamford-Kowal-Bench Speech-in-Noise (BKB-SIN) test. The results showed that HA users with high cognitive abilities performed better on the AFAAF and the BKB-SIN than those with low cognitive abilities when using the short RT. None of the speech recognition tests produced significantly different performance between the two RTs for either cognitive group. These findings did not support our hypothesis. The results suggest that cognition might not be important in prescribing RT.
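Because the study hinges on compression release time, a toy level detector may clarify what RT controls: how quickly the estimated level (and hence the compressor's gain) recovers after the input drops. All constants are illustrative, not any hearing aid's parameters.

```python
import numpy as np

def smoothed_level(x, fs, attack_ms, release_ms):
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))   # fast-rise coefficient
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))  # slow-decay coefficient
    level = np.zeros_like(x)
    for n in range(1, len(x)):
        coeff = att if abs(x[n]) > level[n - 1] else rel
        level[n] = coeff * level[n - 1] + (1.0 - coeff) * abs(x[n])
    return level

fs = 16000
x = np.concatenate([np.ones(fs // 2), 0.05 * np.ones(fs // 2)])  # loud -> soft
short_rt = smoothed_level(x, fs, attack_ms=5, release_ms=40)
long_rt = smoothed_level(x, fs, attack_ms=5, release_ms=2000)
# With a long RT the detector stays high after the loud segment, so gain
# recovers slowly; with a short RT it recovers almost immediately.
```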
Collapse
|
38
|
Pinkl J, Cash EK, Evans TC, Neijman T, Hamilton JW, Ferguson SD, Martinez JL, Rumley J, Hunter LL, Moore DR, Stewart HJ. Short-Term Pediatric Acclimatization to Adaptive Hearing Aid Technology. Am J Audiol 2021; 30:76-92. [PMID: 33351648 DOI: 10.1044/2020_aja-20-00073] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022] Open
Abstract
Purpose This exploratory study assessed the perceptual, cognitive, and academic learning effects of an adaptive, integrated directionality and noise reduction hearing aid program in pediatric users. Method Fifteen pediatric hearing aid users (6-12 years old) received new bilateral, individually fitted Oticon Opn hearing aids programmed with OpenSound Navigator (OSN) processing. Word recognition in noise, sentence repetition in quiet, nonword repetition, vocabulary learning, selective attention, executive function, memory, and reading and mathematical abilities were measured within 1 week of the initial hearing aid fitting and 2 months post-fit. Caregivers completed questionnaires assessing their child's listening and communication abilities prior to study enrollment and after 2 months of using the study hearing aids. Results Caregiver reports indicated significant improvements in speech and sound perception, spatial sound awareness, and the ability to participate in conversations. However, there was no positive change in performance on any of the measured skills. Mathematical scores significantly declined after 2 months. Conclusions OSN provided a perceived improvement in functional benefit, compared with the children's previous hearing aids, as reported by caregivers. However, there was no positive change in listening skills, cognition, or academic success after 2 months of using OSN. Findings may have been affected by reporter bias, the limited sample size, and a relatively short trial period. This study took place during the summer, when participants were out of school, which may have influenced the decline in mathematical scores. The results support further exploration with age- and audiogram-matched controls, larger sample sizes, and longer test-retest intervals that correspond to the academic school year.
Collapse
Affiliation(s)
- Joseph Pinkl
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
| | - Erin K. Cash
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Neuroscience, College of Arts and Sciences, University of Cincinnati, OH
| | - Tommy C. Evans
- Division of Audiology, Cincinnati Children's Hospital Medical Center, OH
| | - Timothy Neijman
- Division of Audiology, Cincinnati Children's Hospital Medical Center, OH
| | - Jean W. Hamilton
- Division of Audiology, Cincinnati Children's Hospital Medical Center, OH
| | - Sarah D. Ferguson
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
| | - Jasmin L. Martinez
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
| | - Johanne Rumley
- Oticon A/S, Kongebakken, Denmark
- Department of Nordic Studies and Linguistics, Faculty of Humanities, University of Copenhagen, Denmark
| | - Lisa L. Hunter
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
| | - David R. Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Otolaryngology, College of Medicine, University of Cincinnati, OH
- Manchester Centre for Audiology and Deafness, The University of Manchester, United Kingdom
| | - Hannah J. Stewart
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Division of Psychology and Language Sciences, University College London, United Kingdom
| |
Collapse
|
39
|
Carolan PJ, Heinrich A, Munro KJ, Millman RE. Financial reward has differential effects on behavioural and self-report measures of listening effort. Int J Audiol 2021; 60:900-910. [PMID: 33630718 DOI: 10.1080/14992027.2021.1884907] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
OBJECTIVES To investigate the effects of listening demands and motivation on listening effort (LE) in a novel speech recognition task. DESIGN We manipulated listening demands and motivation using vocoded speech and financial reward, respectively, and measured task performance (correct response rate) and indices of LE (response times (RTs), subjective ratings of LE, and likelihood of giving up). Effects of inter-individual differences in cognitive skills and personality on task performance and LE were also assessed within the context of the Cognitive Energetics Theory (CET). STUDY SAMPLE Twenty-four participants with normal hearing (age range: 19-33 years; 6 male). RESULTS High listening demands decreased the correct response rate and increased RTs, self-rated LE, and self-rated likelihood of giving up. High financial reward increased subjective LE ratings only. Mixed-effects modelling showed small fixed effects of competitiveness on LE measured using RTs. Small fixed effects were also found for cognitive skills (lexical decision RTs and backwards digit span) on LE measured using RTs and correct response rate, respectively. CONCLUSIONS The effects of listening demands on LE in the speech recognition task aligned with CET, whereas predictions regarding the influence of motivation, cognitive skills, and personality were only partially supported.
Collapse
Affiliation(s)
- Peter J Carolan
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK.,NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
| | - Antje Heinrich
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK.,NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
| | - Kevin J Munro
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK.,NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
| | - Rebecca E Millman
- Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK.,NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
| |
Collapse
|
40
|
Koohi N, Thomas-Black G, Giunti P, Bamiou DE. Auditory Phenotypic Variability in Friedreich's Ataxia Patients. THE CEREBELLUM 2021; 20:497-508. [PMID: 33599954 PMCID: PMC8360871 DOI: 10.1007/s12311-021-01236-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Accepted: 01/25/2021] [Indexed: 11/28/2022]
Abstract
Auditory neural impairment is a key clinical feature of Friedreich's Ataxia (FRDA). We aimed to characterize the phenotypic spectrum of auditory impairment in FRDA, in order to facilitate its early identification and timely management in FRDA patients, and to explore the relationship between the severity of auditory impairment and genetic variables (the expansion sizes of the GAA trinucleotide repeats, GAA1 and GAA2) while controlling for disease duration, disease severity, and cognitive status. Twenty-seven patients with genetically confirmed FRDA underwent baseline audiological assessment (pure-tone audiometry, otoacoustic emissions, auditory brainstem response). Twenty of these patients had an additional psychophysical auditory processing evaluation, including an auditory temporal processing test (the Gaps-in-Noise test) and a binaural speech perception test that assesses spatial processing (the Listening in Spatialized Noise-Sentences Test). Auditory spatial and auditory temporal processing ability were significantly associated with the repeat length of GAA1. Patients with GAA1 greater than 500 repeats had more severe auditory temporal and spatial processing deficits, leading to poorer speech perception. Furthermore, spatial processing ability was strongly correlated with the Montreal Cognitive Assessment (MoCA) score. To our knowledge, this is the first study to demonstrate an association between genotype and auditory spatial processing phenotype in patients with FRDA. Auditory temporal processing, neural sound conduction, spatial processing, and speech perception were more severely affected in patients with GAA1 greater than 500 repeats. The results of our study may indicate that auditory deprivation plays a role in the development of mild cognitive impairment in FRDA patients.
Collapse
Affiliation(s)
- Nehzat Koohi
- The Ear Institute, University College London, London, WC1X 8EE, UK; Neuro-otology Department, University College London Hospitals, London, WC1E 6DG, UK; Department of Clinical and Movement Neurosciences, Institute of Neurology, University College London, London, WC1N 3BG, UK
- Gilbert Thomas-Black
- Department of Clinical and Movement Neurosciences, Institute of Neurology, University College London, London, WC1N 3BG, UK; Ataxia Centre, National Hospital for Neurology and Neurosurgery, University College London Hospitals, London, WC1N 3BG, UK
- Paola Giunti
- Department of Clinical and Movement Neurosciences, Institute of Neurology, University College London, London, WC1N 3BG, UK; Ataxia Centre, National Hospital for Neurology and Neurosurgery, University College London Hospitals, London, WC1N 3BG, UK
- Doris-Eva Bamiou
- The Ear Institute, University College London, London, WC1X 8EE, UK; Neuro-otology Department, University College London Hospitals, London, WC1E 6DG, UK; Biomedical Research Centre, National Institute for Health Research, London, WC1E 6DG, UK
41
Rönnberg J, Holmer E, Rudner M. Cognitive Hearing Science: Three Memory Systems, Two Approaches, and the Ease of Language Understanding Model. J Speech Lang Hear Res 2021;64:359-370. [PMID: 33439747] [DOI: 10.1044/2020_jslhr-20-00007]
Abstract
Purpose: The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) needed to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input, in the form of rapid automatic multimodal binding of phonology, and multimodal phonological and lexical representations in SLTM. However, if there is a match between the rapid automatic multimodal binding of phonology output and SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results: In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in a relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system; this is discussed further. Conclusions: Given the literature on ELTM decline as a precursor of dementia and the fact that hearing loss substantially increases the risk for Alzheimer's disease over time, it is possible that lowered ELTM due to hearing loss and disuse is part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.
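The ELU axiom summarized above is at heart a branching rule: matched input is processed implicitly, while mismatched input recruits explicit WM. The toy function below is our own schematic rendering of that logic, not the authors' code; `rambpho_match` and `wm_capacity` are hypothetical names.

```python
# Schematic sketch of the ELU match/mismatch logic as summarized above.
# This illustrates the idea only; it is not the authors' implementation.
def elu_route(rambpho_match: bool, wm_capacity: float) -> str:
    """Route a spoken input through implicit or explicit processing.

    rambpho_match: does rapid automatic multimodal binding of phonology
        match phonological/lexical representations in SLTM?
    wm_capacity: available explicit working-memory resources (0..1).
    """
    if rambpho_match:
        # Match: processing continues rapidly and implicitly
        return "implicit: understood rapidly"
    if wm_capacity > 0.5:
        # Mismatch: explicit WM performs reconstructive postdiction
        return "explicit: reconstructed via WM (slow, effortful)"
    return "failure: insufficient WM resources to reconstruct"

print(elu_route(rambpho_match=False, wm_capacity=0.8))
```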
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Sweden
42
Icht M, Mama Y, Taitelbaum-Swead R. Visual and Auditory Verbal Memory in Older Adults: Comparing Postlingually Deaf Cochlear Implant Users to Normal-Hearing Controls. J Speech Lang Hear Res 2020;63:3865-3876. [PMID: 33049151] [DOI: 10.1044/2020_jslhr-20-00170]
Abstract
Purpose: The aim of this study was to test whether a group of older postlingually deafened cochlear implant users (OCIs) use verbal memory strategies similar to those used by older normal-hearing adults (ONHs). Verbal memory functioning was assessed in the visual and auditory modalities separately, enabling us to eliminate possible modality-based biases. Method: Participants performed two separate visual and auditory verbal memory tasks. In each task, the visually or aurally presented study words were learned by vocal production (saying aloud) or by no production (reading silently or listening), followed by a free recall test. Twenty-seven older adults (> 60 years) participated (OCI = 13, ONH = 14), all of whom demonstrated intact cognitive abilities. All OCIs showed good open-set speech perception results in quiet. Results: Both ONHs and OCIs showed production benefits (higher recall rates for vocalized than nonvocalized words) in the visual and auditory tasks. The ONHs showed similar production benefits in the visual and auditory tasks. The OCIs demonstrated a smaller production effect in the auditory task. Conclusions: These results may indicate that different modality-specific memory strategies were used by the ONHs and the OCIs. The group differences in memory performance suggest that, even when deafness occurs after the completion of language acquisition, the reduced and distorted external auditory stimulation leads to a deterioration in the phonological representation of sounds. Possibly, this deterioration leads to a less efficient auditory long-term verbal memory.
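The production effect reported above is simply the recall difference between vocalized and nonvocalized words, computed per group and modality. A minimal sketch follows; the recall proportions are invented to mirror the qualitative pattern, not taken from the study.

```python
# Illustrative computation of the production benefit described above.
# Recall proportions are invented placeholders, not the study's results.
recall = {
    # (group, modality): (vocalized, nonvocalized) mean recall proportions
    ("ONH", "visual"):   (0.42, 0.30),
    ("ONH", "auditory"): (0.41, 0.29),
    ("OCI", "visual"):   (0.40, 0.28),
    ("OCI", "auditory"): (0.35, 0.31),  # smaller auditory benefit for OCIs
}

for (group, modality), (voc, novoc) in recall.items():
    print(f"{group} {modality:8s} production benefit = {voc - novoc:+.2f}")
```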
Affiliation(s)
- Michal Icht
- Department of Communication Disorders, Ariel University, Israel
- Yaniv Mama
- Department of Behavioral Sciences and Psychology, Ariel University, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
43
Helfer KS, Jesse A. Hearing and speech processing in midlife. Hear Res 2020;402:108097. [PMID: 33706999] [DOI: 10.1016/j.heares.2020.108097]
Abstract
Middle-aged adults often report a decline in their ability to understand speech in adverse listening situations. However, there has been relatively little research devoted to identifying how early aging affects speech processing, as the majority of investigations into senescent changes in speech understanding compare performance in groups of young and older adults. This paper provides an overview of research on hearing and speech perception in middle-aged adults. Topics covered include both objective and subjective (self-perceived) hearing and speech understanding, listening effort, and audiovisual speech perception. This review ends with justification for future research needed to define the nature, consequences, and remediation of hearing problems in middle-aged adults.
Affiliation(s)
- Karen S Helfer
- Department of Communication Disorders, University of Massachusetts Amherst, 358 N. Pleasant St., Amherst, MA 01003, USA
- Alexandra Jesse
- Department of Psychological and Brain Sciences, University of Massachusetts Amherst, 135 Hicks Way, Amherst, MA 01003, USA
44
Brännström KJ, Lyberg-Åhlander V, Sahlén B. Perceived listening effort in children with hearing loss: listening to a dysphonic voice in quiet and in noise. Logoped Phoniatr Vocol 2020;47:1-9. [PMID: 32696707] [DOI: 10.1080/14015439.2020.1794030]
Abstract
AIM: The present study investigates the effect of signal degradation on perceived listening effort in children with hearing loss listening in a simulated classroom context. It also examines the associations between perceived listening effort, passage comprehension performance, and executive functioning. METHODS: Twenty-four children (aged 6;3-13;0 [years;months]) with hearing impairment using cochlear implants (CIs) and/or hearing aids (HAs) participated. The children rated perceived listening effort after completing an auditory passage comprehension task. All children performed the task in four different listening conditions: listening to a typical (i.e., normal) voice in quiet, to a dysphonic voice in quiet, to a typical voice in background noise, and to a dysphonic voice in background noise. In addition, the children completed a task assessing executive function. RESULTS: Both voice quality and background noise increased perceived listening effort in children with CIs/HAs, but no interaction with executive function was seen. CONCLUSION: Since increased listening effort seems to be a consequence of increased cognitive resource spending, it is likely that fewer resources will be available for these children not only to comprehend but also to learn in challenging listening environments such as classrooms.
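The 2 x 2 within-subject design above (voice quality x background noise) maps onto a standard repeated-measures ANOVA on the effort ratings. A minimal sketch, assuming one rating per child per condition and using fabricated data, is shown below.

```python
# Repeated-measures ANOVA sketch for the 2x2 design above
# (voice: typical/dysphonic x noise: quiet/noise). Ratings are fabricated.
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(1)
rows = []
for child in range(24):
    for voice in ("typical", "dysphonic"):
        for noise in ("quiet", "noise"):
            rating = (3.0
                      + 0.8 * (voice == "dysphonic")  # voice-quality effect
                      + 1.0 * (noise == "noise")      # background-noise effect
                      + rng.normal(0, 0.5))
            rows.append({"id": child, "voice": voice,
                         "noise": noise, "rating": rating})

df = pd.DataFrame(rows)
print(AnovaRM(df, depvar="rating", subject="id",
              within=["voice", "noise"]).fit())
```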
Affiliation(s)
- K Jonas Brännström
- Department of Clinical Sciences Lund, Logopedics, Phoniatrics and Audiology, Lund University, Lund, Sweden
- Viveka Lyberg-Åhlander
- Department of Clinical Sciences Lund, Logopedics, Phoniatrics and Audiology, Lund University, Lund, Sweden; Speech Language Pathology, Faculty of Arts, Psychology and Theology, Åbo Akademi University, Turku, Finland
- Birgitta Sahlén
- Department of Clinical Sciences Lund, Logopedics, Phoniatrics and Audiology, Lund University, Lund, Sweden
45
Oosthuizen I, Picou EM, Pottas L, Myburgh HC, Swanepoel DW. Listening Effort in Native and Nonnative English-Speaking Children Using Low Linguistic Single- and Dual-Task Paradigms. J Speech Lang Hear Res 2020;63:1979-1989. [PMID: 32479740] [DOI: 10.1044/2020_jslhr-19-00330]
Abstract
Purpose: It is not clear whether behavioral indices of listening effort are sensitive to changes in signal-to-noise ratio (SNR) for young children (7-12 years old) from multilingual backgrounds. The purpose of this study was to explore the effects of SNR on listening effort in multilingual school-aged children (native English, nonnative English) as measured with a single- and a dual-task paradigm with low-linguistic speech stimuli (digits). The study also aimed to explore age effects on digit triplet recognition and response times (RTs). Method: Sixty children with normal hearing participated, 30 per language group. Participants completed single and dual tasks in three listening conditions (quiet, -10 dB SNR, and -15 dB SNR). Speech stimuli for both tasks were digit triplets. Verbal RTs were the listening effort measure during the single-task paradigm. A visual monitoring task was the secondary task during the dual-task paradigm. Results: Significant effects of SNR on RTs were evident during both single- and dual-task paradigms. As expected, language background did not affect the pattern of RTs. The data also demonstrate a maturation effect for triplet recognition during both tasks and for RTs during the dual task only. Conclusions: Both single- and dual-task paradigms were sensitive to changes in SNR for school-aged children between 7 and 12 years of age. Language background (English as a native language vs. English as a nonnative language) had no significant effect on triplet recognition or RTs, demonstrating the practical utility of low-linguistic stimuli for testing children from multilingual backgrounds.
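To make the single-task measure above concrete, the sketch below summarizes verbal response times per listening condition for one hypothetical child; slower responses at poorer SNRs are read as greater listening effort. All values are invented.

```python
# Sketch: summarizing single-task verbal response times (ms) per listening
# condition, as in the paradigm above. All trial values are invented.
import statistics

verbal_rt_ms = {
    "quiet":  [612, 640, 598, 655, 630],
    "-10 dB": [705, 688, 720, 699, 731],
    "-15 dB": [812, 790, 845, 828, 801],
}

for condition, rts in verbal_rt_ms.items():
    print(f"{condition:>6}: mean verbal RT = {statistics.mean(rts):.0f} ms")
# Longer RTs at poorer SNRs index greater listening effort.
```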
Affiliation(s)
- Ilze Oosthuizen
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Lidia Pottas
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- De Wet Swanepoel
- Department of Speech-Language Pathology and Audiology, University of Pretoria, South Africa
- Ear Science Institute Australia, Subiaco
46
Pals C, Sarampalis A, Beynon A, Stainsby T, Başkent D. Effect of Spectral Channels on Speech Recognition, Comprehension, and Listening Effort in Cochlear-Implant Users. Trends Hear 2020;24:2331216520904617. [PMID: 32189585] [PMCID: PMC7082863] [DOI: 10.1177/2331216520904617]
Abstract
In favorable listening conditions, cochlear-implant (CI) users can reach high speech recognition scores with as few as seven active electrodes. Here, we hypothesized that even when speech recognition is high, additional spectral channels may still benefit other aspects of speech perception, such as comprehension and listening effort. Twenty-five adult, postlingually deafened CI users, selected from two Dutch implant centers for high clinical word identification scores, participated in two experiments. Experimental conditions were created by varying the number of active electrodes of the CIs between 7 and 15. In Experiment 1, response times (RTs) on the secondary task in a dual-task paradigm were used as an indirect measure of listening effort, and in Experiment 2, sentence verification task (SVT) accuracy and RTs were used to measure speech comprehension and listening effort, respectively. Speech recognition was near ceiling for all conditions tested, as intended by the design. However, the dual-task paradigm failed to show the hypothesized decrease in RTs with increasing spectral channels. The SVT did show a systematic improvement in both speech comprehension and response speed across all conditions. In conclusion, the SVT revealed additional benefits in both speech comprehension and listening effort for conditions in which high speech recognition was already achieved. Hence, adding spectral channels may provide benefits for CI listeners that may not be reflected by traditional speech tests. The SVT is a relatively simple task that is easy to implement and may therefore be a good candidate for identifying such additional benefits in research or clinical settings.
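One simple way to express the systematic SVT improvement reported above is a linear trend of response time against the number of active electrodes. The sketch below fits that slope; the RT values are invented for illustration, and a per-listener fit is assumed.

```python
# Sketch: linear trend of sentence-verification RT against the number of
# active electrodes (7-15), per the finding above. RT values are invented.
import numpy as np

channels = np.array([7, 9, 11, 13, 15])
svt_rt_ms = np.array([1480.0, 1455.0, 1440.0, 1410.0, 1395.0])

# Least-squares slope: change in RT (ms) per added spectral channel
slope, intercept = np.polyfit(channels, svt_rt_ms, deg=1)
print(f"{slope:.1f} ms per added channel (negative = faster verification)")
```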
Affiliation(s)
- Carina Pals
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Andy Beynon
- Department of Otorhinolaryngology, Head and Neck Surgery, Hearing and Implants, Radboud University Medical Centre, Nijmegen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
47
Di Berardino F, Cortinovis I, Gasbarre A, Filipponi E, Milani S, Zanetti D. Verbal task and motor responses (VTMR) in an adult hearing screening programme. Acta Otorhinolaryngol Ital 2020;40:57-63. [PMID: 30933172] [PMCID: PMC7147545] [DOI: 10.14639/0392-100x-1929]
Abstract
The aim of this study was to test the efficacy of Verbal Tasks and Motor Responses (VTMR) speech audiometry in providing a rapid and true-to-life assessment of hearing-related problems as a single test in adult hearing screening programmes. The VTMR consists of the manual execution of five verbal commands presented to patients at different signal intensity levels with fixed masking noise; it provides a score of speech comprehension in noise. This was a prospective observational study in 916 individuals out of 1,300 volunteers (605 males, 695 females, aged 56 ± 17 years) who completed adult hearing screening. VTMR speech audiometry was performed at signal-to-noise (S/N) ratios of 0 dB and –10 dB. The difference between normal and hearing-impaired subjects in terms of all the considered variables was statistically significant for pure-tone audiometry and VTMR testing. VTMR testing at a S/N ratio of –10 dB with a cut-off of four correctly executed tasks was a rapid, feasible and efficient means of differentiating between normal and hearing-impaired subjects. When used to screen hearing-impaired subjects with participation restrictions, the sensitivity and specificity of the VTMR test rose to 90% and 62%, respectively. The VTMR test in noise could thus be used as a stand-alone tool to screen for hearing impairment and self-perceived participation restriction together.
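The screening cut-off above (at least four of five tasks correctly executed at S/N –10 dB) implies an ordinary confusion-matrix calculation. The sketch below shows it; the counts are hypothetical, chosen only so the output reproduces the 90%/62% figures quoted in the abstract.

```python
# Sketch: sensitivity/specificity for the VTMR cut-off described above
# (>= 4 of 5 tasks correct = "pass"). Counts are hypothetical.
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)  # impaired subjects correctly flagged
    specificity = tn / (tn + fp)  # normal subjects correctly passed
    return sensitivity, specificity

se, sp = sens_spec(tp=90, fn=10, tn=62, fp=38)
print(f"sensitivity = {se:.0%}, specificity = {sp:.0%}")  # 90%, 62%
```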
48
Taitelbaum-Swead R, Kozol Z, Fostick L. Listening Effort Among Adults With and Without Attention-Deficit/Hyperactivity Disorder. J Speech Lang Hear Res 2019;62:4554-4563. [PMID: 31747524] [DOI: 10.1044/2019_jslhr-h-19-0134]
Abstract
Purpose: Few studies have assessed listening effort (LE), the cognitive resources required to perceive speech, among populations with intact hearing but reduced availability of cognitive resources. Attention-deficit/hyperactivity disorder (ADHD) is theorized to restrict attention span, possibly making speech perception in adverse conditions more challenging. This study examined the effect of ADHD on LE among adults using a behavioral dual-task paradigm (DTP). Method: Thirty-nine normal-hearing adults (aged 21-27 years) participated: 19 with ADHD (ADHD group) and 20 without ADHD (control group). Baseline group differences were measured in visual and auditory attention as well as speech perception. LE using the DTP was assessed as the performance difference on a visual-motor task versus a simultaneous auditory and visual-motor task. Results: Group differences in attention were confirmed by differences in visual attention (larger reaction times between congruent and incongruent conditions) and auditory attention (lower accuracy in the presence of distractors) in the ADHD group compared to the controls. LE was greater among the ADHD group than the control group. Nevertheless, no group differences were found in speech perception. Conclusions: LE is increased among those with ADHD. As a DTP assumes a limited cognitive capacity from which attentional resources are allocated, LE among those with ADHD may be increased because higher-level cognitive processes are more taxed in this population. Studies on LE using a DTP should take into consideration mechanisms of selective and divided attention. Among young adults who need to continuously process great volumes of auditory and visual information, much more effort may be expended by those with ADHD than by those without it. As a result, those with ADHD may be more prone to fatigue and irritability, similar to people engaged in more outwardly demanding tasks.
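The DTP index above reduces to a dual-task cost: the drop in primary (visual-motor) task performance once the concurrent auditory task is added. A minimal sketch with invented scores follows.

```python
# Sketch of the dual-task cost used as the listening-effort index above:
# primary-task performance alone vs. with a concurrent auditory task.
# Scores are invented placeholders, not the study's data.
def dual_task_cost(single_score: float, dual_score: float) -> float:
    """Proportional performance drop; larger values = more listening effort."""
    return (single_score - dual_score) / single_score

adhd_cost = dual_task_cost(single_score=95.0, dual_score=78.0)
ctrl_cost = dual_task_cost(single_score=94.0, dual_score=86.0)
print(f"ADHD cost = {adhd_cost:.1%}, control cost = {ctrl_cost:.1%}")
```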
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Zvi Kozol
- Department of Physiotherapy, Ariel University, Israel
- Leah Fostick
- Department of Communication Disorders, Ariel University, Israel
49
Ng EHN, Rönnberg J. Hearing aid experience and background noise affect the robust relationship between working memory and speech recognition in noise. Int J Audiol 2019;59:208-218. [PMID: 31809220] [DOI: 10.1080/14992027.2019.1677951]
Abstract
Objective: The aim of this study was to examine how background noise and hearing aid experience affect the robust relationship between working memory and speech recognition. Design: Matrix sentences were used to measure speech recognition in noise. Three measures of working memory were administered. Study sample: 148 participants with at least 2 years of hearing aid experience. Results: A stronger overall correlation between working memory and speech recognition performance was found in four-talker babble than in a stationary noise background. This correlation was significantly weaker in the participants with the most hearing aid experience than in those with the least experience when the background noise was stationary. In the four-talker babble, however, no significant difference in the strength of the correlations was found between users with different levels of experience. Conclusion: In general, more explicit working memory processing is invoked when listening in multi-talker babble. The matching processes (cf. the Ease of Language Understanding model, ELU) were more efficient for experienced than for less experienced users when perceiving speech. This study extends the existing ELU model by suggesting that mismatch may also lead to the establishment of new phonological representations in long-term memory.
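Contrasting correlation strength between the experience subgroups, as above, is commonly done with Fisher's r-to-z transformation for independent correlations. The sketch below implements that test; the r values and sample sizes are illustrative, not the study's statistics.

```python
# Sketch: Fisher r-to-z comparison of two independent correlations, e.g.,
# the WM-speech link in the most vs. least experienced hearing aid users.
# r values and ns are illustrative, not the study's statistics.
import math
from scipy.stats import norm

def compare_correlations(r1: float, n1: int, r2: float, n2: int) -> float:
    """Two-sided p-value for H0: the two population correlations are equal."""
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transform
    se = math.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))  # SE of z1 - z2
    return 2.0 * norm.sf(abs((z1 - z2) / se))

print(f"p = {compare_correlations(r1=0.55, n1=74, r2=0.30, n2=74):.3f}")
```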
Affiliation(s)
- Elaine Hoi Ning Ng
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
50
Abstract
OBJECTIVES: The present study investigated presentation modality differences in lexical encoding and working memory representations of spoken words in older, hearing-impaired adults. Two experiments were undertaken: a memory-scanning experiment and a stimulus gating experiment. The primary objective of Experiment 1 was to determine whether memory encoding, retrieval, and scanning speeds differ for easily identifiable words presented in auditory-visual (AV), auditory-only (AO), and visual-only (VO) modalities. The primary objective of Experiment 2 was to determine whether the memory encoding and retrieval speed differences observed in Experiment 1 could be attributed to the early availability of AV speech information compared with the AO or VO conditions. DESIGN: Twenty-six adults over age 60 years with bilateral mild to moderate sensorineural hearing loss participated in Experiment 1, and 24 adults who took part in Experiment 1 participated in Experiment 2. An item recognition reaction-time paradigm (memory scanning) was used in Experiment 1 to measure (1) lexical encoding speed, that is, the speed at which an easily identifiable word was recognized and placed into working memory, and (2) retrieval speed, that is, the speed at which words were retrieved from memory and compared with similarly encoded words (memory scanning) presented in AV, AO, and VO modalities. Experiment 2 used a time-gated word identification task to test whether the time course of stimulus information available to participants predicted the modality-related memory encoding and retrieval speed results of Experiment 1. RESULTS: The results of Experiment 1 revealed significant differences among the modalities with respect to both memory encoding and retrieval speed, with AV fastest and VO slowest. These differences motivated an examination of the time course of stimulus information available as a function of modality. Results from Experiment 2 indicated that the encoding and retrieval speed advantages for AV and AO words compared with VO words were mostly driven by the time course of stimulus information. The AV advantage in encoding and retrieval speeds is likely due to a combination of robust stimulus information available to the listener earlier in time and lower attentional demands compared with AO or VO encoding and retrieval. CONCLUSIONS: Significant modality differences in lexical encoding and memory retrieval speeds were observed across modalities. The memory-scanning speed advantage observed for AV compared with AO or VO modalities was strongly related to the time course of stimulus information. In contrast, lexical encoding and retrieval speeds for VO words could not be explained by the time course of stimulus information alone. Working memory processes in the VO modality may be affected by greater attentional demands and less available information compared with the AV and AO modalities. Overall, these results support the hypothesis that the presentation modality of speech inputs (AV, AO, or VO) affects how older adult listeners with hearing loss encode, remember, and retrieve what they hear.
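Item-recognition (memory-scanning) paradigms of this kind are classically analyzed by regressing recognition RT on memory set size, reading the intercept as encoding/comparison overhead and the slope as scanning speed per item. The sketch below shows that decomposition per modality; the RT values are invented, and whether the study used exactly this decomposition is an assumption of the illustration.

```python
# Sketch: Sternberg-style decomposition of recognition RT into an encoding
# intercept and a memory-scanning slope, fit per presentation modality.
# All RT values are invented placeholders.
import numpy as np

set_sizes = np.array([2, 4, 6])
mean_rt_ms = {                 # mean recognition RTs per set size (ms)
    "AV": np.array([640.0, 700.0, 762.0]),
    "AO": np.array([675.0, 742.0, 808.0]),
    "VO": np.array([730.0, 815.0, 905.0]),
}

for modality, rts in mean_rt_ms.items():
    slope, intercept = np.polyfit(set_sizes, rts, deg=1)
    print(f"{modality}: intercept ~ {intercept:.0f} ms (encoding), "
          f"slope ~ {slope:.0f} ms/item (scanning)")
```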