1. Hood KE, Hurley LM. Listening to your partner: serotonin increases male responsiveness to female vocal signals in mice. Front Hum Neurosci 2024;17:1304653. PMID: 38328678; PMCID: PMC10847236; DOI: 10.3389/fnhum.2023.1304653.
Abstract
The context surrounding vocal communication can have a strong influence on how vocal signals are perceived. The serotonergic system is well-positioned for modulating the perception of communication signals according to context, because serotonergic neurons are responsive to social context, influence social behavior, and innervate auditory regions. Animals like lab mice can be excellent models for exploring how serotonin affects the primary neural systems involved in vocal perception, including within central auditory regions like the inferior colliculus (IC). Within the IC, serotonergic activity reflects not only the presence of a conspecific, but also the valence of a given social interaction. To assess whether serotonin can influence the perception of vocal signals in male mice, we manipulated serotonin systemically with an injection of its precursor 5-HTP, and locally in the IC with an infusion of fenfluramine, a serotonin reuptake blocker. Mice then participated in a behavioral assay in which males suppress their ultrasonic vocalizations (USVs) in response to the playback of female broadband vocalizations (BBVs), used in defensive aggression by females when interacting with males. Both 5-HTP and fenfluramine increased the suppression of USVs during BBV playback relative to controls. 5-HTP additionally decreased the baseline production of a specific type of USV and male investigation, but neither drug treatment strongly affected male digging or grooming. These findings show that serotonin modifies behavioral responses to vocal signals in mice, in part by acting in auditory brain regions, and suggest that mouse vocal behavior can serve as a useful model for exploring the mechanisms of context in human communication.
Affiliations
- Kayleigh E. Hood: Hurley Lab, Department of Biology, Indiana University, Bloomington, IN, United States; Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, IN, United States
- Laura M. Hurley: Hurley Lab, Department of Biology, Indiana University, Bloomington, IN, United States; Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, IN, United States
2. Yi H, Choudhury M, Hicks C. A transparent mask and clear speech benefit speech intelligibility in individuals with hearing loss. J Speech Lang Hear Res 2023;66:4558-4574. PMID: 37788660; DOI: 10.1044/2023_jslhr-22-00636.
Abstract
PURPOSE The purpose of the study was to investigate the impacts of a surgical mask and a transparent mask on audio-only and audiovisual speech intelligibility in noise (i.e., 0 dB signal-to-noise ratio) in individuals with mild-to-profound hearing loss. The study also examined whether individuals with hearing loss can benefit from using a transparent mask and clear speech for speech understanding in noise. METHOD Thirty-one individuals with hearing loss (aged 22 to 74 years) completed keyword identification tasks to measure face-masked speech intelligibility in noise. A mixed-effects logistic regression model was used to examine the effects of face mask (no mask, transparent mask, surgical mask), presentation mode (audio only, audiovisual), speaking style (conversational, clear), noise type (speech-shaped noise [SSN], four-talker babble [4-T babble]), hearing group (mild hearing loss [MHL] and greater-than-mild hearing loss [GHL]), and their interactions on the binary accuracy of keyword identification. RESULTS In the audio-only mode, the GHL group showed reduced speech intelligibility regardless of the other factors, whereas the MHL group showed a larger decrease in speech intelligibility for the transparent mask than for the surgical mask. The use of a transparent mask was advantageous for both hearing loss groups. Clear speech remediated the detrimental effects of face masks on speech intelligibility in noise. Both groups tended to perform better in SSN than in 4-T babble. CONCLUSIONS The findings indicate that both transparent and surgical masks negatively affect speech understanding in noise for individuals with hearing loss. Using a transparent mask together with clear speech could be a potential solution for improving speech intelligibility when communicating with face masks in noise.
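As an illustration of the analysis described above, the sketch below fits a simplified keyword-accuracy model. It omits the random effect for participant that the study's full mixed-effects logistic regression would include, and the file and column names are hypothetical.

```python
# Hedged sketch: a simplified (fixed-effects only) version of the keyword-accuracy
# model described in the abstract. The random effect for participant is omitted for
# brevity, and all file/column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("keyword_trials.csv")  # hypothetical file: one row per keyword trial

# correct: 1 if the keyword was identified, 0 otherwise
model = smf.logit(
    "correct ~ C(mask) * C(mode) * C(style) + C(noise) + C(group)",
    data=df,
).fit()
print(model.summary())
```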
Affiliations
- Hoyoung Yi: Department of Speech, Language, and Hearing Sciences, Texas Tech University Health Sciences Center, Lubbock
- Moumita Choudhury: Department of Speech, Language, and Hearing Sciences, Texas Tech University Health Sciences Center, Lubbock
- Candace Hicks: Department of Speech, Language, and Hearing Sciences, Texas Tech University Health Sciences Center, Lubbock
3. Moradi S, Engdahl B, Johannessen A, Selbæk G, Aarhus L, Haanes GG. Hearing loss, hearing aid use, and subjective memory complaints: results of the HUNT study in Norway. Front Neurol 2023;13:1094270. PMID: 36712418; PMCID: PMC9875071; DOI: 10.3389/fneur.2022.1094270.
Abstract
Objective This study aimed to explore the association between hearing loss severity, hearing aid use, and subjective memory complaints in a large cross-sectional study in Norway. Methods Data were drawn from the fourth wave of the Trøndelag Health Study (HUNT4 Hearing, 2017-2019). The hearing threshold was defined as the pure-tone average of 0.5, 1, 2, and 4 kHz in the better ear. The participants were divided into five groups: normal hearing or slight/mild/moderate/severe hearing loss. Self-reported short-term and long-term subjective memory complaints were measured with the nine-item Meta-Memory Questionnaire (MMQ). The sample included 20,092 individuals (11,675 women, mean age 58.3 years) who completed both the hearing and MMQ tasks. A multivariate analysis of variance (adjusted for the covariates age, sex, education, and health confounders) was used to evaluate the association between hearing status and hearing aid use (in the hearing-impaired groups) and long-term and short-term subjective memory complaints. Results A multivariate analysis of variance, followed by univariate ANOVAs and pairwise comparisons, showed that hearing loss was associated only with more long-term subjective memory complaints and not with short-term subjective memory complaints. In the hearing-impaired groups, the univariate main effect of hearing aid use was observed only for subjective long-term memory complaints and not for subjective short-term memory complaints. Similarly, the univariate interaction of hearing aid use and hearing status was significant for subjective long-term memory complaints but not for subjective short-term memory complaints. Pairwise comparisons, however, revealed no significant differences between hearing loss groups with respect to subjective long-term complaints. Conclusion This cross-sectional study indicates an association between hearing loss and subjective long-term memory complaints but not subjective short-term memory complaints. In addition, an interaction between hearing status and hearing aid use for subjective long-term memory complaints was observed in the hearing-impaired groups, which calls for future research to examine the effects of hearing aid use on different memory systems.
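The sketch below illustrates the hearing classification described above: the pure-tone average (PTA) over 0.5, 1, 2, and 4 kHz in the better ear, banded into five groups. The group cutoffs are illustrative assumptions, as the abstract does not state them.

```python
# Hedged sketch of better-ear PTA and five-way hearing grouping; cutoffs are assumptions.
import numpy as np

def better_ear_pta(left_db, right_db):
    """left_db/right_db: thresholds in dB HL at 0.5, 1, 2 and 4 kHz for each ear."""
    return min(np.mean(left_db), np.mean(right_db))

def hearing_group(pta_db):
    if pta_db <= 15:
        return "normal"
    elif pta_db <= 25:
        return "slight"
    elif pta_db <= 40:
        return "mild"
    elif pta_db <= 60:
        return "moderate"
    else:
        return "severe"

print(hearing_group(better_ear_pta([20, 25, 30, 40], [15, 20, 25, 35])))  # -> "slight"
```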
Affiliations
- Shahram Moradi: Department of Health, Social and Welfare Studies, Faculty of Health and Social Sciences, University of South-Eastern Norway, Porsgrunn, Norway
- Bo Engdahl: Department of Physical Health and Ageing, Norwegian Institute of Public Health, Oslo, Norway
- Aud Johannessen: Department of Health, Social and Welfare Studies, Faculty of Health and Social Sciences, University of South-Eastern Norway, Horten, Norway; Norwegian National Centre for Ageing and Health, Vestfold Hospital Trust, Tønsberg, Norway
- Geir Selbæk: Norwegian National Centre for Ageing and Health, Vestfold Hospital Trust, Tønsberg, Norway; Faculty of Medicine, Institute of Clinical Medicine, University of Oslo, Oslo, Norway; Geriatric Department, Oslo University Hospital, Oslo, Norway
- Lisa Aarhus: Department of Occupational Medicine and Epidemiology, National Institute of Occupational Health, Oslo, Norway; Medical Department, Diakonhjemmet Hospital, Oslo, Norway
- Gro Gade Haanes: Department of Nursing and Health Sciences, Faculty of Health and Social Sciences, University of South-Eastern Norway, Horten, Norway
4. Homman L, Danielsson H, Rönnberg J. A structural equation mediation model captures the predictions amongst the parameters of the ease of language understanding model. Front Psychol 2023;14:1015227. PMID: 36936006; PMCID: PMC10020708; DOI: 10.3389/fpsyg.2023.1015227.
Abstract
Objective The aim of the present study was to assess the validity of the Ease of Language Understanding (ELU) model through a statistical assessment of the relationships among its main parameters: processing speed, phonology, working memory (WM), and the signal-to-noise ratio (dB SNR) required for a given speech recognition threshold (SRT), in a sample of hearing aid users from the n200 database. Methods Hearing aid users were assessed on several hearing and cognitive tests. Latent structural equation models (SEMs) were applied to investigate the relationships between the main parameters of the ELU model while controlling for age and pure-tone average (PTA). Several competing models were assessed. Results Analyses indicated that a mediating SEM was the best fit for the data. The results showed that (i) phonology independently predicted the speech recognition threshold in both easy and adverse listening conditions, (ii) WM was not predictive of dB SNR for a given SRT in the easier listening conditions, and (iii) processing speed was predictive of dB SNR for a given SRT, mediated via WM, in the more adverse conditions. Conclusion The results were in line with the predictions of the ELU model: (i) phonology contributed to dB SNR for a given SRT in all listening conditions, (ii) WM is only invoked when listening conditions are adverse, (iii) better WM capacity aids the understanding of what has been said in adverse listening conditions, and (iv) the results highlight the importance of processing speed when listening conditions are adverse and WM is activated.
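As a rough illustration of the mediation structure tested above (processing speed acting on SRT via WM in adverse conditions), the sketch below uses simple regressions on observed composite scores rather than latent SEM. All file and column names are hypothetical, and this is not the study's actual estimation method.

```python
# Hedged sketch: a regression-based stand-in for the latent mediation path
# speed -> WM -> SRT-in-noise, controlling for age and PTA. Illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("n200_composites.csv")  # hypothetical: speed, wm, phonology, srt_adverse, age, pta

a_path = smf.ols("wm ~ speed + age + pta", data=df).fit()             # speed -> WM
b_path = smf.ols("srt_adverse ~ wm + speed + phonology + age + pta",  # WM -> SRT, controlling speed
                 data=df).fit()

indirect_effect = a_path.params["speed"] * b_path.params["wm"]
print(f"indirect (mediated) effect of speed via WM: {indirect_effect:.3f}")
```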
Affiliations
- Lina Homman: Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Henrik Danielsson: Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg: Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
5. Sandhya, Vinay, Manchaiah V. Perception of incongruent audiovisual speech: distribution of modality-specific responses. Am J Audiol 2021;30:968-979. PMID: 34499528; DOI: 10.1044/2021_aja-20-00213.
Abstract
PURPOSE Multimodal sensory integration in audiovisual (AV) speech perception is a naturally occurring phenomenon. Modality-specific responses, such as auditory-left, auditory-right, and visual responses to dichotic incongruent AV speech stimuli, help in understanding AV speech processing through each input modality. The distribution of activity in the frontal motor areas involved in speech production has been shown to correlate with how subjects perceive the same syllable differently or perceive different syllables. This study investigated the distribution of modality-specific responses to dichotic incongruent AV speech stimuli by simultaneously presenting consonant-vowel (CV) syllables with different places of articulation to the participant's left and right ears and visually. DESIGN A dichotic experimental design was adopted. Six stop CV syllables /pa/, /ta/, /ka/, /ba/, /da/, and /ga/ were assembled to create dichotic incongruent AV speech material. Participants included 40 native speakers of Norwegian (20 women, mean age = 22.6 years, SD = 2.43 years; 20 men, mean age = 23.7 years, SD = 2.08 years). RESULTS Under dichotic listening conditions, velar CV syllables yielded the highest scores in the respective ears, which may be explained by the stimulus dominance of velar consonants shown in previous studies. However, this study, with dichotic auditory stimuli accompanied by an incongruent video segment, demonstrated that a visually distinct video segment possibly draws attention to itself in some participants, thereby reducing overall recognition of the dominant syllable. Furthermore, the findings suggest the possibility of shorter response times to incongruent AV stimuli in females compared with males. CONCLUSION The identification of the left-audio, right-audio, and visual segments in dichotic incongruent AV stimuli depends on the place of articulation, stimulus dominance, and voice onset time of the CV syllables.
Affiliations
- Sandhya: Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- Vinay: Department of Neuromedicine and Movement Science, Faculty of Medicine and Health Sciences, Norwegian University of Science and Technology, Trondheim, Norway
- V. Manchaiah: Department of Speech and Hearing Sciences, Lamar University, Beaumont, TX
6. Age-related hearing loss influences functional connectivity of auditory cortex for the McGurk illusion. Cortex 2020;129:266-280. PMID: 32535378; DOI: 10.1016/j.cortex.2020.04.022.
Abstract
Age-related hearing loss affects hearing at high frequencies and is associated with difficulties in understanding speech. Increased audio-visual integration has recently been found in age-related hearing impairment, but the brain mechanisms that contribute to this effect are unclear. We used functional magnetic resonance imaging in elderly subjects with normal hearing and with mild to moderate uncompensated hearing loss. Audio-visual integration was studied using the McGurk task, in which an illusory fused percept can occur when incongruent auditory and visual syllables are presented. The paradigm included unisensory stimuli (auditory only, visual only), congruent audio-visual stimuli, and incongruent (McGurk) audio-visual stimuli. An illusory percept was reported in over 60% of incongruent trials. These McGurk illusion rates were equal in both groups of elderly subjects and correlated positively with speech-in-noise perception and daily listening effort. Normal-hearing participants showed an increased neural response in left pre- and postcentral gyri and right middle frontal gyrus for incongruent (McGurk) stimuli compared with congruent audio-visual stimuli. Activation patterns, however, did not differ between groups. Task-modulated functional connectivity did differ between groups: when comparing incongruent (McGurk) with congruent audio-visual stimuli, hard-of-hearing participants showed increased connectivity from auditory cortex to visual, parietal, and frontal areas compared with normal-hearing participants. These results suggest that changes in the functional connectivity of auditory cortex, rather than activation strength, during processing of audio-visual McGurk stimuli accompany age-related hearing loss.
7. Moradi S, Lidestam B, Ng EHN, Danielsson H, Rönnberg J. Perceptual doping: an audiovisual facilitation effect on auditory speech processing, from phonetic feature extraction to sentence identification in noise. Ear Hear 2019;40:312-327. PMID: 29870521; PMCID: PMC6400397; DOI: 10.1097/AUD.0000000000000616.
Abstract
OBJECTIVE We have previously shown that the gain provided by prior audiovisual (AV) speech exposure for subsequent auditory (A) sentence identification in noise is larger than that provided by prior A speech exposure. We have called this effect "perceptual doping." Specifically, prior AV speech processing dopes (recalibrates) the phonological and lexical maps in the mental lexicon, which facilitates subsequent phonological and lexical access in the A modality, separately from other learning and priming effects. In this article, we use data from the n200 study and aim to replicate and extend the perceptual doping effect using two different A and two different AV speech tasks and a larger sample than in our previous studies. DESIGN The participants were 200 hearing aid users with bilateral, symmetrical, mild-to-severe sensorineural hearing loss. Four speech tasks in the n200 study were presented in both A and AV modalities (gated consonants, gated vowels, vowel duration discrimination, and sentence identification in noise). The modality order of speech presentation was counterbalanced across participants: half of the participants completed the A modality first and the AV modality second (A1-AV2), and the other half completed the AV modality first and the A modality second (AV1-A2). Based on the perceptual doping hypothesis, which assumes that the gain of prior AV exposure for subsequent processing of speech stimuli is larger than that of prior A exposure, we predicted that the mean A scores in the AV1-A2 modality order would be better than the mean A scores in the A1-AV2 modality order. We therefore expected a significant difference in the identification of A speech stimuli between the two modality orders (A1 versus A2). As prior A exposure provides a smaller gain than AV exposure, we also predicted that the difference in AV speech scores between the two modality orders (AV1 versus AV2) might not be statistically significant. RESULTS In the gated consonant and vowel tasks and the vowel duration discrimination task, there were significant differences in A performance between the two modality orders. The participants' mean A performance was better in the AV1-A2 than in the A1-AV2 modality order (i.e., after AV processing). In terms of mean AV performance, no significant difference was observed between the two orders. In the sentence identification in noise task, a significant difference in A identification between the two orders was observed (A1 versus A2), and a significant difference in AV identification between the two orders was also observed (AV1 versus AV2). The latter finding was most likely due to a procedural learning effect arising from the greater complexity of the sentence materials, or to a combination of procedural learning and perceptual learning due to the presentation of sentence materials in noisy conditions. CONCLUSIONS The findings of the present study support the perceptual doping hypothesis, as prior AV relative to A speech exposure resulted in a larger gain for the subsequent processing of speech stimuli. For complex speech stimuli presented in degraded listening conditions, a procedural learning effect (or a combination of procedural learning and perceptual learning effects) also facilitated the identification of speech stimuli, irrespective of whether the prior modality was A or AV.
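The sketch below illustrates the key modality-order comparison described above: auditory-only scores for participants tested A-first (A1) versus those tested after AV exposure (A2). The file and column names, and the use of Welch's t-test for the between-participant comparison, are assumptions for illustration.

```python
# Hedged sketch of the A1 vs A2 comparison (between counterbalanced groups).
import pandas as pd
from scipy import stats

df = pd.read_csv("n200_gated_consonants.csv")  # hypothetical: one row per participant

a_first  = df.loc[df["order"] == "A1-AV2", "a_score"]   # A tested before AV
a_second = df.loc[df["order"] == "AV1-A2", "a_score"]   # A tested after AV ("doped")

t, p = stats.ttest_ind(a_second, a_first, equal_var=False)
print(f"A2 vs A1: t = {t:.2f}, p = {p:.4f}, mean gain = {a_second.mean() - a_first.mean():.2f}")
```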
Affiliations
- Shahram Moradi: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lidestam: Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Elaine Hoi Ning Ng: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Oticon A/S, Smørum, Denmark
- Henrik Danielsson: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Jerker Rönnberg: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
8. Moberly AC, Vasil KJ, Ray C. Visual reliance during speech recognition in cochlear implant users and candidates. J Am Acad Audiol 2019;31:30-39. PMID: 31210633; DOI: 10.3766/jaaa.18049.
Abstract
BACKGROUND Adults with cochlear implants (CIs) are believed to rely more heavily on visual cues during speech recognition tasks than their normal-hearing peers. However, the relationship between auditory and visual reliance during audiovisual (AV) speech recognition is unclear and may depend on an individual's auditory proficiency, duration of hearing loss (HL), age, and other factors. PURPOSE The primary purpose of this study was to examine whether visual reliance during AV speech recognition depends on auditory function for adult CI candidates (CICs) and adult experienced CI users (ECIs). STUDY SAMPLE Participants included 44 ECIs and 23 CICs. All participants were postlingually deafened and had met clinical candidacy requirements for cochlear implantation. DATA COLLECTION AND ANALYSIS Participants completed City University of New York sentence recognition testing. Three separate lists of twelve sentences each were presented: the first in the auditory-only (A-only) condition, the second in the visual-only (V-only) condition, and the third in the combined AV condition. Each participant's "visual enhancement" (VE) and "auditory enhancement" (AE) were computed (i.e., the benefit to AV speech recognition of adding visual or auditory information, respectively, relative to what could potentially be gained). The relative reliance on VE versus AE was also computed as a VE/AE ratio. RESULTS The VE/AE ratio was inversely predicted by A-only performance. Visual reliance was not significantly different between ECIs and CICs. Duration of HL and age did not account for additional variance in the VE/AE ratio. CONCLUSIONS A shift toward visual reliance may be driven by poor auditory performance in ECIs and CICs. The restoration of auditory input through a CI does not necessarily facilitate a shift back toward auditory reliance. Findings suggest that individual listeners with HL may rely on both auditory and visual information during AV speech recognition, to varying degrees based on their own performance and experience, to optimize communication performance in real-world listening situations.
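The sketch below computes the enhancement measures described above using the commonly reported normalized formulas (benefit relative to the remaining headroom); the paper's exact computation may differ slightly, and the example scores are hypothetical.

```python
# Hedged sketch of visual/auditory enhancement and the VE/AE ratio.
# Scores are proportions correct in [0, 1].
def visual_enhancement(av, a_only):
    """Gain from adding vision, relative to what could still be gained over A-only."""
    return (av - a_only) / (1.0 - a_only)

def auditory_enhancement(av, v_only):
    """Gain from adding audition, relative to what could still be gained over V-only."""
    return (av - v_only) / (1.0 - v_only)

a, v, av = 0.40, 0.25, 0.70          # hypothetical proportions correct
ve, ae = visual_enhancement(av, a), auditory_enhancement(av, v)
print(f"VE = {ve:.2f}, AE = {ae:.2f}, VE/AE ratio = {ve / ae:.2f}")
```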
Affiliations
- Aaron C. Moberly: Department of Otolaryngology - Head & Neck Surgery, The Ohio State University, Columbus, OH
- Kara J. Vasil: Department of Otolaryngology - Head & Neck Surgery, The Ohio State University, Columbus, OH
- Christin Ray: Department of Otolaryngology - Head & Neck Surgery, The Ohio State University, Columbus, OH
9. Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019;58:247-261. DOI: 10.1080/14992027.2018.1551631.
Affiliations
- Jerker Rönnberg: Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Emil Holmer: Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mary Rudner: Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
10. Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory integration in cochlear implant recipients. Ear Hear 2018;38:521-538. PMID: 28399064; DOI: 10.1097/AUD.0000000000000435.
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception in general, and for speech intelligibility specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributed to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and the specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest-performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as a role for length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Affiliations
- Ryan A. Stevenson: Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Audiology and Speech Pathology Center, Walter Reed National Military Medical Center; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
11. Does hearing aid use affect audiovisual integration in mild hearing impairment? Exp Brain Res 2018;236:1161-1179. PMID: 29453491; DOI: 10.1007/s00221-018-5206-6.
Abstract
There is converging evidence for altered audiovisual integration abilities in hearing-impaired individuals and in those with profound hearing loss who are provided with cochlear implants, compared with normal-hearing adults. Still, little is known about the effects of hearing aid use on audiovisual integration in mild hearing loss, although this is one of the most prevalent conditions in the elderly and yet often remains untreated in its early stages. This study investigated differences in the strength of audiovisual integration between elderly hearing aid users and those with the same degree of mild hearing loss who were not using hearing aids (the non-users) by measuring their susceptibility to the sound-induced flash illusion. We also explored the corresponding window of integration by varying the stimulus onset asynchronies. To examine general group differences that are not attributable to specific hearing aid settings but rather reflect overall changes associated with habitual hearing aid use, the group of hearing aid users was tested unaided while individually controlling for audibility. We found greater audiovisual integration together with a wider window of integration in hearing aid users compared with their age-matched untreated peers. Signal detection analyses indicate that a change in perceptual sensitivity as well as in bias may underlie the observed effects. Our results and comparisons with other studies in normal-hearing older adults suggest that both mild hearing impairment and hearing aid use affect audiovisual integration, possibly in the sense that hearing aid use may reverse the effects of hearing loss on audiovisual integration. We suggest that these findings may be particularly important for auditory rehabilitation and call for a longitudinal study.
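The sketch below illustrates the kind of signal detection analysis mentioned above: perceptual sensitivity (d') and response bias (criterion c) derived from hit and false-alarm counts. The specific hit/false-alarm definitions for the flash-illusion task are assumptions.

```python
# Hedged sketch of d' and criterion c from hit/false-alarm counts (standard SDT formulas).
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores for rates of 0 or 1.
    hr = (hits + 0.5) / (hits + misses + 1.0)
    fa = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z_hr, z_fa = norm.ppf(hr), norm.ppf(fa)
    return z_hr - z_fa, -0.5 * (z_hr + z_fa)   # d', criterion c

d_prime, c = dprime_and_criterion(hits=38, misses=12, false_alarms=9, correct_rejections=41)
print(f"d' = {d_prime:.2f}, c = {c:.2f}")
```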
12. Moradi S, Wahlin A, Hällgren M, Rönnberg J, Lidestam B. The efficacy of short-term gated audiovisual speech training for improving auditory sentence identification in noise in elderly hearing aid users. Front Psychol 2017;8:368. PMID: 28348542; PMCID: PMC5346541; DOI: 10.3389/fpsyg.2017.00368.
Abstract
This study aimed to examine the efficacy and maintenance of short-term (one-session) gated audiovisual speech training for improving auditory sentence identification in noise in experienced elderly hearing aid users. Twenty-five hearing aid users (16 men and 9 women), with an average age of 70.8 years, were randomly divided into an experimental (audiovisual training, n = 14) and a control (auditory training, n = 11) group. Participants underwent gated speech identification tasks comprising Swedish consonants and words presented at 65 dB sound pressure level with a 0 dB signal-to-noise ratio (steady-state broadband noise), in audiovisual or auditory-only training conditions. The Hearing-in-Noise Test was employed to measure participants' auditory sentence identification in noise before the training (pre-test), promptly after training (post-test), and 1 month after training (one-month follow-up). The results showed that audiovisual training improved auditory sentence identification in noise promptly after the training (post-test vs. pre-test scores); furthermore, this improvement was maintained 1 month after the training (one-month follow-up vs. pre-test scores). Such improvement was not observed in the control group, either promptly after the training or at the one-month follow-up. However, neither a significant between-groups difference nor a group-by-session interaction was observed. Conclusion: Audiovisual training may be considered in the aural rehabilitation of hearing aid users to improve listening capabilities in noisy conditions. However, the lack of a significant between-groups effect (audiovisual vs. auditory) or of a group-by-session interaction calls for further research.
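The sketch below illustrates the within-group comparisons described above: Hearing-in-Noise Test scores at pre-test, post-test, and one-month follow-up for the audiovisual-training group, compared with paired tests. The file layout and column names are assumptions, and this simplification omits the between-groups analysis.

```python
# Hedged sketch of paired pre/post and pre/follow-up comparisons for one training group.
import pandas as pd
from scipy import stats

df = pd.read_csv("hint_scores.csv")          # hypothetical: one row per participant
av = df[df["group"] == "audiovisual"]

for later in ("post", "followup"):
    t, p = stats.ttest_rel(av["pre"], av[later])  # lower HINT threshold = better
    print(f"pre vs {later}: t = {t:.2f}, p = {p:.4f}")
```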
Affiliations
- Shahram Moradi: Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Anna Wahlin: Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Mathias Hällgren: Department of Otorhinolaryngology and Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Jerker Rönnberg: Linnaeus Centre HEAD, Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
- Björn Lidestam: Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
13. Yu L, Rao A, Zhang Y, Burton PC, Rishiq D, Abrams H. Neuromodulatory effects of auditory training and hearing aid use on audiovisual speech perception in elderly individuals. Front Aging Neurosci 2017;9:30. PMID: 28270763; PMCID: PMC5318380; DOI: 10.3389/fnagi.2017.00030.
Abstract
Although audiovisual (AV) training has been shown to improve overall speech perception in hearing-impaired listeners, there has been a lack of direct brain imaging data to help elucidate the neural networks and neural plasticity associated with hearing aid (HA) use and auditory training targeting speechreading. For this purpose, the current clinical case study reports functional magnetic resonance imaging (fMRI) data from two hearing-impaired patients who were first-time HA users. During the study period, both patients used HAs for 8 weeks; only one received a speechreading training program, ReadMyQuips™ (RMQ), during the second 4 weeks of the study period. Identical fMRI tests were administered at pre-fitting and at the end of the 8 weeks. Regions of interest (ROIs), including auditory cortex and visual cortex for unisensory processing and superior temporal sulcus (STS) for AV integration, were identified for each person through an independent functional localizer task. The results showed experience-dependent changes from pretest to posttest in both cases, involving the auditory cortex and STS ROIs and the functional connectivity between unisensory ROIs and STS. These data provide initial evidence for malleable, experience-driven cortical function for AV speech perception in elderly hearing-impaired people and call for further studies with a much larger sample and systematic controls to fill this knowledge gap and better understand the brain plasticity associated with auditory rehabilitation in the aging population.
Affiliations
- Luodi Yu: Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN, USA
- Aparna Rao: Department of Speech and Hearing Sciences, Arizona State University, Tempe, AZ, USA
- Yang Zhang: Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis, MN, USA
- Philip C. Burton: Office of the Associate Dean for Research, College of Liberal Arts, University of Minnesota, Minneapolis, MN, USA
- Dania Rishiq: Department of Speech Pathology and Audiology, University of South Alabama, Mobile, AL, USA
- Harvey Abrams: Department of Speech Pathology and Audiology, University of South Alabama, Mobile, AL, USA
14. Rönnberg J, Lunner T, Ng EHN, Lidestam B, Zekveld AA, Sörqvist P, Lyxell B, Träff U, Yumba W, Classon E, Hällgren M, Larsby B, Signoret C, Pichora-Fuller MK, Rudner M, Danielsson H, Stenfelt S. Hearing impairment, cognition and speech understanding: exploratory factor analyses of a comprehensive test battery for a group of hearing aid users, the n200 study. Int J Audiol 2016;55:623-642. PMID: 27589015; PMCID: PMC5044772; DOI: 10.1080/14992027.2016.1219775.
Abstract
OBJECTIVE The aims of the current n200 study were to assess the structural relations between three classes of test variables (i.e. HEARING, COGNITION and aided speech-in-noise OUTCOMES) and to describe the theoretical implications of these relations for the Ease of Language Understanding (ELU) model. STUDY SAMPLE Participants were 200 hard-of-hearing hearing-aid users, with a mean age of 60.8 years. Forty-three percent were female, and the mean hearing threshold in the better ear was 37.4 dB HL. DESIGN LEVEL 1 factor analyses extracted one factor per test and/or cognitive function based on a priori conceptualizations. The more abstract LEVEL 2 factor analyses were performed separately for the three classes of test variables. RESULTS The HEARING test variables resulted in two LEVEL 2 factors, which we labelled SENSITIVITY and TEMPORAL FINE STRUCTURE; the COGNITIVE variables resulted in one COGNITION factor only, and the OUTCOMES in two factors, NO CONTEXT and CONTEXT. COGNITION predicted the NO CONTEXT factor more strongly than the CONTEXT outcome factor. TEMPORAL FINE STRUCTURE and SENSITIVITY were associated with COGNITION, and all three contributed significantly and independently, especially to the NO CONTEXT outcome scores (R² = 0.40). CONCLUSIONS All LEVEL 2 factors are important theoretically as well as for clinical assessment.
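The sketch below illustrates a LEVEL 2-style exploratory factor analysis for the HEARING test variables described above (two factors, e.g. SENSITIVITY and TEMPORAL FINE STRUCTURE). The file and variable names and the use of scikit-learn's FactorAnalysis are illustrative assumptions; the study's actual extraction and rotation methods may differ.

```python
# Hedged sketch: two-factor exploratory factor analysis of standardized hearing-test scores.
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("n200_hearing_tests.csv")       # hypothetical: one column per hearing test
X = StandardScaler().fit_transform(df.values)

fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(X)

loadings = pd.DataFrame(fa.components_.T, index=df.columns,
                        columns=["factor_1", "factor_2"])
print(loadings.round(2))                          # inspect which tests load on which factor
```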
Affiliations
- Jerker Rönnberg: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Thomas Lunner: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden; Eriksholm Research Centre, Oticon A/S, Rørtangvej 20, 3070 Snekkersten, Denmark
- Elaine Hoi Ning Ng: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lidestam: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Adriana Agatha Zekveld: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Section Ear & Hearing, Department of Otolaryngology-Head and Neck Surgery and EMGO Institute, VU University Medical Center, Amsterdam, The Netherlands
- Patrik Sörqvist: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden
- Björn Lyxell: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Ulf Träff: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Wycliffe Yumba: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Elisabet Classon: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Mathias Hällgren: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Birgitta Larsby: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Carine Signoret: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- M. Kathleen Pichora-Fuller: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Psychology, University of Toronto, Toronto, Ontario, Canada; The Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada; The Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
- Mary Rudner: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Henrik Danielsson: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Stefan Stenfelt: Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden