1. Sitzman TJ, Baylis AL, Perry JL, Weidler EM, Temkit M, Ishman SL, Tse RW. Protocol for a Prospective Observational Study of Revision Palatoplasty Versus Pharyngoplasty for Treatment of Velopharyngeal Insufficiency Following Cleft Palate Repair. Cleft Palate Craniofac J 2024; 61:870-881. [PMID: 36562144] [PMCID: PMC10287832] [DOI: 10.1177/10556656221147159]
Abstract
OBJECTIVE To present the design and methodology for an actively enrolling comparative effectiveness study of revision palatoplasty versus pharyngoplasty for the treatment of velopharyngeal insufficiency (VPI). DESIGN Prospective observational multicenter study. SETTING Twelve hospitals across the United States and Canada. PARTICIPANTS Individuals who are 3-23 years of age with a history of repaired cleft palate and a diagnosis of VPI, with a total enrollment target of 528 participants. INTERVENTIONS Revision palatoplasty and pharyngoplasty (either pharyngeal flap or sphincter pharyngoplasty), as selected for each participant by their treatment team. MAIN OUTCOME MEASURE(S) The primary outcome is resolution of hypernasality, defined as the absence of consistent hypernasality as determined by blinded perceptual assessment of a standard speech sample recorded twelve months after surgery. The secondary outcome is incidence of new onset obstructive sleep apnea. Statistical analyses will use propensity score matching to control for demographics, medical history, preoperative severity of hypernasality, and preoperative imaging findings. RESULTS Study recruitment began February 2021. As of September 2022, 148 participants are enrolled, and 78 have undergone VPI surgery. Enrollment is projected to continue into 2025. Collection of postoperative evaluations should be completed by the end of 2026, with dissemination of results soon thereafter. CONCLUSIONS Patients with VPI following cleft palate repair are being actively enrolled at sites across the US and Canada into a prospective observational study evaluating surgical outcomes. This study will be the largest and most comprehensive study of VPI surgery outcomes to date.
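The propensity-score matching planned for the primary analysis can be sketched as follows. This is a minimal illustration on simulated data: the covariates, the logistic propensity model, and the greedy 1:1 nearest-neighbour matching rule are assumptions for the sketch, not the study's registered analysis plan.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated cohort (hypothetical): three covariates standing in for
# demographics, preoperative severity, and imaging findings
n = 200
X = rng.normal(size=(n, 3))
# Treatment choice (pharyngoplasty vs revision palatoplasty) depends on covariates
treated = (X @ np.array([0.8, -0.5, 0.3]) + rng.normal(size=n)) > 0

# Step 1: estimate propensity scores P(pharyngoplasty | covariates)
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1:1 nearest-neighbour matching on the propensity score
cases = np.flatnonzero(treated)
pool = list(np.flatnonzero(~treated))
pairs = []
for i in cases:
    if not pool:
        break  # no unmatched controls left
    j = min(pool, key=lambda c: abs(ps[c] - ps[i]))
    pairs.append((i, j))
    pool.remove(j)  # each control is used at most once

print(f"matched {len(pairs)} treated/control pairs")
```

Outcome comparisons (e.g., resolution of hypernasality) would then be run within the matched pairs; the actual study presumably uses a more refined matching specification than this greedy rule.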
Affiliation(s)
- Thomas J. Sitzman
- Division of Plastic Surgery, Phoenix Children’s Hospital, Phoenix, Arizona, USA
- Division of Plastic Surgery, Mayo Clinic Arizona, Scottsdale, Arizona, USA
- Adriane L. Baylis
- Department of Plastic and Reconstructive Surgery, Nationwide Children’s Hospital, Columbus, Ohio, USA
- Department of Plastic and Reconstructive Surgery and Department of Pediatrics, The Ohio State University College of Medicine, Columbus, Ohio, USA
- Department of Speech Language Hearing Sciences, The Ohio State University, Columbus, Ohio, USA
- Jamie L. Perry
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, USA
- Erica M. Weidler
- Division of Plastic Surgery, Phoenix Children’s Hospital, Phoenix, Arizona, USA
- M’hamed Temkit
- Department of Clinical Research, Phoenix Children’s Hospital, Phoenix, Arizona, USA
- Stacey L. Ishman
- Department of Otolaryngology-Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, USA
- Raymond W. Tse
- Division of Craniofacial and Plastic Surgery, Department of Surgery, Seattle Children’s Hospital, Seattle, Washington, USA
- Division of Plastic Surgery, Department of Surgery, University of Washington, Seattle, Washington, USA
2. Schauwecker N, Patro A, Holder JT, Bennett ML, Perkins E, Moberly AC. Cochlear Implant Qualification in Noise Versus Quiet: Do Patients Demonstrate Similar Postoperative Benefits? Otolaryngol Head Neck Surg 2024; 170:1411-1420. [PMID: 38353294] [DOI: 10.1002/ohn.677]
Abstract
OBJECTIVE To assess patient factors, audiometric performance, and patient-reported outcomes in cochlear implant (CI) patients who would not have qualified with in-quiet testing alone. STUDY DESIGN Retrospective chart review. SETTING Tertiary referral center. METHODS Adult CI recipients implanted between 2012 and 2022 were identified. Patients with preoperative AzBio Quiet > 60% in the implanted ear, requiring multitalker babble to qualify, comprised the in-noise qualifying (NQ) group. NQ postoperative performance was compared with the in-quiet qualifying (QQ) group using CNC, AzBio Quiet, and AzBio +5 dB signal-to-noise ratio. Speech, Spatial and Qualities of Hearing Scale (SSQ), Cochlear Implant Quality of Life scale (CIQOL-10), and daily device usage were also compared between the groups. RESULTS The QQ group (n = 771) and NQ group (n = 67) were similar in age and hearing loss duration. NQ had higher average preoperative and postoperative speech recognition scores. A larger proportion of QQ saw significant improvement in CNC and AzBio Quiet scores in the CI-only listening condition (eg, CI-only AzBio Quiet: 88% QQ vs 51% NQ, P < .001). Improvement in CI-only AzBio +5 dB and in all open set testing in the best-aided binaural listening condition was similar between groups (eg, Binaural AzBio Quiet 73% QQ vs 59% NQ, P = .345). Postoperative SSQ ratings, CIQOL scores, and device usage were also equivalent between both groups. CONCLUSION Patients who require in-noise testing to meet CI candidacy demonstrate similar improvements in best-aided speech perception and patient-reported outcomes as in-quiet qualifiers, supporting the use of in-noise testing to determine CI qualification for borderline CI candidates.
Affiliation(s)
- Natalie Schauwecker
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Ankita Patro
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Jourdan T Holder
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Marc L Bennett
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Elizabeth Perkins
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Aaron C Moberly
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
3. Moberly AC, Pisoni DB, Tamati TN. Audiovisual Processing Skills Before Cochlear Implantation Predict Postoperative Speech Recognition in Adults. Ear Hear 2024; 45:617-625. [PMID: 38143302] [PMCID: PMC11025067] [DOI: 10.1097/aud.0000000000001450]
Abstract
OBJECTIVES Adults with hearing loss (HL) demonstrate greater benefits of adding visual cues to auditory cues (i.e., "visual enhancement" [VE]) during recognition of speech presented in a combined audiovisual (AV) fashion when compared with normal-hearing peers. For patients with moderate-to-profound sensorineural HL who receive cochlear implants (CIs), it is unclear whether the restoration of audibility results in a decrease in the VE provided by visual cues during AV speech recognition. Moreover, it is unclear whether increased VE during the experience of HL before CI is beneficial or maladaptive to ultimate speech recognition abilities after implantation. It is conceivable that greater VE before implantation contributes to the enormous variability in speech recognition outcomes demonstrated among patients with CIs. This study took a longitudinal approach to test two hypotheses: (H1) Adult listeners with HL who receive CIs would demonstrate a decrease in VE after implantation; and (H2) The magnitude of pre-CI VE would predict post-CI auditory-only speech recognition abilities 6 months after implantation, with the direction of that relation supporting a beneficial, redundant, or maladaptive effect on outcomes. DESIGN Data were collected from 30 adults at two time points: immediately before CI surgery and 6 months after device activation. Pre-CI speech recognition performance was measured in auditory-only (A-only), visual-only, and combined AV fashion for City University of New York (CUNY) sentences. Scores of VE during AV sentence recognition were computed. At 6 months after CI activation, participants were again tested on CUNY sentence recognition in the same conditions as pre-CI. H1 was tested by comparing post- versus pre-CI VE scores. At 6 months of CI use, additional open-set speech recognition measures were also obtained in the A-only condition, including isolated words, words in meaningful AzBio sentences, and words in AzBio sentences in multitalker babble. 
To test H2, correlation analyses were performed to assess the relation between post-CI A-only speech recognition scores and pre-CI VE scores. RESULTS Inconsistent with H1, after CI, participants did not demonstrate a significant decrease in VE scores. Consistent with H2, preoperative VE scores positively predicted postoperative scores of A-only sentence recognition for both sentences in quiet and in babble (rho = 0.40 to 0.45, p < 0.05), supporting a beneficial effect of pre-CI VE on post-CI auditory outcomes. Pre-CI VE was not significantly related to post-CI isolated word recognition. The raw pre-CI CUNY AV scores also predicted post-CI A-only speech recognition scores to a similar degree as VE scores. CONCLUSIONS After implantation, CI users do not demonstrate a decrease in VE from before surgery. The degree of VE during AV speech recognition before CI positively predicts A-only sentence recognition outcomes after implantation, suggesting the potential value of AV testing of CI patients preoperatively to help predict and set expectations for postoperative outcomes.
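Visual enhancement is commonly quantified by normalising the audiovisual gain against the headroom remaining above the auditory-only score. The formulation below is a standard one from the audiovisual speech literature; the paper's exact scoring procedure may differ.

```python
def visual_enhancement(a_only: float, av: float) -> float:
    """VE = (AV - A) / (100 - A): audiovisual gain expressed as a
    fraction of the room for improvement above the auditory-only
    percent-correct score (inputs on a 0-100 scale)."""
    if a_only >= 100:
        return 0.0  # no headroom left to enhance
    return (av - a_only) / (100 - a_only)

# A listener scoring 40% auditory-only and 70% audiovisually has
# recovered half of the available headroom:
print(visual_enhancement(40, 70))
```

Because the denominator shrinks as auditory-only performance rises, this measure lets listeners with very different baseline scores be compared on a common scale.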
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- David B. Pisoni
- Department of Psychological and Brain Sciences, Indiana University, Bloomington, Indiana, USA
- Terrin N. Tamati
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
4. McGarrigle R, Knight S, Rakusen L, Mattys S. Mood shapes the impact of reward on perceived fatigue from listening. Q J Exp Psychol (Hove) 2024:17470218241242260. [PMID: 38485525] [DOI: 10.1177/17470218241242260]
Abstract
Knowledge of the underlying mechanisms of effortful listening could help to reduce cases of social withdrawal and mitigate fatigue, especially in older adults. However, the relationship between transient effort and longer term fatigue is likely to be more complex than originally thought. Here, we manipulated the presence/absence of monetary reward to examine the role of motivation and mood state in governing changes in perceived effort and fatigue from listening. In an online study, 185 participants were randomly assigned to either a "reward" (n = 91) or "no-reward" (n = 94) group and completed a dichotic listening task along with a series of questionnaires assessing changes over time in perceived effort, mood, and fatigue. Effort ratings were higher overall in the reward group, yet fatigue ratings in that group showed a shallower linear increase over time. Mediation analysis revealed an indirect effect of reward on fatigue ratings via perceived mood state; reward induced a more positive mood state which was associated with reduced fatigue. These results suggest that: (1) listening conditions rated as more "effortful" may be less fatiguing if the effort is deemed worthwhile, and (2) alterations to one's mood state represent a potential mechanism by which fatigue may be elicited during unrewarding listening situations.
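The mediation logic described above (reward → mood → fatigue) can be illustrated with a percentile-bootstrap estimate of the indirect effect a·b on simulated data. The variable names and effect sizes below are hypothetical, and the paper's actual model specification may well differ from this hand-rolled sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 185  # same sample size as the study; the data themselves are simulated

reward = rng.integers(0, 2, n).astype(float)  # X: reward vs no-reward group
mood = 0.5 * reward + rng.normal(size=n)      # M: reward lifts mood (a path)
fatigue = -0.6 * mood + rng.normal(size=n)    # Y: mood reduces fatigue (b path)

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # X -> M slope
    design = np.column_stack([np.ones_like(x), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # M -> Y slope, X held fixed
    return a * b

boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample participants with replacement
    boots.append(indirect_effect(reward[idx], mood[idx], fatigue[idx]))

lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect of reward on fatigue via mood, 95% CI: [{lo:.2f}, {hi:.2f}]")
```

A confidence interval excluding zero is the usual evidence for an indirect effect; formal analyses would typically use a dedicated mediation package rather than this minimal version.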
Affiliation(s)
- Sarah Knight
- Department of Psychology, University of York, York, UK
- Sven Mattys
- Department of Psychology, University of York, York, UK
5. Lamounier P, Carasek N, Daher VB, Costa CC, Ramos HVL, Martins SDC, Borges ALDF, Oliveira LAT, Bahmad Jr F. Cochlear Implants after Meningitis and Otosclerosis: A Comparison between Cochlear Ossification and Speech Perception Tests. J Pers Med 2024; 14:428. [PMID: 38673055] [PMCID: PMC11050886] [DOI: 10.3390/jpm14040428]
Abstract
(1) Background: Performance after Cochlear Implantation (CI) can vary depending on numerous factors. This study aims to investigate how meningitis or otosclerosis can influence CI performance. (2) Methods: Retrospective analysis of CI performance in patients with etiological diagnosis of meningitis or otosclerosis, comparing the etiologies and analyzing the image findings, along with electrode array insertion status and technique. (3) Results: Speech recognition in CI patients with otosclerosis improves faster than in patients with meningitis. Other features such as radiological findings, degree of cochlear ossification, surgical technique used and total or partial insertion of electrodes do not seem to be directly related to speech recognition test performance. (4) Conclusions: Patients should be warned that their postoperative results have a strong correlation with the disease that caused their hearing loss and that, in cases of meningitis, a longer duration of speech-language training may be necessary to reach satisfactory results.
Affiliation(s)
- Pauliana Lamounier
- Department of Otolaryngology, Center of Rehabilitation and Readaptation Dr Henrique Santillo (CRER), Goiania 74653-230, Brazil
- Natalia Carasek
- Department of Health Sciences, University of Brasilia, Brasilia 70910-900, Brazil
- Valeria Barcelos Daher
- Department of Otolaryngology, Center of Rehabilitation and Readaptation Dr Henrique Santillo (CRER), Goiania 74653-230, Brazil
- Claudiney Cândido Costa
- Department of Otolaryngology, Center of Rehabilitation and Readaptation Dr Henrique Santillo (CRER), Goiania 74653-230, Brazil
- Hugo Valter Lisboa Ramos
- Department of Otolaryngology, Center of Rehabilitation and Readaptation Dr Henrique Santillo (CRER), Goiania 74653-230, Brazil
- Sergio de Castro Martins
- Department of Otolaryngology, Center of Rehabilitation and Readaptation Dr Henrique Santillo (CRER), Goiania 74653-230, Brazil
- Otorhinolaryngology Department, Universidade Estadual de Goiás (UEG), Itumbiara 75536-100, Brazil
- Alda Linhares de Freitas Borges
- Department of Otolaryngology, Center of Rehabilitation and Readaptation Dr Henrique Santillo (CRER), Goiania 74653-230, Brazil
- Fayez Bahmad Jr
- Department of Health Sciences, University of Brasilia, Brasilia 70910-900, Brazil
6. Costa LD, Vaucher AVDA, Costa MJ. The word-with-noise test: test-retest reliability in normal-hearing adults. Codas 2024; 36:e20230093. [PMID: 38597550] [PMCID: PMC11042685] [DOI: 10.1590/2317-1782/20232023093pt]
Abstract
PURPOSE To investigate the reliability of the Word-with-Noise Test in a group of normal-hearing adults. METHODS Forty-five normal-hearing adult subjects participated in the research. The interval between the first and second assessment was 14 to 28 days; both assessments were performed at the same time of day and by the same evaluator. The test-retest comparison considered the overall result per ear, totaling 90 ears evaluated. Inferential analysis compared performance across the two assessments using the Wilcoxon test, together with calculation and interpretation of the intraclass correlation coefficient. RESULTS There was a statistically significant difference between the test and retest performances. The intraclass correlation coefficients obtained indicated good reliability (r=0.759; p<0.001) for the monosyllabic stimulus and moderate reliability (r=0.631; p<0.001) for the disyllabic stimulus. CONCLUSION The Word-with-Noise Test demonstrated satisfactory reliability for both the monosyllabic and disyllabic stimuli.
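Test-retest reliability of this kind is typically an intraclass correlation. One common choice for a fixed test-retest design is ICC(3,1) (two-way mixed effects, consistency, single measurement); the abstract does not state which model was used, so the sketch below is illustrative only and runs on simulated scores.

```python
import numpy as np

def icc_3_1(scores):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.
    `scores` is an (n_subjects, k_sessions) array."""
    n, k = scores.shape
    grand = scores.mean()
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((scores.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((scores.mean(axis=0) - grand) ** 2).sum()  # between sessions
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# Simulated test and retest scores for 90 ears (hypothetical data):
# a stable per-ear effect plus independent session noise
rng = np.random.default_rng(2)
ear_effect = rng.normal(0, 2, 90)
scores = np.column_stack([ear_effect + rng.normal(0, 1, 90) for _ in range(2)])
print(round(icc_3_1(scores), 3))
```

With a subject-effect variance of 4 against a noise variance of 1, the estimate should land near the theoretical value 4/(4+1) = 0.8, i.e. "good" reliability on the usual interpretive bands.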
7. Rance G, Tomlin D, Yiu EM, Zanin J. Remediation of Perceptual Deficits in Progressive Auditory Neuropathy: A Case Study. J Clin Med 2024; 13:2127. [PMID: 38610891] [PMCID: PMC11012630] [DOI: 10.3390/jcm13072127]
Abstract
BACKGROUND Auditory neuropathy (AN) is a hearing disorder that affects neural activity in the VIIIth cranial nerve and central auditory pathways. Progressive forms have been reported in a number of neurodegenerative diseases and may occur as a result of both the deafferentiation and desynchronisation of neuronal processes. The purpose of this study was to describe changes in auditory function over time in a patient with axonal neuropathy and to explore the effect of auditory intervention. METHODS We tracked auditory function in a child with progressive AN associated with Charcot-Marie-Tooth (Type 2C) disease, evaluating hearing levels, auditory-evoked potentials, and perceptual abilities over a 3-year period. Furthermore, we explored the effect of auditory intervention on everyday listening and neuroplastic development. RESULTS While sound detection thresholds remained constant throughout, both electrophysiologic and behavioural evidence suggested auditory neural degeneration over the course of the study. Auditory brainstem response amplitudes were reduced, and perception of auditory timing cues worsened over time. Functional hearing ability (speech perception in noise) also deteriorated through the first 1.5 years of study until the child was fitted with a "remote-microphone" listening device, which subsequently improved binaural processing and restored speech perception ability to normal levels. CONCLUSIONS Despite the deterioration of auditory neural function consistent with peripheral axonopathy, sustained experience with the remote-microphone listening system appeared to produce neuroplastic changes, which improved the patient's everyday listening ability-even when not wearing the device.
Affiliation(s)
- Gary Rance
- Department of Audiology and Speech Pathology, The University of Melbourne, Carlton, VIC 3053, Australia
- Dani Tomlin
- Department of Audiology and Speech Pathology, The University of Melbourne, Carlton, VIC 3053, Australia
- Eppie M. Yiu
- Department of Neurology, Royal Children’s Hospital, Parkville, VIC 3052, Australia
- Neurosciences Research, Murdoch Children’s Research Institute, Parkville, VIC 3052, Australia
- Department of Paediatrics, The University of Melbourne, Parkville, VIC 3052, Australia
- Julien Zanin
- Department of Audiology and Speech Pathology, The University of Melbourne, Carlton, VIC 3053, Australia
8. Patro A, Lindquist NR, Holder JT, Freeman MH, Gifford RH, Tawfik KO, O’Malley MR, Bennett ML, Haynes DS, Perkins EL. Improved Postoperative Speech Recognition and Processor Use With Early Cochlear Implant Activation. Otol Neurotol 2024; 45:386-391. [PMID: 38437818] [PMCID: PMC10939836] [DOI: 10.1097/mao.0000000000004150]
Abstract
OBJECTIVE To report speech recognition outcomes and processor use based on timing of cochlear implant (CI) activation. STUDY DESIGN Retrospective cohort. SETTING Tertiary referral center. PATIENTS A total of 604 adult CI recipients from October 2011 to March 2022, stratified by timing of CI activation (group 1: ≤10 d, n = 47; group 2: >10 d, n = 557). MAIN OUTCOME MEASURES Average daily processor use; Consonant-Nucleus-Consonant (CNC) and Arizona Biomedical (AzBio) in quiet at 1-, 3-, 6-, and 12-month visits; time to peak performance. RESULTS The groups did not differ in sex (p = 0.887), age at CI (p = 0.109), preoperative CNC (p = 0.070), or preoperative AzBio in quiet (p = 0.113). Group 1 had higher median daily processor use than group 2 at the 1-month visit (12.3 versus 10.7 h/d, p = 0.017), with no significant differences at 3, 6, and 12 months. The early activation group had superior median CNC performance at 3 months (56% versus 46%, p = 0.007) and 12 months (60% versus 52%, p = 0.044). Similarly, the early activation group had superior median AzBio in quiet performance at 3 months (72% versus 59%, p = 0.008) and 12 months (75% versus 68%, p = 0.049). Both groups were equivalent in time to peak performance for CNC and AzBio. Earlier CI activation was significantly correlated with higher average daily processor use at all follow-up intervals. CONCLUSION CI activation within 10 days of surgery is associated with increased early device usage and superior speech recognition at both early and late follow-up visits. Timing of activation and device usage are modifiable factors that can help optimize postoperative outcomes in the CI population.
Affiliation(s)
- Ankita Patro
- Department of Otolaryngology–Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Nathan R. Lindquist
- Department of Otolaryngology–Head and Neck Surgery, Baylor College of Medicine, Houston, Texas
- Jourdan T. Holder
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Michael H. Freeman
- Department of Otolaryngology–Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
- Kareem O. Tawfik
- Department of Otolaryngology–Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Matthew R. O’Malley
- Department of Otolaryngology–Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Marc L. Bennett
- Department of Otolaryngology–Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- David S. Haynes
- Department of Otolaryngology–Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
- Elizabeth L. Perkins
- Department of Otolaryngology–Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee
9. Rødvik AK, Torkildsen JVK, Wie OB, Tvete O, Skaug I, Silvola JT. Consonant and vowel confusions in well-performing adult cochlear implant users, measured with a nonsense syllable repetition test. Int J Audiol 2024; 63:260-268. [PMID: 36853200] [DOI: 10.1080/14992027.2023.2177893]
Abstract
OBJECTIVE The study's objective was to identify consonant and vowel confusions in cochlear implant (CI) users, using a nonsense syllable repetition test. DESIGN In this cross-sectional study, participants repeated recorded mono- and bisyllabic nonsense words and real-word monosyllables in an open-set design. STUDY SAMPLE Twenty-eight Norwegian-speaking, well-performing adult CI users (13 unilateral and 15 bilateral), using implants from Cochlear, Med-El and Advanced Bionics, and a reference group of 20 listeners with normal hearing participated. RESULTS For the CI users, consonants were confused more often than vowels (58% versus 71% correct). Voiced consonants were confused more often than unvoiced (54% versus 64% correct). Voiced stops were often repeated as unvoiced, whereas unvoiced stops were never repeated as voiced. The nasals were repeated correctly in one third of the cases and confused with other nasals in one third of the cases. The real-word monosyllable score was significantly higher than the nonsense syllable score (76% versus 63% correct). CONCLUSIONS The study revealed a general devoicing bias for the stops and a high confusion rate of nasals with other nasals, which suggests that the low-frequency coding in CIs is insufficient. Furthermore, the nonsense syllable test exposed more perception errors than the real word test.
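The error patterns this kind of test exposes (devoicing of stops, nasal-for-nasal substitutions) are tallied from (presented, repeated) consonant pairs. A toy version with made-up trial data, not the study's actual scoring software:

```python
from collections import Counter

# Hypothetical (presented, repeated) consonant pairs from a repetition test
trials = [("b", "p"), ("b", "b"), ("d", "t"), ("g", "g"),
          ("p", "p"), ("t", "t"), ("m", "n"), ("n", "n")]

confusions = Counter(trials)
correct = sum(k for (s, r), k in confusions.items() if s == r)
print(f"{correct}/{len(trials)} consonants correct")

# Devoicing bias: voiced stops repeated as their unvoiced counterparts
DEVOICED = {("b", "p"), ("d", "t"), ("g", "k")}
devoicing = sum(k for pair, k in confusions.items() if pair in DEVOICED)
print(f"devoicing errors: {devoicing}")
```

Collecting the full Counter rather than just an accuracy score is what makes asymmetries visible, e.g. that voiced stops drift to unvoiced but never the reverse.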
Affiliation(s)
- Arne K Rødvik
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Ona B Wie
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Ole Tvete
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Juha T Silvola
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Akershus University Hospital, Lørenskog, Norway
- Department of Clinical Medicine, University of Oslo, Oslo, Norway
10. Thompson NJ, Dillon MT, Nix EP, Overton AB, Selleck AM, Dedmon MM, Brown KD. Variables Affecting Cochlear Implant Performance After Loss of Residual Hearing. Laryngoscope 2024; 134:1868-1873. [PMID: 37767794] [DOI: 10.1002/lary.31066]
Abstract
OBJECTIVE Determine variables that influence post-activation performance for cochlear implant (CI) recipients who lost low-frequency acoustic hearing. METHODS A retrospective review evaluated CNC word recognition for adults with normal to moderately severe low-frequency hearing (preoperative unaided thresholds of ≤70 dB HL at 250 Hz) who were implanted between 2012 and 2021 at a tertiary academic center, lost functional acoustic hearing, and were fit with a CI-alone device. Performance scores were queried from the 1, 3, 6, 12, and 24-month post-activation visits. A linear mixed model evaluated the effects of age at implantation, array length (long vs. mid/short), and preoperative low-frequency hearing (normal to mild, moderate, and moderately severe) on speech recognition with a CI alone. RESULTS 113 patients met the inclusion criteria. There was a significant main effect of interval (p < 0.001), indicating improved word recognition post-activation despite loss of residual hearing. There were significant main effects of age (p = 0.029) and array length (p = 0.038), with no effect of preoperative low-frequency hearing (p = 0.171). There was a significant 2-way interaction between age and array length (p = 0.018), indicating that older adults with mid/short arrays performed more poorly than younger adults with long lateral wall arrays when functional acoustic hearing was lost. CONCLUSION CI recipients with preoperative functional low-frequency hearing experience a significant improvement in speech recognition with a CI alone as compared to preoperative performance, despite the loss of low-frequency hearing. Age and electrode array length may play a role in post-activation performance. These data have implications for the preoperative counseling and device selection for hearing preservation candidates. LEVEL OF EVIDENCE 4 Laryngoscope, 134:1868-1873, 2024.
Affiliation(s)
- Nicholas J Thompson
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Margaret T Dillon
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Evan P Nix
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Andrea B Overton
- Audiology Department, UNC Health, Chapel Hill, North Carolina, U.S.A
- A Morgan Selleck
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Matthew M Dedmon
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
- Kevin D Brown
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, U.S.A
11. Tune S, Obleser J. Neural attentional filters and behavioural outcome follow independent individual trajectories over the adult lifespan. eLife 2024; 12:RP92079. [PMID: 38470243] [DOI: 10.7554/elife.92079]
Abstract
Preserved communication abilities promote healthy ageing. To this end, the age-typical loss of sensory acuity might in part be compensated for by an individual's preserved attentional neural filtering. Is such a compensatory brain-behaviour link longitudinally stable? Can it predict individual change in listening behaviour? We here show that individual listening behaviour and neural filtering ability follow largely independent developmental trajectories, modelling electroencephalographic and behavioural data of N = 105 ageing individuals (39-82 y). First, despite the expected decline in hearing-threshold-derived sensory acuity, listening-task performance proved stable over 2 y. Second, neural filtering and behaviour were correlated only within each separate measurement timepoint (T1, T2). Longitudinally, however, our results raise caution on attention-guided neural filtering metrics as predictors of individual trajectories in listening behaviour: neither neural filtering at T1 nor its 2-year change could predict individual 2-year behavioural change, under a combination of modelling strategies.
Affiliation(s)
- Sarah Tune
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Center of Brain, Behavior, and Metabolism, University of Lübeck, Lübeck, Germany
- Department of Psychology, University of Lübeck, Lübeck, Germany
12
Cupples L, Ching TYC, Hou S. Speech, language, functional communication, psychosocial outcomes and QOL in school-age children with congenital unilateral hearing loss. Front Pediatr 2024; 12:1282952. [PMID: 38510079 PMCID: PMC10950935 DOI: 10.3389/fped.2024.1282952]
Abstract
Introduction Children with early-identified unilateral hearing loss (UHL) might be at risk for delays in early speech and language, functional communication, psychosocial skills, and quality of life (QOL). However, a paucity of relevant research prohibits strong conclusions. This study aimed to provide new evidence relevant to this issue. Methods Participants were 34 children, ages 9;0 to 12;7 (years;months), who were identified with UHL via newborn hearing screening. Nineteen children had been fitted with hearing devices, whereas 15 had not. Assessments included measures of speech perception and intelligibility; language and cognition; functional communication; psychosocial abilities; and QOL. Results and discussion As a group, the children scored significantly below the normative mean and more than one standard deviation below the typical range on speech perception in spatially separated noise, and significantly below the normative mean on written passage comprehension. Outcomes in other respects appeared typical. There was, however, considerable within-participant variation in the children's degree of hearing loss over time, raising the possibility that this pattern of results might change as children get older. The current study also revealed that participants with higher levels of nonverbal ability demonstrated better general language skills and better ability to comprehend written passages. By contrast, neither perception of speech in collocated noise nor fitting with a hearing device accounted for unique variance in outcome measures. Future research should, however, evaluate the fitting of hearing devices using random assignment of participants to groups in order to avoid any confounding influence of degree of hearing loss or children's past/current level of progress.
Affiliation(s)
- Linda Cupples
- Department of Linguistics, Centre for Language Sciences, Macquarie University, Sydney, NSW, Australia
- Teresa Y. C. Ching
- NextSense Institute, NextSense, Sydney, NSW, Australia
- Macquarie School of Education, Macquarie University, Sydney, NSW, Australia
- School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane, QLD, Australia
- Sanna Hou
- National Acoustic Laboratories, Hearing Australia, Sydney, NSW, Australia
13
Cattani G, Rhebergen KS, Smit AL. An audibility model of the headband trial with a bone conduction device in single-sided deaf subjects. Int J Audiol 2024:1-9. [PMID: 38432678 DOI: 10.1080/14992027.2023.2299927]
Abstract
OBJECTIVE Modelling the head-shadow effect compensation and speech recognition outcomes, we aimed to study the benefits of a bone conduction device (BCD) during the headband trial for single-sided deafened (SSD) subjects. DESIGN This study is based on a database of individual patient measurements, fitting parameters, and acoustic BCD properties retrospectively measured on a skull simulator or from existing literature. The sensation levels of the Bone-Conduction and Air-Conduction sound paths were compared, modelling three spatial conditions with speech in quiet. We calculated the phoneme score using the Speech Intelligibility Index for the three conditions in quiet and seven in noise. STUDY SAMPLE Eighty-five SSD adults fitted with BCD during headband trial. RESULTS According to our model, most subjects did not achieve a full head-shadow effect compensation with the signal at the BCD side and in front. The modelled speech recognition in the quiet conditions did not improve with the BCD on the headband. In noise, we found a slight improvement in some specific conditions and minimal worsening in others. CONCLUSIONS Based on an audibility model, this study challenges the fundamentals of a BCD headband trial in SSD subjects. Patients should be counselled regarding the potential outcome and alternative approaches.
Affiliation(s)
- Guido Cattani
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Koenraad S Rhebergen
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
- Adriana L Smit
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Utrecht, Utrecht, The Netherlands
- Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
14
Kato M, Baese-Berk MM. The Effects of Acoustic and Semantic Enhancements on Perception of Native and Non-Native Speech. Lang Speech 2024; 67:40-71. [PMID: 36967604 DOI: 10.1177/00238309231156615]
Abstract
Previous research has shown that native listeners benefit from clearly produced speech, as well as from predictable semantic context when these enhancements are delivered in native speech. However, it is unclear whether native listeners benefit from acoustic and semantic enhancements differently when listening to other varieties of speech, including non-native speech. The current study examines to what extent native English listeners benefit from acoustic and semantic cues present in native and non-native English speech. Native English listeners transcribed sentence-final words that were of different levels of semantic predictability, produced in plain- or clear-speaking styles by native English talkers and by native Mandarin talkers of higher- and lower-proficiency in English. The perception results demonstrated that listeners benefited from semantic cues in higher- and lower-proficiency talkers' speech (i.e., transcribed speech more accurately), but not from acoustic cues, even though higher-proficiency talkers did make substantial acoustic enhancements from plain to clear speech. The current results suggest that native listeners benefit more robustly from semantic cues than from acoustic cues when those cues are embedded in non-native speech.
Affiliation(s)
- Misaki Kato
- Department of Linguistics, University of Oregon, USA
15
Busch CE, Schaffalitzky de Muckadell C, Morris DJ. Revisiting the effect of text complexity on Continuous Discourse Tracking using synthetic speech: Old tricks with new dogs. Clin Linguist Phon 2024; 38:172-183. [PMID: 36820623 DOI: 10.1080/02699206.2023.2183104]
Abstract
Continuous Discourse Tracking (CDT) is a functional test of speech perceptual ability, which has been criticised on account of the procedural variation inherent in the method. This study sought to reduce this variation by using synthetic speech, which was subsequently vocoded to simulate listening with a cochlear implant. We also assessed the complexity of three text excerpts with auditory (n = 10) and written Cloze tests (n = 10). These same passages were used in an auditory-only CDT experiment (n = 12) performed with the synthetic-vocoded material. Mean tracking rates were lower, and the number of blockages was higher for the most difficult text as determined by the Cloze results. We also noted some anomalous realisations from the speech synthesis, but these were unlikely to have contributed to the differences in tracking rates that were observed for text complexity. These results show that Cloze testing is suitable to predict text complexity for CDT performed with synthesised speech. Furthermore, they indicate that the use of text-to-speech synthesis is viable and may be a useful addition to rehabilitation where functional measures are used to assess communication aptitude.
Affiliation(s)
- Caroline Esmann Busch
- Speech Pathology and Audiology, Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen, Denmark
- David Jackson Morris
- Speech Pathology and Audiology, Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen, Denmark
16
Mechtenberg H, Giorio C, Myers EB. Pupil Dilation Reflects Perceptual Priorities During a Receptive Speech Task. Ear Hear 2024; 45:425-440. [PMID: 37882091 PMCID: PMC10868674 DOI: 10.1097/aud.0000000000001438]
Abstract
OBJECTIVES The listening demand incurred by speech perception fluctuates in normal conversation. At the acoustic-phonetic level, natural variation in pronunciation acts as speedbumps to accurate lexical selection. Any given utterance may be more or less phonetically ambiguous-a problem that must be resolved by the listener to choose the correct word. This becomes especially apparent when considering two common speech registers-clear and casual-that have characteristically different levels of phonetic ambiguity. Clear speech prioritizes intelligibility through hyperarticulation which results in less ambiguity at the phonetic level, while casual speech tends to have a more collapsed acoustic space. We hypothesized that listeners would invest greater cognitive resources while listening to casual speech to resolve the increased amount of phonetic ambiguity, as compared with clear speech. To this end, we used pupillometry as an online measure of listening effort during perception of clear and casual continuous speech in two background conditions: quiet and noise. DESIGN Forty-eight participants performed a probe detection task while listening to spoken, nonsensical sentences (masked and unmasked) while recording pupil size. Pupil size was modeled using growth curve analysis to capture the dynamics of the pupil response as the sentence unfolded. RESULTS Pupil size during listening was sensitive to the presence of noise and speech register (clear/casual). Unsurprisingly, listeners had overall larger pupil dilations during speech perception in noise, replicating earlier work. The pupil dilation pattern for clear and casual sentences was considerably more complex. Pupil dilation during clear speech trials was slightly larger than for casual speech, across quiet and noisy backgrounds. CONCLUSIONS We suggest that listener motivation could explain the larger pupil dilations to clearly spoken speech. We propose that, bounded by the context of this task, listeners devoted more resources to perceiving the speech signal with the greatest acoustic/phonetic fidelity. Further, we unexpectedly found systematic differences in pupil dilation preceding the onset of the spoken sentences. Together, these data demonstrate that the pupillary system is not merely reactive but also adaptive-sensitive to both task structure and listener motivation to maximize accurate perception in a limited resource system.
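The growth curve analysis mentioned in this abstract (modelling the pupil response as it unfolds over the trial) can be illustrated with a minimal sketch on simulated data. This is not the authors' analysis code: the subject count, time bins, effect sizes, and variable names below are all illustrative assumptions; the model structure (orthogonalised time terms in a mixed model) follows common growth-curve practice.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate pupil traces: 20 subjects x 2 speech registers x 50 time bins.
# All magnitudes here are invented for illustration only.
rng = np.random.default_rng(1)
rows = []
for subj in range(20):
    base = rng.normal(0.0, 0.2)                       # subject-specific baseline
    for register in ("clear", "casual"):
        shift = 0.10 if register == "clear" else 0.0  # larger dilation for clear speech
        for t in np.linspace(0.0, 1.0, 50):           # normalised time within trial
            pupil = base + shift + 0.3 * np.sin(np.pi * t) + rng.normal(0.0, 0.05)
            rows.append((subj, register, t, pupil))
df = pd.DataFrame(rows, columns=["subject", "register", "time", "pupil"])

# Centred linear and quadratic time terms capture the rise-and-fall shape
# of the pupil response, as in growth curve analysis.
df["ot1"] = df["time"] - df["time"].mean()
df["ot2"] = df["ot1"] ** 2 - (df["ot1"] ** 2).mean()

# Mixed model: fixed effects of time shape, register, and their interaction,
# with a random intercept per subject (random time slopes are often added too).
fit = smf.mixedlm("pupil ~ (ot1 + ot2) * register", df,
                  groups=df["subject"]).fit()
print(fit.params[["register[T.clear]", "ot1", "ot2"]])
```

The `register[T.clear]` coefficient estimates the overall clear-versus-casual difference in dilation, while the `ot1:register` and `ot2:register` interactions test whether the shape of the response differs between registers.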
Affiliation(s)
- Hannah Mechtenberg
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
- Cristal Giorio
- Department of Psychology, Pennsylvania State University, State College, Pennsylvania, USA
- Emily B. Myers
- Department of Psychological Sciences, University of Connecticut, Storrs, Connecticut, USA
- Department of Speech, Language and Hearing Sciences, University of Connecticut, Storrs, Connecticut, USA
17
Miao Y, Rose H, Hosseini S. The Interaction Effect of Pronunciation and Lexicogrammar on Comprehensibility: A Case of Mandarin-Accented English. Lang Speech 2024; 67:3-18. [PMID: 36876584 DOI: 10.1177/00238309231156918]
Abstract
Scholars have argued that comprehensibility (i.e., ease of understanding), not nativelike performance, should be prioritized in second language learning, which inspired numerous studies to explore factors affecting comprehensibility. However, most of these studies did not consider potential interaction effects of these factors, resulting in a limited understanding of comprehensibility and less precise implications. This study investigates how pronunciation and lexicogrammar influence the comprehensibility of Mandarin-accented English. A total of 687 listeners were randomly allocated into six groups and rated (a) one baseline and (b) one of six experimental recordings for comprehensibility on a 9-point scale. The baseline recording, a 60-s spontaneous speech sample by an L1 English speaker with an American accent, was the same across groups. The six 75-s experimental recordings were the same in content but differed in (a) speakers' degree of foreign accent (American, moderate Mandarin, and heavy Mandarin) and (b) lexicogrammar (with errors vs. without errors). The study found that pronunciation and lexicogrammar interacted to influence comprehensibility. That is, whether pronunciation affected comprehensibility depended on speakers' lexicogrammar, and vice versa. The results have implications for theory-building to refine comprehensibility, as well as for pedagogy and testing priorities.
Affiliation(s)
- Sepideh Hosseini
- UC Berkeley Extension, USA; Peralta Community College District, USA; City College of San Francisco, USA
18
Deschamps ML, Sanderson P, Waxenegger H, Mohamed I, Loeb RG. Auditory Sequences Presented With Spearcons Support Better Multiple Patient Monitoring Than Single-Patient Alarms: A Preclinical Simulation. Hum Factors 2024; 66:872-890. [PMID: 35934986 DOI: 10.1177/00187208221116949]
Abstract
OBJECTIVE A study of auditory displays for simulated patient monitoring compared the effectiveness of two sound categories (alarm sounds indicating general risk categories from international alarm standard IEC 60601-1-8 versus event-specific sounds according to the type of nursing unit) and two configurations (single-patient alarms versus multi-patient sequences). BACKGROUND Fieldwork in speciality-focused high dependency units (HDU) indicated that auditory alarms are ambiguous and do not identify which patient has a problem. We tested whether participants perform better using auditory displays that identify the relevant patient and problem. METHOD During simulated patient monitoring of four patients in a respiratory HDU, 60 non-clinicians heard either (a) IEC risk categories as single-patient alarm sounds, (b) event-specific categories as single-patient alarm sounds, (c) IEC risk categories in multi-patient sequences or (d) event-specific categories in multi-patient sequences. Participants performed a perceptual-motor task while monitoring patients; after detecting abnormal events, they identified the patient and the event. RESULTS Participants hearing multi-patient sequences made fewer wrong patient identifications than participants hearing single-patient alarms. Advantages of event-specific categories emerged when IEC risk category sounds indicated more than one potential event. Even when IEC and event-specific sounds indicated the same unique event, spearcons (time-compressed speech sounds) supported better event identification than did auditory icon sounds. CONCLUSION Auditory displays that unambiguously convey which patient is having what problem dramatically improve monitoring performance in a preclinical HDU simulation. APPLICATION Time-compressed speech assists development of detailed risk categories needed in specific HDU contexts, and multi-patient sound sequences allow the wellbeing of multiple patients to be monitored.
Affiliation(s)
- Robert G Loeb
- The University of Queensland, Brisbane, Australia
- University of Florida, Gainesville, USA
19
Tolkacheva V, Brownsett SLE, McMahon KL, de Zubicaray GI. Perceiving and misperceiving speech: lexical and sublexical processing in the superior temporal lobes. Cereb Cortex 2024; 34:bhae087. [PMID: 38494418 PMCID: PMC10944697 DOI: 10.1093/cercor/bhae087]
Abstract
Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. The little girl was excited to lose her first tooth-Tha fittle girmn wam expited du roos har derst cooth). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.
Affiliation(s)
- Valeriya Tolkacheva
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
- Sonia L E Brownsett
- Queensland Aphasia Research Centre, School of Health and Rehabilitation Sciences, University of Queensland, Surgical Treatment and Rehabilitation Services, Herston, Queensland, 4006, Australia
- Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, Health Sciences Building 1, 1 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Katie L McMahon
- Herston Imaging Research Facility, Royal Brisbane & Women’s Hospital, Building 71/918, Herston, Queensland, 4006, Australia
- Queensland University of Technology, School of Clinical Sciences and Centre for Biomedical Technologies, 60 Musk Avenue, Kelvin Grove, Queensland, 4059, Australia
- Greig I de Zubicaray
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
20
Arora K, Plant K, Dawson P, Cowan R. Effect of reducing electrical stimulation rate on hearing performance of Nucleus® cochlear implant recipients. Int J Audiol 2024:1-10. [PMID: 38420783 DOI: 10.1080/14992027.2024.2314620]
Abstract
OBJECTIVE To evaluate whether a 500 pulses per second per channel (pps/ch) rate would provide non-inferior hearing performance compared to the 900 pps/ch rate in the Advanced Combination Encoder (ACE™) sound coding strategy. DESIGN A repeated measures single-subject design was employed, wherein each subject served as their own control. All except one subject used 900 pps/ch at enrolment. After three weeks of using the alternative rate program, both programs were loaded into the sound processor for two more weeks of take-home use. Subjective performance, preference, words in quiet, sentences in babble, music quality, and fundamental frequency (F0) discrimination were assessed using a balanced design. STUDY SAMPLE Data from 18 subjects were analysed, with complete datasets available for 17 subjects. RESULTS Non-inferior performance on all clinical measures was shown for the lower rate program. Subjects' preference ratings were comparable for the programs, with 53% reporting no difference overall. When a preference was expressed, the 900 pps/ch condition was preferred more often. CONCLUSION Reducing the stimulation rate from 900 pps/ch to 500 pps/ch did not compromise the hearing outcomes evaluated in this study. A lower pulse rate in future cochlear implants could reduce power consumption, allowing for smaller batteries and processors.
Affiliation(s)
- Komal Arora
- Cochlear™ Limited, Melbourne, Australia
- The HEARing CRC, Melbourne, Australia
- Kerrie Plant
- Cochlear™ Limited, Melbourne, Australia
- The HEARing CRC, Melbourne, Australia
- Pam Dawson
- Cochlear™ Limited, Melbourne, Australia
- The HEARing CRC, Melbourne, Australia
- Robert Cowan
- The HEARing CRC, Melbourne, Australia
- The University of Melbourne, Melbourne, Australia
21
Sanchez K, Neergaard KD, Dias JW. Editorial: Multisensory speech in perception and production. Front Hum Neurosci 2024; 18:1380061. [PMID: 38439940 PMCID: PMC10910343 DOI: 10.3389/fnhum.2024.1380061]
Affiliation(s)
- Kauyumari Sanchez
- Department of Psychology, Cal Poly Humboldt, Arcata, CA, United States
- Karl David Neergaard
- Institute for the Future of Education Europe, Tecnologico de Monterrey, Comillas, Spain
- James W. Dias
- Medical University of South Carolina, Charleston, SC, United States
22
He S, Skidmore J, Bruce IC, Oleson JJ, Yuan Y. Peripheral neural synchrony in post-lingually deafened adult cochlear implant users. medRxiv 2024:2023.07.07.23292369. [PMID: 37461681 PMCID: PMC10350140 DOI: 10.1101/2023.07.07.23292369]
Abstract
Objective This paper reports a noninvasive method for quantifying neural synchrony in the cochlear nerve (i.e., peripheral neural synchrony) in cochlear implant (CI) users, which allows for evaluating this physiological phenomenon in human CI users for the first time in the literature. In addition, this study assessed how peripheral neural synchrony was correlated with temporal resolution acuity and speech perception outcomes measured in quiet and in noise in post-lingually deafened adult CI users. It tested the hypothesis that peripheral neural synchrony was an important factor for temporal resolution acuity and speech perception outcomes in noise in post-lingually deafened adult CI users. Design Study participants included 24 post-lingually deafened adult CI users with a Cochlear™ Nucleus® device. Three study participants were implanted bilaterally, and each ear was tested separately. For each of the 27 implanted ears tested in this study, 400 sweeps of the electrically evoked compound action potential (eCAP) were measured at four electrode locations across the electrode array. Peripheral neural synchrony was quantified at each electrode location using the phase locking value (PLV), which is a measure of trial-by-trial phase coherence among eCAP sweeps/trials. Temporal resolution acuity was evaluated by measuring the within-channel gap detection threshold (GDT) using a three-alternative, forced-choice procedure in a subgroup of 20 participants (23 implanted ears). For each ear tested in these participants, GDTs were measured at two electrode locations with a large difference in PLVs. For 26 implanted ears tested in 23 participants, speech perception performance was evaluated using Consonant-Nucleus-Consonant (CNC) word lists presented in quiet and in noise at signal-to-noise ratios (SNRs) of +10 and +5 dB. Linear mixed-effects models were used to evaluate the effect of electrode location on the PLV and the effect of the PLV on GDT after controlling for the stimulation level effects. Pearson product-moment correlation tests were used to assess the correlations between PLVs, CNC word scores measured in different conditions, and the degree of noise effect on CNC word scores. Results There was a significant effect of electrode location on the PLV after controlling for the effect of stimulation level. There was a significant effect of the PLV on GDT after controlling for the effects of stimulation level, where higher PLVs (greater synchrony) led to lower GDTs (better temporal resolution acuity). PLVs were not significantly correlated with CNC word scores measured in any listening condition or the effect of competing background noise presented at an SNR of +10 dB on CNC word scores. In contrast, there was a significant negative correlation between the PLV and the degree of noise effect on CNC word scores for a competing background noise presented at an SNR of +5 dB, where higher PLVs (greater synchrony) correlated with smaller noise effects on CNC word scores. Conclusions This newly developed method can be used to assess peripheral neural synchrony in CI users, a physiological phenomenon that has not been systematically evaluated in electrical hearing. Poorer peripheral neural synchrony leads to lower temporal resolution acuity and is correlated with a larger detrimental effect of competing background noise presented at an SNR of +5 dB on speech perception performance in post-lingually deafened adult CI users.
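The phase locking value described in this abstract (trial-by-trial phase coherence among eCAP sweeps) can be sketched in a few lines. This is an illustrative implementation, not the authors' computation: the FFT-based phase estimate, the analysis band, and the sampling rate in the demo are all assumptions.

```python
import numpy as np

def phase_locking_value(sweeps, fs, band=(500.0, 2000.0)):
    """Trial-by-trial phase coherence among repeated sweeps.

    sweeps : (n_trials, n_samples) array of single-sweep recordings.
    Returns the PLV averaged over FFT bins inside `band` (Hz):
    1.0 = perfectly phase-locked trials, near 0 = random phases.
    """
    n_trials, n_samples = sweeps.shape
    spectra = np.fft.rfft(sweeps, axis=1)        # per-trial spectra
    unit = np.exp(1j * np.angle(spectra))        # unit-length phase vectors
    plv_per_bin = np.abs(unit.mean(axis=0))      # coherence across trials
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(plv_per_bin[mask].mean())

if __name__ == "__main__":
    fs = 20_000.0                                # assumed sampling rate (Hz)
    t = np.arange(400) / fs
    sweeps = np.tile(np.sin(2 * np.pi * 1000.0 * t), (400, 1))
    print(phase_locking_value(sweeps, fs))       # identical sweeps -> approx. 1.0
```

With 400 identical sweeps the per-bin phase vectors align and the PLV approaches 1; with independent noise sweeps the vectors cancel and the PLV falls toward zero, which is the sense in which the measure indexes neural synchrony.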
Affiliation(s)
- Shuman He
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212
- Department of Audiology, Nationwide Children’s Hospital, 700 Children’s Drive, Columbus, OH 43205
- Jeffrey Skidmore
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212
- Ian C. Bruce
- Department of Electrical & Computer Engineering, McMaster University, Hamilton, ON, L8S 4K1, Canada
- Jacob J. Oleson
- Department of Biostatistics, The University of Iowa, Iowa City, IA 52242
- Yi Yuan
- Department of Otolaryngology – Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212
23
Meyer L, Araiza-Illan G, Rachman L, Gaudrain E, Başkent D. Evaluating speech-in-speech perception via a humanoid robot. Front Neurosci 2024; 18:1293120. [PMID: 38406584 PMCID: PMC10884269 DOI: 10.3389/fnins.2024.1293120]
Abstract
Introduction Underlying mechanisms of speech perception masked by background speakers, a common daily listening condition, are often investigated using various and lengthy psychophysical tests. The presence of a social agent, such as an interactive humanoid NAO robot, may help maintain engagement and attention. However, such robots potentially have limited sound quality or processing speed. Methods As a first step toward the use of NAO in psychophysical testing of speech-in-speech perception, we compared normal-hearing young adults' performance when using the standard computer interface to that when using a NAO robot to introduce the test and present all corresponding stimuli. Target sentences were presented with colour and number keywords in the presence of competing masker speech at varying target-to-masker ratios. Sentences were produced by the same speaker, but voice differences between the target and masker were introduced using speech synthesis methods. To assess test performance, speech intelligibility and data collection duration were compared between the computer and NAO setups. Human-robot interaction was assessed using the Negative Attitude Toward Robot Scale (NARS) and quantification of behavioural cues (backchannels). Results Speech intelligibility results showed functional similarity between the computer and NAO setups. Data collection durations were longer when using NAO. NARS results showed participants had a relatively positive attitude toward "situations of interactions" with robots prior to the experiment, but otherwise showed neutral attitudes toward the "social influence" of and "emotions in interaction" with robots. The presence of more positive backchannels when using NAO suggests higher engagement with the robot in comparison to the computer. Discussion Overall, the study presents the potential of the NAO for presenting speech materials and collecting psychophysical measurements for speech-in-speech perception.
Affiliation(s)
- Luke Meyer
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- University Medical Center Groningen, W.J. Kolff Institute for Biomedical Engineering and Materials Science, University of Groningen, Groningen, Netherlands
- Gloria Araiza-Illan
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- University Medical Center Groningen, W.J. Kolff Institute for Biomedical Engineering and Materials Science, University of Groningen, Groningen, Netherlands
| | - Laura Rachman
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- University Medical Center Groningen, W.J. Kolff Institute for Biomedical Engineering and Materials Science, University of Groningen, Groningen, Netherlands
- Pento Audiology Centre, Zwolle, Netherlands
| | - Etienne Gaudrain
- Lyon Neuroscience Research Center, CNRS UMR 5292, INSERM UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
| | - Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- University Medical Center Groningen, W.J. Kolff Institute for Biomedical Engineering and Materials Science, University of Groningen, Groningen, Netherlands
| |
Collapse
|
24
Fitzgerald LP, DeDe G, Shen J. Effects of linguistic context and noise type on speech comprehension. Front Psychol 2024; 15:1345619. [PMID: 38375107 PMCID: PMC10875108 DOI: 10.3389/fpsyg.2024.1345619]
Abstract
Introduction Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing. Methods We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits. Results We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition. 
Discussion These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
Affiliation(s)
- Laura P. Fitzgerald
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Gayle DeDe
- Speech, Language, and Brain Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Jing Shen
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
25
Na E, Toupin-April K, Olds J, Chen J, Fitzpatrick EM. Benefits and risks related to cochlear implantation for children with residual hearing: a systematic review. Int J Audiol 2024; 63:75-86. [PMID: 36524877 DOI: 10.1080/14992027.2022.2155879]
Abstract
OBJECTIVE This study aimed to synthesise information concerning the potential benefits and risks related to cochlear implants (CIs) versus hearing aids (HAs) in children with residual hearing. DESIGN A systematic review of articles published from January 2003 to January 2019 was conducted. STUDY SAMPLE Our review included studies that compared the benefits and risks of CIs versus HAs in children (≤18 years old) with residual hearing. A total of 3265 citations were identified; 8 studies met inclusion criteria. RESULTS Children with CIs showed significantly better speech perception scores post-CI than pre-CI. There was limited evidence related to improvement in everyday auditory performance, and the results showed non-significant improvement in speech intelligibility. One study on social-emotional functioning suggested benefits from CIs. In four studies, 37.2% (16/43) of children showed loss of residual hearing and 14.0% (8/57) had discontinued or limited use of their device. CONCLUSIONS Children with CIs showed improvement in speech perception outcomes compared to those with HAs. However, given the limited number of studies and the limited information available to guide decision-making in other areas of development, further research on both the benefits and risks of CIs in this specific population will be important to facilitate decision-making.
Affiliation(s)
- Eunjung Na
- School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, Canada
- Children's Hospital of Eastern Ontario Research Institute, Ottawa, Canada
- Karine Toupin-April
- School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, Canada
- Children's Hospital of Eastern Ontario Research Institute, Ottawa, Canada
- Department of Pediatrics, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Janet Olds
- Children's Hospital of Eastern Ontario Research Institute, Ottawa, Canada
- Children's Hospital of Eastern Ontario, Ottawa, Canada
- Department of Otolaryngology - Head and Neck Surgery, Faculty of Medicine, University of Ottawa, Ottawa, Canada
- Jianyong Chen
- Department of Otorhinolaryngology Head and Neck Surgery, Xinhua Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
- Elizabeth M Fitzpatrick
- School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, Canada
- Children's Hospital of Eastern Ontario Research Institute, Ottawa, Canada
26
Harwood V, Garcia-Sierra A, Diaz R, Jelfs E, Baron A. Event Related Potentials to Native Speech Contrasts Predicts Word Reading Abilities in Early School-Aged Children. J Neurolinguistics 2024; 69:101161. [PMID: 37746630 PMCID: PMC10512698 DOI: 10.1016/j.jneuroling.2023.101161]
Abstract
Speech perception skills have been implicated in the development of phoneme-grapheme correspondence, yet the exact nature of the relationship between speech perception and word reading ability remains unknown. We investigate phonological sensitivity to native (English) and nonnative (Spanish) speech syllables within an auditory oddball paradigm using event related potentials (ERPs) collected from lateral temporal electrode sites in 33 monolingual English-speaking children aged 6-8 years. We further explore the relationship between ERPs and English word reading abilities for this group. Results revealed that language stimuli (English, Spanish), ERP condition (standard, deviant), and hemisphere (left, right) all influenced the lateral N1 component. ERPs recorded from deviant English stimuli were significantly more negative within the left hemisphere compared to all other recorded ERPs. Mean amplitude differences within the N1 in left lateral electrode sites recorded in response to English phoneme contrasts significantly predicted English word reading abilities within this sample. Results indicate that speech perception of native contrasts, as recorded in left temporal electrode sites for the N1 component, is linked to English word reading abilities in early school-aged children.
Affiliation(s)
- Vanessa Harwood
- University of Rhode Island, 25 W Independence Way, Kingston, RI 02881
- Raphael Diaz
- University of Rhode Island, 25 W Independence Way, Kingston, RI 02881
- Emily Jelfs
- University of Rhode Island, 25 W Independence Way, Kingston, RI 02881
- Alisa Baron
- University of Rhode Island, 25 W Independence Way, Kingston, RI 02881
27
Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. [PMID: 38151889 DOI: 10.1111/ejn.16221]
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints that both neural and speech dynamics share, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.
Affiliation(s)
- Benedikt Zoefel
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
- Anne Kösem
- Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
28
Luo J, Qin P, Bi Q, Wu K, Gong G. Individual variability in functional connectivity of human auditory cortex. Cereb Cortex 2024; 34:bhae007. [PMID: 38282455 DOI: 10.1093/cercor/bhae007]
Abstract
Individual variability in functional connectivity underlies individual differences in cognition and behaviors, yet its association with functional specialization in the auditory cortex remains elusive. Using resting-state functional magnetic resonance imaging data from the Human Connectome Project, this study was designed to investigate the spatial distribution of auditory cortex individual variability in its whole-brain functional network architecture. An inherent hierarchical axis of the variability was discerned, which radiates from the medial to lateral orientation, with the left auditory cortex demonstrating more pronounced variations than the right. This variability exhibited a significant correlation with the variations in structural and functional metrics in the auditory cortex. Four auditory cortex subregions, which were identified from a clustering analysis based on this variability, exhibited unique connectional fingerprints and cognitive maps, with certain subregions showing specificity to speech perception functional activation. Moreover, the lateralization of the connectional fingerprint exhibited a U-shaped trajectory across the subregions. These findings emphasize the role of individual variability in functional connectivity in understanding cortical functional organization, as well as in revealing its association with functional specialization from the activation, connectome, and cognition perspectives.
Affiliation(s)
- Junhao Luo
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Peipei Qin
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Qiuhui Bi
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
- Ke Wu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Gaolang Gong
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China
- Chinese Institute for Brain Research, Beijing 102206, China
29
Carolan PJ, Heinrich A, Munro KJ, Millman RE. Divergent effects of listening demands and evaluative threat on listening effort in online and laboratory settings. Front Psychol 2024; 15:1171873. [PMID: 38333064 PMCID: PMC10850315 DOI: 10.3389/fpsyg.2024.1171873]
Abstract
Objective Listening effort (LE) varies as a function of listening demands, motivation and resource availability, among other things. Motivation is posited to have a greater influence on listening effort under high, compared to low, listening demands. Methods To test this prediction, we manipulated the listening demands of a speech recognition task using tone vocoders to create moderate and high listening demand conditions. We manipulated motivation using evaluative threat, i.e., informing participants that they must reach a particular "score" for their results to be usable. Resource availability was assessed by means of working memory span and included as a fixed-effects predictor. Outcome measures were indices of LE, including reaction times (RTs), self-rated work and self-rated tiredness, in addition to task performance (correct response rates). Given the recent popularity of online studies, we also wanted to examine the effect of experimental context (online vs. laboratory) on the efficacy of manipulations of listening demands and motivation. We therefore carried out two highly similar experiments, one in the laboratory and one online, each with a separate group of 37 young adults. To make listening demands comparable between the two studies, vocoder settings had to differ. All results were analysed using linear mixed models. Results Under laboratory conditions, listening demands affected all outcomes, with significantly lower correct response rates, slower RTs and greater self-rated work under higher listening demands. In the online study, listening demands only affected RTs. In addition, motivation affected self-rated work. Resource availability was a significant predictor only for RTs in the online study. Discussion These results show that the influence of motivation and listening demands on LE depends on the type of outcome measure used and the experimental context; it may also depend on the exact vocoder settings. A controlled laboratory setting and/or particular vocoder settings may be necessary to observe all expected effects of listening demands and motivation.
Affiliation(s)
- Peter J. Carolan
- School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Antje Heinrich
- School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Kevin J. Munro
- School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Rebecca E. Millman
- School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
- Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
30
Orepic P, Truccolo W, Halgren E, Cash SS, Giraud AL, Proix T. Neural manifolds carry reactivation of phonetic representations during semantic processing. bioRxiv 2024:2023.10.30.564638. [PMID: 37961305 PMCID: PMC10634964 DOI: 10.1101/2023.10.30.564638]
Abstract
Traditional models of speech perception posit that neural activity encodes speech through a hierarchy of cognitive processes, from low-level representations of acoustic and phonetic features to high-level semantic encoding. Yet it remains unknown how neural representations are transformed across levels of the speech hierarchy. Here, we analyzed unique microelectrode array recordings of neuronal spiking activity from the human left anterior superior temporal gyrus, a brain region at the interface between phonetic and semantic speech processing, during a semantic categorization task and natural speech perception. We identified distinct neural manifolds for semantic and phonetic features, with a functional separation of the corresponding low-dimensional trajectories. Moreover, phonetic and semantic representations were encoded concurrently and reflected in power increases in the beta and low-gamma local field potentials, suggesting top-down predictive and bottom-up cumulative processes. Our results are the first to demonstrate mechanisms for hierarchical speech transformations that are specific to neuronal population dynamics.
Affiliation(s)
- Pavo Orepic
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Wilson Truccolo
- Department of Neuroscience, Brown University, Providence, Rhode Island, United States of America
- Carney Institute for Brain Science, Brown University, Providence, Rhode Island, United States of America
- Eric Halgren
- Department of Neuroscience & Radiology, University of California San Diego, La Jolla, California, United States of America
- Sydney S Cash
- Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, United States of America
- Anne-Lise Giraud
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Institut Pasteur, Université Paris Cité, Hearing Institute, Paris, France
- Timothée Proix
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
31
Chen Y, Wang S, Yang L, Liu Y, Fu X, Wang Y, Zhang X, Wang S. Features of the speech processing network in post- and prelingually deaf cochlear implant users. Cereb Cortex 2024; 34:bhad417. [PMID: 38163443 DOI: 10.1093/cercor/bhad417]
Abstract
The onset of hearing loss can lead to altered brain structure and function. However, hearing restoration may also result in distinct cortical reorganization. A differential pattern of functional remodeling has been observed between post- and prelingual cochlear implant users, but it remains unclear how these speech processing networks are reorganized after cochlear implantation. To explore the impact of language acquisition and hearing restoration on speech perception in cochlear implant users, we conducted assessments of brain activation, functional connectivity, and graph theory-based analysis using functional near-infrared spectroscopy. We examined the effects of speech-in-noise stimuli on three groups: postlingual cochlear implant users (n = 12), prelingual cochlear implant users (n = 10), and age-matched controls with typical hearing (HC; n = 22). The activation of auditory-related areas in cochlear implant users showed a lower response compared with the HC group. Wernicke's area and Broca's area demonstrated different network attributes in the speech processing networks of post- and prelingual cochlear implant users. In addition, cochlear implant users maintained a high efficiency of the speech processing network when processing speech information. Taken together, our results characterize the speech processing networks, in varying noise environments, of post- and prelingual cochlear implant users and provide new insights for theories of how implantation modes impact remodeling of the speech processing functional networks.
Affiliation(s)
- Younuo Chen
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Songjian Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Liu Yang
- School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing 100069, China
- Yi Liu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Xinxing Fu
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Yuan Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
- Xu Zhang
- School of Biomedical Engineering, Capital Medical University, No. 10, Xitoutiao, YouAnMen, Fengtai District, Beijing 100069, China
- Shuo Wang
- Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing 100005, China
32
Dorsi J, Lacey S, Sathian K. Multisensory and lexical information in speech perception. Front Hum Neurosci 2024; 17:1331129. [PMID: 38259332 PMCID: PMC10800662 DOI: 10.3389/fnhum.2023.1331129]
Abstract
Both multisensory and lexical information are known to influence the perception of speech. However, an open question remains: is either source more fundamental to perceiving speech? In this perspective, we review the literature and argue that multisensory information plays a more fundamental role in speech perception than lexical information. Three sets of findings support this conclusion: first, reaction times and electroencephalographic signal latencies indicate that the effects of multisensory information on speech processing seem to occur earlier than the effects of lexical information. Second, non-auditory sensory input influences the perception of features that differentiate phonetic categories; thus, multisensory information determines what lexical information is ultimately processed. Finally, there is evidence that multisensory information helps form some lexical information as part of a phenomenon known as sound symbolism. These findings support a framework of speech perception that, while acknowledging the influential roles of both multisensory and lexical information, holds that multisensory information is more fundamental to the process.
Affiliation(s)
- Josh Dorsi
- Department of Neurology, Penn State College of Medicine, Hershey, PA, United States
- Simon Lacey
- Department of Neurology, Penn State College of Medicine, Hershey, PA, United States
- Department of Neural and Behavioral Sciences, Penn State College of Medicine, Hershey, PA, United States
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA, United States
- K. Sathian
- Department of Neurology, Penn State College of Medicine, Hershey, PA, United States
- Department of Neural and Behavioral Sciences, Penn State College of Medicine, Hershey, PA, United States
- Department of Psychology, Penn State Colleges of Medicine and Liberal Arts, Hershey, PA, United States
33
Richard C, Young NM. Editorial: Collection on cochlear implantation and speech perception. Front Hum Neurosci 2024; 17:1344875. [PMID: 38239303 PMCID: PMC10794295 DOI: 10.3389/fnhum.2023.1344875]
Affiliation(s)
- Celine Richard
- Department of Otolaryngology-Head and Neck Surgery, University of Tennessee Health Science Center, Memphis, TN, United States
- Division of Otolaryngology-Head and Neck Surgery, Lebonheur Children's Hospital, Memphis, TN, United States
- Division of Otolaryngology-Head and Neck Surgery, St. Jude Children's Research Hospital, Memphis, TN, United States
- Nancy M. Young
- Department of Otolaryngology-Head and Neck Surgery, Feinberg School of Medicine, Northwestern University, Chicago, IL, United States
- Division of Otolaryngology-Head and Neck Surgery, Ann and Robert H. Lurie Children's Hospital of Chicago, Chicago, IL, United States
34
Araiza-Illan G, Meyer L, Truong KP, Başkent D. Automated Speech Audiometry: Can It Work Using Open-Source Pre-Trained Kaldi-NL Automatic Speech Recognition? Trends Hear 2024; 28:23312165241229057. [PMID: 38483979 PMCID: PMC10943752 DOI: 10.1177/23312165241229057]
Abstract
A practical speech audiometry tool is the digits-in-noise (DIN) test for hearing screening of populations of varying ages and hearing status. The test is usually conducted by a human supervisor (e.g., clinician), who scores the responses spoken by the listener, or online, where software scores the responses entered by the listener. The test presents 24 digit triplets in an adaptive staircase procedure, resulting in a speech reception threshold (SRT). We propose an alternative automated DIN test setup that can evaluate spoken responses without a human supervisor, using the open-source automatic speech recognition toolkit Kaldi-NL. Thirty self-reported normal-hearing Dutch adults (19-64 years) each completed one DIN + Kaldi-NL test. Their spoken responses were recorded and used to evaluate the transcript of decoded responses produced by Kaldi-NL. Study 1 evaluated Kaldi-NL performance through its word error rate (WER): the percentage of summed decoding errors for digits in the transcript relative to the total number of digits present in the spoken responses. Average WER across participants was 5.0% (range 0-48%, SD = 8.8%), with an average of three triplets with decoding errors per participant. Study 2 analyzed the effect that triplets with decoding errors from Kaldi-NL had on the DIN test output (SRT), using bootstrapping simulations. Previous research indicated 0.70 dB as the typical within-subject SRT variability for normal-hearing adults. Study 2 showed that up to four triplets with decoding errors produce SRT variations within this range, suggesting that our proposed setup could be feasible for clinical applications.
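The WER definition in this abstract (decoding errors for digits, summed and divided by the total number of digits spoken) can be sketched with a standard edit-distance alignment over digit tokens. A minimal illustration with made-up responses, not the authors' scoring code:

```python
def digit_error_rate(spoken, decoded):
    """Percentage of digit decoding errors: substitutions, deletions and
    insertions found by Levenshtein alignment of decoded against spoken
    digit tokens, relative to the number of digits actually spoken.
    Assumes at least one spoken digit.
    """
    n, m = len(spoken), len(decoded)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                           # all spoken digits deleted
    for j in range(m + 1):
        d[0][j] = j                           # all decoded digits inserted
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if spoken[i - 1] == decoded[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # match or substitution
    return 100.0 * d[n][m] / n

# Two triplets of one listener's responses (hypothetical data):
spoken = ["5", "2", "8", "1", "9", "4"]
decoded = ["5", "3", "8", "1", "9", "4"]  # one substituted digit
rate = digit_error_rate(spoken, decoded)
```

With one substitution in six spoken digits, `rate` is about 16.7%.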
Affiliation(s)
- Gloria Araiza-Illan
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Luke Meyer
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
| | - Khiet P. Truong
- Human Media Interaction, University of Twente, Enschede, The Netherlands
| | - Deniz Başkent
- Department of Otorhinolaryngology, Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
35
Fereczkowski M, Sanchez-Lopez RH, Christiansen S, Neher T. Amplitude Compression for Preventing Rollover at Above-Conversational Speech Levels. Trends Hear 2024; 28:23312165231224597. [PMID: 38179670 PMCID: PMC10771052 DOI: 10.1177/23312165231224597]
Abstract
Hearing aids provide nonlinear amplification to improve speech audibility and loudness perception. While greater audibility typically increases speech intelligibility at low levels, the same is not true for above-conversational levels, where decreases in intelligibility ("rollover") can occur. In a previous study, we found rollover in speech intelligibility measurements made in quiet for 35 out of 74 test ears with a hearing loss. Furthermore, we found rollover occurrence in quiet to be associated with poorer speech intelligibility in noise as measured with linear amplification. Here, we retested 16 participants with rollover using three amplitude-compression settings. Two were designed to prevent rollover by applying slow- or fast-acting compression with a 5:1 compression ratio around the "sweet spot," that is, the region of an individual performance-intensity function with high intelligibility and listening comfort. The third, reference setting used gains and compression ratios prescribed by the "National Acoustic Laboratories Non-Linear 1" rule. Speech intelligibility was assessed in quiet and in noise, and pairwise preference judgments were also collected. For speech levels of 70 dB SPL and above, slow-acting sweet-spot compression gave better intelligibility in quiet and in noise than the reference setting, and the participants clearly preferred it over the other settings. At lower levels, the three settings gave comparable speech intelligibility, and the participants preferred the reference setting over both sweet-spot settings. Overall, these results suggest that, for listeners with rollover, slow-acting sweet-spot compression is beneficial at 70 dB SPL and above, while at lower levels clinically established gain targets are better suited.
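The effect of a 5:1 compression ratio can be pictured with a static input/output rule: below a kneepoint the system is linear, and above it each 1 dB of input raises output by only 1/ratio dB, limiting level growth at above-conversational levels. A sketch with an illustrative kneepoint (the study's actual fitting parameters are not given here):

```python
def compressed_output_level(input_db: float, knee_db: float = 70.0,
                            ratio: float = 5.0) -> float:
    """Static input/output rule for amplitude compression.

    Linear below the kneepoint; above it, output grows at 1/ratio dB
    per dB of input. knee_db and ratio are illustrative values only.
    """
    if input_db <= knee_db:
        return input_db
    return knee_db + (input_db - knee_db) / ratio
```

With these illustrative values, an 80 dB input maps to 72 dB output (70 + 10/5), so a 10 dB input increase produces only a 2 dB output increase.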
Affiliation(s)
- Michal Fereczkowski
- Institute of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Stine Christiansen
- Institute of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
- Tobias Neher
- Institute of Clinical Research, Faculty of Health Sciences, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
36
Lelic D, Nielsen LLA, Pedersen AK, Neher T. Focusing on Positive Listening Experiences Improves Speech Intelligibility in Experienced Hearing Aid Users. Trends Hear 2024; 28:23312165241246616. [PMID: 38656770 PMCID: PMC11044800 DOI: 10.1177/23312165241246616]
Abstract
Negativity bias is a cognitive bias that results in negative events being perceptually more salient than positive ones. For hearing care, this means that hearing aid benefits can potentially be overshadowed by adverse experiences. Research has shown that sustaining focus on positive experiences has the potential to mitigate negativity bias. The purpose of the current study was to investigate whether a positive focus (PF) intervention can improve speech-in-noise abilities for experienced hearing aid users. Thirty participants were randomly allocated to a control or PF group (N = 2 × 15). Prior to hearing aid fitting, all participants filled out the short form of the Speech, Spatial and Qualities of Hearing scale (SSQ12) based on their own hearing aids. At the first visit, they were fitted with study hearing aids, and speech-in-noise testing was performed. Both groups then wore the study hearing aids for two weeks and sent daily text messages reporting hours of hearing aid use to an experimenter. In addition, the PF group was instructed to focus on positive listening experiences and to also report them in the daily text messages. After the 2-week trial, all participants filled out the SSQ12 questionnaire based on the study hearing aids and completed the speech-in-noise testing again. Speech-in-noise performance and SSQ12 Qualities score were improved for the PF group but not for the control group. This finding indicates that the PF intervention can improve subjective and objective hearing aid benefits.
Affiliation(s)
- Tobias Neher
- Department of Clinical Research, University of Southern Denmark, Odense, Denmark
- Research Unit for ORL – Head & Neck Surgery and Audiology, Odense University Hospital & University of Southern Denmark, Odense, Denmark
37
Guerra G, Tierney A, Tijms J, Vaessen A, Bonte M, Dick F. Attentional modulation of neural sound tracking in children with and without dyslexia. Dev Sci 2024; 27:e13420. [PMID: 37350014 DOI: 10.1111/desc.13420]
Abstract
Auditory selective attention forms an important foundation of children's learning by enabling the prioritisation and encoding of relevant stimuli. It may also influence reading development, which relies on metalinguistic skills including awareness of the sound structure of spoken language. Reports of attentional impairments and of speech perception difficulties in noisy environments in dyslexic readers likewise suggest a contribution of auditory attention to reading development. To date, it is unclear whether non-speech selective attention and its underlying neural mechanisms are impaired in children with dyslexia, and to what extent such deficits relate to individual reading and speech perception abilities in suboptimal listening conditions. In this EEG study, we assessed non-speech sustained auditory selective attention in 106 7-to-12-year-old children with and without dyslexia. Children attended to one of two tone streams, detecting occasional sequence repeats in the attended stream, and also performed a speech-in-speech perception task. Results show that when children directed their attention to one stream, inter-trial phase coherence at the attended rate increased over fronto-central sites; this, in turn, was associated with better target detection. Behavioural and neural indices of attention did not systematically differ as a function of dyslexia diagnosis. However, behavioural indices of attention did explain individual differences in reading fluency and speech-in-speech perception, both of which were impaired in dyslexic readers. Taken together, our results show that children with dyslexia do not show group-level auditory attention deficits, but that individual attentional difficulties may represent a risk for developing reading impairments and problems with speech perception in complex acoustic environments.
RESEARCH HIGHLIGHTS:
- Non-speech sustained auditory selective attention modulates EEG phase coherence in children with/without dyslexia.
- Children with dyslexia show difficulties in speech-in-speech perception.
- Attention relates to dyslexic readers' speech-in-speech perception and reading skills.
- Dyslexia diagnosis is not linked to behavioural/EEG indices of auditory attention.
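The neural index used in this study, inter-trial phase coherence, is the magnitude of the mean unit phase vector across trials; a generic sketch (array shapes are assumptions, not taken from the paper):

```python
import numpy as np

def inter_trial_phase_coherence(phases: np.ndarray) -> np.ndarray:
    """ITPC across trials: magnitude of the mean unit phase vector.

    phases: shape (n_trials, n_frequencies), phase angles in radians
    (e.g., from an FFT or wavelet transform at the attended rate).
    Returns values in [0, 1]; 1 = perfectly consistent phase across trials,
    values near 0 = random phase.
    """
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```

Identical phases across trials give an ITPC of 1, while phases of 0 and π on alternating trials cancel to 0.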
Affiliation(s)
- Giada Guerra
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
- Maastricht Brain Imaging Center and Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Adam Tierney
- Centre for Brain and Cognitive Development, Birkbeck College, University of London, London, UK
- Jurgen Tijms
- RID, Amsterdam, Netherlands
- Rudolf Berlin Center, Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Milene Bonte
- Maastricht Brain Imaging Center and Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Frederic Dick
- Division of Psychology & Language Sciences, UCL, London, UK
38
Hunter CR, Abrahamyan H. Sensitivity, reliability and convergent validity of sequential dual-task measures of listening effort. Int J Audiol 2024; 63:30-39. [PMID: 36427054 DOI: 10.1080/14992027.2022.2145513]
Abstract
OBJECTIVE The aim of the current study was to assess the sensitivity, reliability and convergent validity of objective measures of listening effort collected in a sequential dual-task. DESIGN On each trial, participants viewed a set of digits and listened to a spoken sentence presented at one of a range of signal-to-noise ratios (SNR) and then typed the sentence-final word and recalled the digits. Listening effort measures included word response time, digit recall accuracy and digit response time. In Experiment 1, SNR on each trial was randomised. In Experiment 2, SNR varied in a blocked design, and in each block self-reported listening effort was also collected. STUDY SAMPLES Separate groups of 40 young adults participated in each experiment. RESULTS Effects of SNR were observed for all measures. Linear effects of SNR were generally observed even with word recognition accuracy factored out of the models. Among the objective measures, reliability was excellent, and repeated-measures correlations, though not between-subjects correlations, were nearly all significant. CONCLUSION The objective measures assessed appear to be sensitive and reliable indices of listening effort that are non-redundant with speech intelligibility and have strong within-participants convergent validity. Results support use of these measures in future studies of listening effort.
Affiliation(s)
- Cynthia R Hunter
- Speech Perception, Cognition, and Hearing Laboratory, Department of Speech-Language-Hearing: Sciences and Disorders, University of Kansas, Lawrence, KS, USA
- Hayk Abrahamyan
- Language Perception Laboratory, Department of Psychology, State University of New York at Buffalo, Buffalo, NY, USA
39
Blockmans L, Kievit R, Wouters J, Ghesquière P, Vandermosten M. Dynamics of cognitive predictors during reading acquisition in a sample of children overrepresented for dyslexia risk. Dev Sci 2024; 27:e13412. [PMID: 37219071 DOI: 10.1111/desc.13412]
Abstract
Literacy acquisition is a complex process, with genetic and environmental factors influencing the cognitive and neural processes associated with reading. Previous research identified factors that predict word reading fluency (WRF), including phonological awareness (PA), rapid automatized naming (RAN), and speech-in-noise perception (SPIN). Recent theoretical accounts suggest dynamic interactions between these factors and reading, but direct investigations of such dynamics are lacking. Here, we investigated the dynamic effect of phonological processing and speech perception on WRF. More specifically, we evaluated the influence of PA, RAN, and SPIN measured in kindergarten (the year prior to formal reading instruction), first grade (the first year of formal reading instruction), and second grade on WRF in second and third grade. We also assessed the effect of an indirect proxy of family risk for reading difficulties using a parental questionnaire (the Adult Reading History Questionnaire, ARHQ). We applied path modeling in a longitudinal sample of 162 Dutch-speaking children, the majority of whom were selected to have an increased family and/or cognitive risk for dyslexia. We showed that parental ARHQ had a significant effect on WRF, RAN, and SPIN, but unexpectedly not on PA. We also found direct effects of RAN and PA on WRF that were limited to first and second grade, respectively, in contrast to previous research reporting pre-reading PA effects and prolonged RAN effects throughout reading acquisition. Our study provides important new insights into early prediction of later word reading abilities and into the optimal time window for targeting a specific reading-related subskill during intervention.
Affiliation(s)
- Lauren Blockmans
- Research Group ExpORL, Department of Neuroscience, KU Leuven, Leuven, Belgium
- Rogier Kievit
- Donders Institute for Brain, Cognition and Behavior, Radboud University, Nijmegen, The Netherlands
- Jan Wouters
- Research Group ExpORL, Department of Neuroscience, KU Leuven, Leuven, Belgium
- Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven, Leuven, Belgium
- Maaike Vandermosten
- Research Group ExpORL, Department of Neuroscience, KU Leuven, Leuven, Belgium
40
Noble AR, Halverson DM, Resnick J, Broncheau M, Rubinstein JT, Horn DL. Spectral Resolution and Speech Perception in Cochlear Implanted School-Aged Children. Otolaryngol Head Neck Surg 2024; 170:230-238. [PMID: 37365946 PMCID: PMC10836047 DOI: 10.1002/ohn.408]
Abstract
OBJECTIVE Cochlear implantation of prelingually deaf infants provides auditory input sufficient to develop spoken language; however, outcomes remain variable. Inability to participate in speech perception testing limits the assessment of device efficacy in young listeners. In postlingually implanted adults (aCI), speech perception correlates with spectral resolution, an ability that relies independently on frequency resolution (FR) and spectral modulation sensitivity (SMS). The correlation of spectral resolution with speech perception is unknown in prelingually implanted children (cCI). In this study, FR and SMS were measured using a spectral ripple discrimination (SRD) task and were correlated with vowel and consonant identification. It was hypothesized that prelingually deaf cCI would show immature SMS relative to postlingually deaf aCI and that FR would correlate with speech identification. STUDY DESIGN Cross-sectional study. SETTING In-person booth testing. METHODS SRD was used to determine the highest spectral ripple density perceived at various modulation depths. FR and SMS were derived from spectral modulation transfer functions. Vowel and consonant identification was measured, and SRD performance and speech identification were analyzed for correlation. RESULTS Fifteen prelingually implanted cCI and 13 postlingually implanted aCI were included. FR and SMS were similar between cCI and aCI. Better FR was associated with better speech identification for most measures. CONCLUSION Prelingually implanted cCI demonstrated adult-like FR and SMS, and FR correlated with speech identification. FR may serve as a measure of CI efficacy in young listeners.
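A spectral ripple stimulus of the kind used in SRD tasks can be sketched as a dense tone complex whose log-amplitude varies sinusoidally across log-frequency; the parameters below are generic illustrations, not the study's exact stimulus definition:

```python
import numpy as np

def spectral_ripple_noise(fs=44100, dur=0.5, ripple_density=2.0,
                          depth_db=20.0, phase=0.0, f_lo=100.0, f_hi=5000.0,
                          n_tones=400, rng=0):
    """Rippled-spectrum stimulus: a tone complex with sinusoidal
    log-amplitude modulation across log-frequency.

    ripple_density: ripples per octave (the quantity varied in SRD).
    depth_db: peak-to-trough spectral modulation depth.
    All parameter values are illustrative assumptions.
    """
    rng = np.random.default_rng(rng)
    t = np.arange(int(fs * dur)) / fs
    freqs = np.geomspace(f_lo, f_hi, n_tones)
    octaves = np.log2(freqs / f_lo)
    # sinusoidal spectral envelope in dB across log-frequency
    amp_db = (depth_db / 2.0) * np.sin(2 * np.pi * ripple_density * octaves + phase)
    amps = 10 ** (amp_db / 20.0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)
    sig = np.sum(amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                        + phases[:, None]), axis=0)
    return sig / np.max(np.abs(sig))  # peak-normalize
```

In a discrimination trial, the listener distinguishes such a stimulus from one with the ripple phase shifted (e.g., inverted), and the highest discriminable density estimates spectral resolution.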
Affiliation(s)
- Anisha R. Noble
- Division of Pediatric Otolaryngology – Head and Neck Surgery, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Destinee M. Halverson
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Jesse Resnick
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Mariette Broncheau
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Jay T. Rubinstein
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- David L. Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
41
Sipari S, Iso-Mustajärvi M, Linder P, Dietz A. Insertion Results and Hearing Outcomes of a Slim Lateral Wall Electrode. J Int Adv Otol 2024; 20:1-7. [PMID: 38454281 PMCID: PMC10895868 DOI: 10.5152/iao.2024.22962]
Abstract
BACKGROUND The clinical outcomes of cochlear implantation vary for several reasons, making it necessary to study different electrodes and variables for further development. The aim of this study is to report the clinical outcomes of a new slim lateral wall electrode (SlimJ). METHODS Data from 25 cochlear implantations with the SlimJ electrode in 23 patients were retrospectively collected. The insertion results were assessed by image fusion of the preoperative computed tomography (CT), magnetic resonance imaging (MRI), and postoperative cone-beam CT. The hearing outcomes were evaluated by the improvement in speech recognition in noise, measured preoperatively and at follow-up. Postoperative pure-tone thresholds were obtained in cases with preoperative functional low-frequency hearing [PTA (0.125-0.5 kHz) ≤ 80 dB HL]. RESULTS The preoperative mean speech reception threshold (SRT) was +0.6 dB signal-to-noise ratio (SNR) (SD ± 4.2 dB) and the postoperative mean SRT was -3.5 dB SNR (SD ± 2.3 dB). The improvement between the preoperative and postoperative SRT levels ranged from 0.0 to 15.1 dB, with a mean of 4.2 dB (SD ± 3.6 dB). Residual hearing in the low frequencies (mean PTA(125-500 Hz)) was preserved within 30 dB HL in 70% and within 15 dB HL in 40% of patients who had preoperatively functional low-frequency hearing. The mean insertion depth angle (IDA) was 401° (SD ± 41°). We observed scalar translocation from the scala tympani to the scala vestibuli in 2 ears (9%). CONCLUSION The relatively atraumatic insertion characteristics make the SlimJ array feasible for hearing-preservation cochlear implantation. The hearing outcomes are comparable to those reported for other electrodes and devices.
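The low-frequency pure-tone average and the preservation criterion can be expressed directly; a sketch assuming the "preserved within 30 dB HL" criterion refers to the postoperative shift of the low-frequency PTA (the paper's exact operationalization may differ):

```python
def low_freq_pta(thresholds_db: dict, freqs=(125, 250, 500)) -> float:
    """Mean pure-tone threshold (dB HL) over 125-500 Hz.

    thresholds_db maps frequency in Hz to threshold in dB HL.
    """
    return sum(thresholds_db[f] for f in freqs) / len(freqs)

def hearing_preserved(pre: dict, post: dict, margin_db: float = 30.0) -> bool:
    """Residual low-frequency hearing counted as preserved if the
    postoperative PTA shifted by no more than margin_db (assumption)."""
    return low_freq_pta(post) - low_freq_pta(pre) <= margin_db
```

For example, preoperative thresholds of 20/25/30 dB HL give a PTA of 25 dB HL; a postoperative PTA of about 52 dB HL (a 27 dB shift) would still count as preserved within 30 dB, but not within 15 dB.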
Affiliation(s)
- Sini Sipari
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
- Department of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Matti Iso-Mustajärvi
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
- Department of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Pia Linder
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
- Department of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
- Aarno Dietz
- Department of Otorhinolaryngology, Kuopio University Hospital, Kuopio, Finland
- Department of Clinical Medicine, University of Eastern Finland, Kuopio, Finland
42
Wächtler M, Sandmann P, Meister H. The Right-Ear Advantage in Static and Dynamic Cocktail-Party Situations. Trends Hear 2024; 28:23312165231215916. [PMID: 38284359 PMCID: PMC10826403 DOI: 10.1177/23312165231215916]
Abstract
When presenting two competing speech stimuli, one to each ear, a right-ear advantage (REA) can often be observed, reflected in better speech recognition compared to the left ear. Considering the left-hemispheric dominance for language, the REA has been explained by superior contralateral pathways (structural models) and language-induced shifts of attention to the right (attentional models). There is some evidence that the REA becomes more pronounced, as cognitive load increases. Hence, it is interesting to investigate the REA in static (constant target talker) and dynamic (target changing pseudo-randomly) cocktail-party situations, as the latter is associated with a higher cognitive load than the former. Furthermore, previous research suggests an increasing REA, when listening becomes more perceptually challenging. The present study examined the REA by using virtual acoustics to simulate static and dynamic cocktail-party situations, with three spatially separated talkers uttering concurrent matrix sentences. Sentences were presented at low sound pressure levels or processed with a noise vocoder to increase perceptual load. Sixteen young normal-hearing adults participated in the study. The REA was assessed by means of word recognition scores and a detailed error analysis. Word recognition revealed a greater REA for the dynamic than for the static situations, compatible with the view that an increase in cognitive load results in a heightened REA. Also, the REA depended on the type of perceptual load, as indicated by a higher REA associated with vocoded compared to low-level stimuli. The results of the error analysis support both structural and attentional models of the REA.
Affiliation(s)
- Moritz Wächtler
- Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Pascale Sandmann
- Cluster of Excellence ‘Hearing4all’, University of Oldenburg, Oldenburg, Germany
- Hartmut Meister
- Faculty of Medicine and University Hospital Cologne, Department of Otorhinolaryngology, Head and Neck Surgery, University of Cologne, Cologne, Germany
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
43
Liu J, Stohl J, Lopez-Poveda EA, Overath T. Quantifying the Impact of Auditory Deafferentation on Speech Perception. Trends Hear 2024; 28:23312165241227818. [PMID: 38291713 PMCID: PMC10832414 DOI: 10.1177/23312165241227818]
Abstract
The past decade has seen a wealth of research dedicated to determining which and how morphological changes in the auditory periphery contribute to people experiencing hearing difficulties in noise despite having clinically normal audiometric thresholds in quiet. Evidence from animal studies suggests that cochlear synaptopathy in the inner ear might lead to auditory nerve deafferentation, resulting in impoverished signal transmission to the brain. Here, we quantify the likely perceptual consequences of auditory deafferentation in humans via a physiologically inspired encoding-decoding model. The encoding stage simulates the processing of an acoustic input stimulus (e.g., speech) at the auditory periphery, while the decoding stage is trained to optimally regenerate the input stimulus from the simulated auditory nerve firing data. This allowed us to quantify the effect of different degrees of auditory deafferentation by measuring the extent to which the decoded signal supported the identification of speech in quiet and in noise. In a series of experiments, speech perception thresholds in quiet and in noise increased (worsened) significantly as a function of the degree of auditory deafferentation for modeled deafferentation greater than 90%. Importantly, this effect was significantly stronger in a noisy than in a quiet background. The encoding-decoding model thus captured the hallmark symptom of degraded speech perception in noise together with normal speech perception in quiet. As such, the model might function as a quantitative guide to evaluating the degree of auditory deafferentation in human listeners.
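Within such an encoding-decoding framework, deafferentation can be modeled by silencing a fraction of the simulated auditory nerve fibers before decoding; an illustrative sketch under that assumption, not the authors' model code:

```python
import numpy as np

def deafferent(firing: np.ndarray, survival_fraction: float, rng=None) -> np.ndarray:
    """Simulate auditory-nerve deafferentation by silencing a random
    subset of fibers.

    firing: (n_fibers, n_time) simulated firing data from the encoding stage.
    survival_fraction: e.g., 0.1 models 90% deafferentation.
    The decoder would then regenerate the stimulus from surviving fibers only.
    """
    rng = np.random.default_rng(rng)
    n_fibers = firing.shape[0]
    n_keep = int(round(survival_fraction * n_fibers))
    keep = rng.choice(n_fibers, size=n_keep, replace=False)
    silenced = np.zeros_like(firing)
    silenced[keep] = firing[keep]  # only surviving fibers transmit
    return silenced
```

Sweeping survival_fraction downward and re-measuring decoded speech intelligibility in quiet versus in noise would reproduce the kind of threshold-versus-deafferentation curve the study reports.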
Affiliation(s)
- Jiayue Liu
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- Joshua Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, NC, USA
- Enrique A. Lopez-Poveda
- Instituto de Neurociencias de Castilla y Leon, University of Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, University of Salamanca, Salamanca, Spain
- Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
44
YUKSEL M, KAYA SN. Speech Perception as a Function of the Number of Channels and Channel Interaction in Cochlear Implant Simulation. Medeni Med J 2023; 38:276-283. [PMID: 38148725 PMCID: PMC10759942 DOI: 10.4274/mmj.galenos.2023.73454]
Abstract
Objective Speech perception relies on precise spectral and temporal cues. However, cochlear implant (CI) processing is confined to a limited frequency range, affecting the information transmitted to the auditory system. This study analyzes the influence of channel interaction and the number of channels on word recognition scores (WRS) within a CI simulation framework. Methods Two distinct experiments were conducted. The first experiment (n=29, average age 23 years, 14 females) evaluated the number of channels using 8-, 12-, 16-, and 22-channel vocoded and non-vocoded word lists for WRS assessment. The second experiment (n=29, average age 25 years, 16 females) explored channel interaction across low-, middle-, and high-interaction conditions. Results In the first experiment, participants scored 57.93%, 80.97%, 83.59%, 91.03%, and 95.45% under the 8-, 12-, 16-, and 22-channel vocoder and non-vocoder conditions, respectively. The number of vocoder channels significantly affected WRS, with significant differences observed between all conditions except the 12- and 16-channel conditions (p<0.01). In the second experiment, participants scored 2.2%, 20.6%, and 50.6% under the high-, mid-, and low-interaction conditions, respectively. Statistically significant differences were observed across all channel interaction conditions (p<0.01). Conclusions While the number of channels had a notable impact on WRS, certain conditions (12 vs. 16 channels) did not yield statistically significant differences, and the observed differences in WRS were eclipsed by the pronounced effects of channel interaction, for which all conditions differed significantly. These findings underscore the importance of prioritizing channel interaction in signal processing and CI fitting.
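Channel-number manipulations of this kind are typically implemented with a noise vocoder: the signal is split into bands, each band's temporal envelope modulates band-limited noise, and the channels are summed. A minimal FFT-based sketch (band edges, envelope smoothing, and filter shapes are assumptions, not the study's parameters):

```python
import numpy as np

def noise_vocode(signal: np.ndarray, fs: float, n_channels: int,
                 f_lo: float = 100.0, f_hi: float = 8000.0, rng=0) -> np.ndarray:
    """Minimal noise vocoder: log-spaced FFT bands, envelope extraction
    by rectification and smoothing, envelope-modulated band noise.

    Fewer channels -> coarser spectral detail, as in the first experiment.
    """
    rng = np.random.default_rng(rng)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec = np.fft.rfft(signal)
    out = np.zeros(len(signal))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(spec * band_mask, n=len(signal))
        # crude envelope: rectify, then smooth with a ~10 ms moving average
        win = max(1, int(0.01 * fs))
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        # band-limit a noise carrier to the same channel and modulate it
        noise = rng.standard_normal(len(signal))
        noise_band = np.fft.irfft(np.fft.rfft(noise) * band_mask, n=len(signal))
        out += env * noise_band
    return out
```

Channel interaction can then be simulated by widening or overlapping the analysis/carrier bands so that each channel's envelope leaks into its neighbors.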
Affiliation(s)
- Mustafa YUKSEL
- Ankara Medipol University Faculty of Health Sciences, Department of Audiology, Ankara, Turkey
- Sultan Nur KAYA
- Ankara Medipol University Faculty of Health Sciences, Department of Audiology, Ankara, Turkey
45
Lankinen K, Ahveninen J, Uluç I, Daneshzand M, Mareyam A, Kirsch JE, Polimeni JR, Healy BC, Tian Q, Khan S, Nummenmaa A, Wang QM, Green JR, Kimberley TJ, Li S. Role of articulatory motor networks in perceptual categorization of speech signals: a 7T fMRI study. Cereb Cortex 2023; 33:11517-11525. [PMID: 37851854 PMCID: PMC10724868 DOI: 10.1093/cercor/bhad384]
Abstract
Speech and language processing involve complex interactions between cortical areas necessary for articulatory movements and auditory perception, and a range of areas through which these are connected and interact. Despite their fundamental importance, the precise mechanisms underlying these processes are not fully elucidated. We measured BOLD signals from normal-hearing participants using high-field 7 Tesla fMRI with 1-mm isotropic voxel resolution. The subjects performed two speech perception tasks (discrimination and classification) and a speech production task during the scan. By employing univariate and multivariate pattern analyses, we identified the neural signatures associated with speech production and perception. The left precentral, premotor, and inferior frontal cortex regions showed significant activations that correlated with phoneme category variability during the perceptual discrimination tasks. In addition, the perceived sound categories could be decoded from signals in a region of interest defined based on activation in the production task. The results support the hypothesis that articulatory motor networks in the left hemisphere, typically associated with speech production, may also play a critical role in the perceptual categorization of syllables. The study provides valuable insights into the intricate neural mechanisms that underlie speech processing.
Affiliation(s)
- Kaisu Lankinen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Işıl Uluç: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Mohammad Daneshzand: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Azma Mareyam: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States
- John E Kirsch: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Jonathan R Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Brian C Healy: Partners Multiple Sclerosis Center, Brigham and Women's Hospital, Boston, MA 02115, United States; Department of Neurology, Harvard Medical School, Boston, MA 02115, United States; Biostatistics Center, Massachusetts General Hospital, Boston, MA 02114, United States
- Qiyuan Tian: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Sheraz Khan: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Aapo Nummenmaa: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
- Qing Mei Wang: Stroke Biological Recovery Laboratory, Spaulding Rehabilitation Hospital, The Teaching Affiliate of Harvard Medical School, Charlestown, MA 02129, United States
- Jordan R Green: Department of Communication Sciences and Disorders, MGH Institute of Health Professions, Boston, MA 02129, United States
- Teresa J Kimberley: Department of Physical Therapy, School of Health and Rehabilitation Sciences, MGH Institute of Health Professions, Boston, MA 02129, United States
- Shasha Li: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA 02129, United States; Harvard Medical School, Boston, MA 02115, United States
46
Fredriksson S, Li H, Söderberg M, Gyllensten K, Widén S, Persson Waye K. Occupational noise exposure, noise annoyance, hearing-related symptoms, and emotional exhaustion - a participatory-based intervention study in preschool and obstetrics care. Arch Environ Occup Health 2023; 78:423-434. [PMID: 38018749 DOI: 10.1080/19338244.2023.2283010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 01/24/2023] [Accepted: 11/08/2023] [Indexed: 11/30/2023]
Abstract
A participatory-based intervention was performed in Sweden, aimed at improving the sound environment in one preschool (n = 20) and one obstetric ward (n = 50), each with two controls (n = 28, n = 66). Measured sound levels and surveys of noise annoyance, hearing-related symptoms, and emotional exhaustion were collected before, and three and nine months after, the interventions, comparing intervention and control groups over time. The results of this first implementation in a limited number of workplaces showed significant worsening of hyperacusis, sound-induced auditory fatigue, and emotional exhaustion and increased sound levels in the preschool, as well as worsening of noise annoyance in both intervention groups. Increased risk awareness, limited implementation support, and lack of psychosocial interventions may explain the worsening in outcomes, as might the worse baseline in the intervention groups. The complexity of the demands in human-service workplaces calls for further intervention studies.
Affiliation(s)
- Sofie Fredriksson: School of Public Health and Community Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Region Västra Götaland, Habilitation & Health, Hearing Organization, Gothenburg, Sweden
- Huiqi Li: School of Public Health and Community Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Mia Söderberg: School of Public Health and Community Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
- Kristina Gyllensten: School of Public Health and Community Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden; Occupational and Environmental Medicine, Sahlgrenska University Hospital, Gothenburg, Sweden
- Stephen Widén: School of Health and Medical Sciences, Örebro University, Örebro, Sweden
- Kerstin Persson Waye: School of Public Health and Community Medicine, Institute of Medicine, Sahlgrenska Academy, University of Gothenburg, Gothenburg, Sweden
47
Häußler SM, Stankow E, Knopke S, Szczepek AJ, Olze H. Sustained Cognitive Improvement in Patients over 65 Two Years after Cochlear Implantation. Brain Sci 2023; 13:1673. [PMID: 38137121 PMCID: PMC10741742 DOI: 10.3390/brainsci13121673] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/31/2023] [Revised: 11/21/2023] [Accepted: 12/01/2023] [Indexed: 12/24/2023] Open
Abstract
This study aimed to evaluate the long-term benefits of cochlear implantation (CI) on cognitive performance, speech perception, and psychological status in post-lingually deafened patients older than 65 (n = 33). Patients were consecutively enrolled in this prospective study and assessed before, one year after, and two years after CI for speech perception, depressive symptoms, perceived stress, working memory, and processing speed. The Wechsler Adult Intelligence Scale (WAIS) was used for the latter two. Thirty-three patients (fourteen men and nineteen women) were included. The scores indicating "hearing in quiet" and "hearing with background noise" improved significantly one year after CI and remained so two years after CI. The sound localization scores improved two years after CI. The depressive symptoms and perceived stress scores were low at the study's onset and remained unchanged. Working memory improved significantly two years after CI, while processing speed improved significantly one year after CI and was maintained after that. The improvement in working memory and processing speed two years after CI suggests there is a sustained positive effect of auditory rehabilitation with CI on cognitive abilities.
Affiliation(s)
- Sophia Marie Häußler: Department of Otorhinolaryngology, Head and Neck Surgery, Charité–Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, 10117 Berlin, Germany; Department of Otorhinolaryngology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Elisabeth Stankow: Department of Otorhinolaryngology, Head and Neck Surgery, Charité–Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, 10117 Berlin, Germany
- Steffen Knopke: Department of Otorhinolaryngology, Head and Neck Surgery, Charité–Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, 10117 Berlin, Germany
- Agnieszka J. Szczepek: Department of Otorhinolaryngology, Head and Neck Surgery, Charité–Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, 10117 Berlin, Germany
- Heidi Olze: Department of Otorhinolaryngology, Head and Neck Surgery, Charité–Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Charitéplatz 1, 10117 Berlin, Germany
48
Lewis D, Al-Salim S, McDermott T, Dergan A, McCreery RW. Impact of room acoustics and visual cues on speech perception and talker localization by children with mild bilateral or unilateral hearing loss. Front Pediatr 2023; 11:1252452. [PMID: 38078311 PMCID: PMC10703386 DOI: 10.3389/fped.2023.1252452] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 07/03/2023] [Accepted: 10/30/2023] [Indexed: 02/12/2024] Open
Abstract
Introduction This study evaluated the ability of children (8-12 years) with mild bilateral or unilateral hearing loss (MBHL/UHL), listening unaided, or with normal hearing (NH) to locate and understand talkers in varying auditory/visual acoustic environments. Potential differences across hearing status were examined. Methods Participants heard sentences presented by female talkers from five surrounding locations in varying acoustic environments. A localization-only task included two conditions (auditory only, visually guided auditory) in three acoustic environments (favorable, typical, poor). Participants were asked to locate each talker. A speech perception task included four conditions (auditory only, visually guided auditory, audiovisual, and auditory only from 0° azimuth as baseline) in a single acoustic environment. Participants were asked to locate talkers, then repeat what was said. Results In the localization-only task, participants were better able to locate talkers, and looking times were shorter, with visual guidance to talker location. Correct looking was poorest and looking times longest in the poor acoustic environment. There were no significant effects of hearing status or age. In the speech perception task, performance was highest in the audiovisual condition and was better in the visually guided and auditory-only conditions than in the baseline condition. Although audiovisual performance was best overall, children with MBHL or UHL performed more poorly than peers with NH. Better-ear pure-tone averages for children with MBHL had a greater effect on keyword understanding than did poorer-ear pure-tone averages for children with UHL. Conclusion Although children could locate talkers more easily and quickly with visual information, finding locations alone did not improve speech perception. Best speech perception occurred in the audiovisual condition; however, poorer performance by children with MBHL or UHL suggests that being able to see talkers did not overcome reduced auditory access. Children with UHL exhibited better speech perception than children with MBHL, supporting the benefits of NH in at least one ear.
Affiliation(s)
- Dawna Lewis: Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE, United States; Auditory Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Sarah Al-Salim: Clinical Measurement Program, Boys Town National Research Hospital, Omaha, NE, United States
- Tessa McDermott: Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Andrew Dergan: Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Ryan W. McCreery: Auditory Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
49
Goh HL, Woon FT, Moisik SR, Styles SJ. Contrastive Alveolar/Retroflex Phonemes in Singapore Mandarin Bilinguals: Comprehension Rates for Articulations in Different Accents, and Acoustic Analysis of Productions. Lang Speech 2023:238309231205012. [PMID: 37947265 DOI: 10.1177/00238309231205012] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/12/2023]
Abstract
The standard Beijing variety of Mandarin has a clear alveolar-retroflex contrast for phonemes featuring voiceless sibilant frication (i.e., /s/, /ʂ/, /ts/, /tʂ/, /tsʰ/, /tʂʰ/). However, some studies show that varieties in the 'outer circle', such as that of Taiwan, have a reduced contrast for these speech sounds via a process known as 'deretroflexion'. The variety of Mandarin spoken in Singapore is also considered 'outer circle', as it exhibits influences from Min Nan varieties. We investigated how bilinguals of Singapore Mandarin and English perceive and produce speech tokens in minimal pairs differing only in the alveolar/retroflex place of articulation. In all, 50 participants took part in two tasks. In Task 1, participants performed a lexical identification task for minimal pairs differing only in the alveolar/retroflex place of articulation, as spoken by native speakers of two varieties: Beijing Mandarin and Singapore Mandarin. No difference in comprehension of the words was observed between the two varieties, indicating that both varieties contain sufficient acoustic information for discrimination. In Task 2, participants read aloud from the list of minimal pairs while their voices were recorded. Acoustic analysis revealed that the phonemes do indeed differ acoustically, in terms of the center of gravity of the frication and in an alternative measure, long-term averaged spectra. The magnitude of this difference appears to be smaller than previously reported differences for the Beijing variety. These findings show that although some deretroflexion is evident in the speech of bilinguals of the Singaporean variety of Mandarin, it does not translate to ambiguity in the speech signal.
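The center-of-gravity measure used in the abstract is, in essence, an amplitude-weighted mean frequency of the frication spectrum: /s/-like frication concentrates energy higher in the spectrum than /ʂ/-like frication, so its center of gravity comes out higher. A minimal sketch on synthetic band-limited noise (the band edges, sampling rate, and signals are illustrative assumptions, not values or data from the study):

```python
import numpy as np

fs = 16_000  # sampling rate in Hz (illustrative)
rng = np.random.default_rng(1)

def bandpass_noise(rng, fs, lo, hi, dur=1.0):
    """White noise restricted to [lo, hi] Hz by zeroing FFT bins."""
    n = int(fs * dur)
    spec = np.fft.rfft(rng.normal(0, 1, n))
    freqs = np.fft.rfftfreq(n, 1 / fs)
    spec[(freqs < lo) | (freqs > hi)] = 0
    return np.fft.irfft(spec, n)

def spectral_cog(x, fs):
    """Spectral center of gravity: amplitude-weighted mean frequency."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return float(np.sum(freqs * mag) / np.sum(mag))

# Alveolar /s/-like frication: energy at roughly 4-8 kHz.
# Retroflex /ʂ/-like frication: energy at roughly 2-5 kHz.
cog_alveolar = spectral_cog(bandpass_noise(rng, fs, 4000, 8000), fs)
cog_retroflex = spectral_cog(bandpass_noise(rng, fs, 2000, 5000), fs)
```

Deretroflexion would show up in this framework as the two centers of gravity drifting together across a speaker's productions.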
Affiliation(s)
- Hannah L Goh: Psychology, School of Social Sciences, Nanyang Technological University, Singapore; Interdisciplinary Graduate Programme, Nanyang Technological University, Singapore
- Fei Ting Woon: Psychology, School of Social Sciences, Nanyang Technological University, Singapore
- Scott R Moisik: Linguistics and Multilingual Studies, School of Humanities, Nanyang Technological University, Singapore
- Suzy J Styles: Psychology, School of Social Sciences, Nanyang Technological University, Singapore; Centre for Research and Development in Learning, Nanyang Technological University, Singapore
50
Eqlimi E, Bockstael A, Schönwiesner M, Talsma D, Botteldooren D. Time course of EEG complexity reflects attentional engagement during listening to speech in noise. Eur J Neurosci 2023; 58:4043-4069. [PMID: 37814423 DOI: 10.1111/ejn.16159] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/02/2023] [Revised: 08/31/2023] [Accepted: 09/13/2023] [Indexed: 10/11/2023]
Abstract
Auditory distractions are recognized to considerably challenge the quality of information encoding during speech comprehension. This study explores electroencephalography (EEG) microstate dynamics in ecologically valid, noisy settings, aiming to uncover how these auditory distractions influence the process of information encoding during speech comprehension. We examined three listening scenarios: (1) speech perception with background noise (LA), (2) focused attention on the background noise (BA), and (3) intentional disregard of the background noise (BUA). Our findings showed that microstate complexity and unpredictability increased when attention was directed towards speech compared with tasks without speech (LA > BA & BUA). Notably, the time elapsed between the recurrence of microstates increased significantly in LA compared with both BA and BUA. This suggests that coping with background noise during speech comprehension demands more sustained cognitive effort. Additionally, a two-stage time course was observed for both microstate complexity and the alpha-to-theta power ratio: a lower level in the early epochs that gradually increased and eventually reached a steady level in the later epochs. The findings suggest that the initial stage is primarily driven by sensory processes and information gathering, while the second stage involves higher-level cognitive engagement, including mnemonic binding and memory encoding.
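The alpha-to-theta power ratio tracked in the abstract can be computed per epoch from a periodogram: average the spectral power in the alpha band (roughly 8-12 Hz) and divide by the average power in the theta band (roughly 4-8 Hz). A minimal single-channel sketch on synthetic data (the sampling rate, epoch length, band edges, and signal are common conventions and invented values, not taken from the paper):

```python
import numpy as np

fs = 256  # Hz, a common EEG sampling rate (assumed)
rng = np.random.default_rng(2)
t = np.arange(4 * fs) / fs  # one 4-second epoch

# Synthetic single-channel epoch: a 10 Hz (alpha) component stronger than a
# 6 Hz (theta) component, plus broadband noise -- purely illustrative.
x = (2.0 * np.sin(2 * np.pi * 10 * t)
     + 1.0 * np.sin(2 * np.pi * 6 * t)
     + 0.5 * rng.normal(0, 1, t.size))

def band_power(x, fs, lo, hi):
    """Mean periodogram power within [lo, hi] Hz."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return float(psd[band].mean())

alpha_theta_ratio = band_power(x, fs, 8, 12) / band_power(x, fs, 4, 8)
```

Tracking this ratio epoch by epoch yields the kind of time course the study relates to the shift from sensory gathering to higher-level engagement.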
Affiliation(s)
- Ehsan Eqlimi: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Annelies Bockstael: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium
- Durk Talsma: Department of Experimental Psychology, Ghent University, Ghent, Belgium
- Dick Botteldooren: WAVES Research Group, Department of Information Technology, Ghent University, Ghent, Belgium