1. Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Synchronizing Automatic Gain Control in Bilateral Cochlear Implants Mitigates Dynamic Localization Deficits Introduced by Independent Bilateral Compression. Ear Hear 2024; 45:969-984. PMID: 38472134. DOI: 10.1097/aud.0000000000001492.
Abstract
OBJECTIVES The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. 
Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately what it was in the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent correct localization. CONCLUSIONS Synchronizing AGCs allowed listeners to mitigate the front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronizing AGCs did not overcome localization deficits that were also observed when AGCs were not engaged and that are therefore unrelated to AGC compression.
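The distinction between independent and synchronized AGCs can be illustrated with a minimal sketch. This is a generic broadband compressor with assumed threshold, ratio, and time constant, not the actual implant signal path used in the study; the point is only that when both ears apply the same gain, interaural level differences (ILDs) survive compression, whereas independent gains shrink them.

```python
import numpy as np

def envelope(x, fs, tau=0.005):
    # one-pole rectified envelope follower with time constant `tau`
    a = np.exp(-1.0 / (fs * tau))
    env = np.zeros_like(x)
    e = 0.0
    for i, v in enumerate(np.abs(x)):
        e = a * e + (1 - a) * v
        env[i] = e
    return env

def agc_gains(env, thresh=0.1, ratio=3.0):
    # above threshold, compress level changes by `ratio` (in dB)
    env_db = 20 * np.log10(np.maximum(env, 1e-9))
    th_db = 20 * np.log10(thresh)
    over = np.maximum(env_db - th_db, 0.0)
    gain_db = -over * (1 - 1 / ratio)
    return 10 ** (gain_db / 20)

def compress_pair(left, right, fs, synchronized=True):
    gl = agc_gains(envelope(left, fs))
    gr = agc_gains(envelope(right, fs))
    if synchronized:
        # both ears use the smaller gain, so ILDs are preserved
        g = np.minimum(gl, gr)
        return left * g, right * g
    # independent AGCs: the louder (near) ear is compressed more,
    # which distorts the ILD
    return left * gl, right * gr
```

With a stimulus that sits above threshold in one ear only, the independent configuration attenuates that ear alone and the ILD collapses; the synchronized configuration scales both ears identically.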
Affiliation(s)
- M Torben Pastore: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Kathryn R Pulling: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Chen Chen: Advanced Bionics, Valencia, California, USA
- William A Yost: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Michael F Dorman: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
2. Amini AE, Naples JG, Cortina L, Hwa T, Morcos M, Castellanos I, Moberly AC. A Scoping Review and Meta-Analysis of the Relations Between Cognition and Cochlear Implant Outcomes and the Effect of Quiet Versus Noise Testing Conditions. Ear Hear 2024:00003446-990000000-00304. PMID: 38953851. DOI: 10.1097/aud.0000000000001527.
Abstract
OBJECTIVES Evidence continues to emerge of associations between cochlear implant (CI) outcomes and cognitive functions in postlingually deafened adults. While multiple factors appear to affect these associations, the impact of speech recognition testing conditions (i.e., in quiet versus noise) has not been systematically explored. The two aims of this study were (1) to identify associations between speech recognition following cochlear implantation and performance on cognitive tasks, and (2) to investigate the impact of speech testing in quiet versus noise on these associations. Ultimately, we want to understand the conditions that affect this complex relationship between CI outcomes and cognition. DESIGN A scoping review following Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines was performed on published literature evaluating the relation between outcomes of cochlear implantation and cognition. The current review evaluates 39 papers that reported associations between over 30 cognitive assessments and speech recognition tests in adult patients with CIs. Six cognitive domains were evaluated: Global Cognition, Inhibition-Concentration, Memory and Learning, Controlled Fluency, Verbal Fluency, and Visuospatial Organization. Meta-analysis was conducted on three cognitive assessments across 12 studies to evaluate relations with speech recognition outcomes. Subgroup analyses were performed to identify whether speech recognition testing in quiet versus in background noise affected its association with cognitive performance. RESULTS Significant associations between cognition and speech recognition in a background of quiet or noise were found in 69% of studies. Tests of Global Cognition and Inhibition-Concentration skills yielded the highest overall frequency of significant associations with speech recognition (45% and 57%, respectively).
Despite the modest proportion of significant associations reported, pooling effect sizes across samples through meta-analysis revealed moderate positive correlations between postoperative speech recognition skills and tests of Global Cognition (r = +0.37, p < 0.01) and Verbal Fluency (r = +0.44, p < 0.01). Tests of Memory and Learning were the most frequently used in the setting of CIs (in 26 of 39 included studies), yet meta-analysis revealed nonsignificant associations with speech recognition performance in backgrounds of quiet (r = +0.30, p = 0.18) and noise (r = -0.06, p = 0.78). CONCLUSIONS Background conditions of speech recognition testing may influence the relation between speech recognition outcomes and cognition, and the magnitude of this effect appears to vary with the cognitive construct being assessed. Overall, Global Cognition and Inhibition-Concentration skills are potentially useful in explaining speech recognition skills following cochlear implantation. Future work should continue to evaluate these relations to appropriately unify cognitive testing opportunities in the setting of cochlear implantation.
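Pooling Pearson correlations across studies, as in the meta-analysis above, is commonly done by Fisher z-transforming each study's r, averaging with inverse-variance weights (n - 3 under a fixed-effect model), and back-transforming. The review's actual model and weighting are not specified here, so the following is an illustrative sketch of the standard fixed-effect procedure:

```python
import math

def pool_correlations(studies):
    """Fixed-effect pooling of Pearson r values via Fisher's z.

    `studies` is a list of (r, n) pairs, where n is the sample size.
    The weight n - 3 is the inverse of the sampling variance of z.
    """
    num = den = 0.0
    for r, n in studies:
        z = math.atanh(r)  # Fisher z-transform
        w = n - 3
        num += w * z
        den += w
    z_bar = num / den
    return math.tanh(z_bar)  # back-transform to the r scale
```

Because the averaging happens on the z scale, larger studies pull the pooled estimate toward their own r, which is why reporting per-study sample sizes matters for interpreting a pooled effect.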
Affiliation(s)
- Andrew E Amini: Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA (contributed equally to this work)
- James G Naples: Division of Otolaryngology-Head and Neck Surgery, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, Massachusetts, USA (contributed equally to this work)
- Luis Cortina: Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Tiffany Hwa: Division of Otology, Neurotology, & Lateral Skull Base Surgery, Department of Otolaryngology-Head and Neck Surgery, Hospital of the University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Mary Morcos: Department of Otolaryngology Head and Neck Surgery, Harvard Medical School, Boston, Massachusetts, USA
- Irina Castellanos: Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Aaron C Moberly: Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee, USA
3. Serrao DS, Theruvan N, Fathima H, Pitchaimuthu AN. Contribution of Temporal Fine Structure Cues to Concurrent Vowel Identification and Perception of Zebra Speech. Int Arch Otorhinolaryngol 2024; 28:e492-e501. PMID: 38974629. PMCID: PMC11226255. DOI: 10.1055/s-0044-1785456.
Abstract
Introduction The limited access to temporal fine structure (TFS) cues is one reason for reduced speech-in-noise recognition in cochlear implant (CI) users. CI signal processing schemes such as electroacoustic stimulation (EAS) and fine structure processing (FSP) encode TFS in the low-frequency bands, whereas theoretical strategies such as the frequency amplitude modulation encoder (FAME) encode TFS in all bands. Objective The present study compared the effects of simulated CI signal processing schemes that encode no TFS, TFS in all bands, or TFS only in low-frequency bands on concurrent vowel identification (CVI) and Zebra speech perception (ZSP). Methods TFS information was systematically manipulated using a 30-band sine-wave vocoder (SV). The TFS was either absent (SV), present in all bands as frequency modulations simulating the FAME algorithm, or present only in bands below 525 Hz to simulate EAS. CVI and ZSP were measured under each condition in 15 adults with normal hearing. Results The CVI scores did not differ between the three schemes (F(2, 28) = 0.62, p = 0.55, ηp² = 0.04). An effect of encoding TFS was observed for ZSP (F(2, 28) = 5.73, p = 0.008, ηp² = 0.29). Perception of Zebra speech was significantly better with EAS and FAME than with SV. There was no significant difference between the ZSP scores obtained with EAS and FAME (p = 1.00). Conclusion For ZSP, the TFS cues from FAME and EAS yielded equivalent improvements in performance relative to the SV scheme. The presence or absence of TFS did not affect the CVI scores.
Affiliation(s)
- Hasna Fathima: Department of Audiology and Speech-Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, Karnataka, India; Department of Audiology and Speech Language Pathology, National Institute of Speech and Hearing, Trivandrum, Kerala, India
- Arivudai Nambi Pitchaimuthu: Department of Audiology and Speech-Language Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, Karnataka, India; Department of Audiology, Centre for Hearing Science, All India Institute of Speech & Hearing, Mysuru, India
4. Xie Z, Gaskins CR, Tinnemore AR, Shader MJ, Gordon-Salant S, Anderson S, Goupell MJ. Spectral degradation and carrier sentences increase age-related temporal processing deficits in a cue-specific manner. J Acoust Soc Am 2024; 155:3983-3994. PMID: 38934563. PMCID: PMC11213620. DOI: 10.1121/10.0026434.
Abstract
Advancing age is associated with decreased sensitivity to temporal cues in word segments, particularly when target words follow non-informative carrier sentences or are spectrally degraded (e.g., vocoded to simulate cochlear-implant stimulation). This study investigated whether age, carrier sentences, and spectral degradation interacted to cause undue difficulty in processing speech temporal cues. Younger and older adults with normal hearing performed phonemic categorization tasks on two continua: a Buy/Pie contrast with voice onset time changes for the word-initial stop and a Dish/Ditch contrast with silent interval changes preceding the word-final fricative. Target words were presented in isolation or after non-informative carrier sentences, and were unprocessed or degraded via sinewave vocoding (2, 4, and 8 channels). Older listeners exhibited reduced sensitivity to both temporal cues compared to younger listeners. For the Buy/Pie contrast, age, carrier sentence, and spectral degradation interacted such that the largest age effects were seen for unprocessed words in the carrier sentence condition. This pattern differed from the Dish/Ditch contrast, where reducing spectral resolution exaggerated age effects, but introducing carrier sentences largely left the patterns unchanged. These results suggest that certain temporal cues are particularly susceptible to aging when placed in sentences, likely contributing to the difficulties of older cochlear-implant users in everyday environments.
Affiliation(s)
- Zilong Xie: School of Communication Science and Disorders, Florida State University, Tallahassee, Florida 32306, USA
- Casey R Gaskins: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Anna R Tinnemore: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
- Maureen J Shader: Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana 47907, USA
- Sandra Gordon-Salant: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
- Samira Anderson: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA; Neuroscience and Cognitive Science Program, University of Maryland, College Park, Maryland 20742, USA
5. Gaultier C, Goehring T. Recovering speech intelligibility with deep learning and multiple microphones in noisy-reverberant situations for people using cochlear implants. J Acoust Soc Am 2024; 155:3833-3847. PMID: 38884525. DOI: 10.1121/10.0026218.
Abstract
For cochlear implant (CI) listeners, holding a conversation in noisy and reverberant environments is often challenging. Deep-learning algorithms can potentially mitigate these difficulties by enhancing speech in everyday listening environments. This study compared several deep-learning algorithms, with access to one microphone, two unilateral microphones, or six bilateral microphones, that were trained to recover speech signals by jointly removing noise and reverberation. The noisy-reverberant speech and an ideal noise reduction algorithm served as lower and upper references, respectively. Objective signal metrics were compared with results from two listening tests: one with 15 typical-hearing listeners presented with CI simulations and one with 12 CI listeners. Large, statistically significant improvements in speech reception thresholds of 7.4 and 10.3 dB were found for the multi-microphone algorithms. For the single-microphone algorithm, there was an improvement of 2.3 dB, but only for the CI listener group. The objective signal metrics correctly predicted the rank order of results for CI listeners, and there was overall agreement for most effects and variances between results for CI simulations and CI listeners. These algorithms hold promise for improving speech intelligibility for CI listeners in environments with noise and reverberation, and they benefit from a boost in performance when using features extracted from multiple microphones.
Affiliation(s)
- Clément Gaultier: Cambridge Hearing Group, Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
- Tobias Goehring: Cambridge Hearing Group, Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, United Kingdom
6. Li S, Wang Y, Yu Q, Feng Y, Tang P. The Effect of Visual Articulatory Cues on the Identification of Mandarin Tones by Children With Cochlear Implants. J Speech Lang Hear Res 2024:1-9. PMID: 38768072. DOI: 10.1044/2024_jslhr-23-00559.
Abstract
PURPOSE This study explored the facilitatory effect of visual articulatory cues on the identification of Mandarin lexical tones by children with cochlear implants (CIs) in both quiet and noisy environments. It also explored whether early implantation is associated with better use of visual cues in tonal identification. METHOD Participants included 106 children with CIs and 100 normal-hearing (NH) controls. A tonal identification task was employed using a two-alternative forced-choice picture-pointing paradigm. Participants' tonal identification accuracies were compared between audio-only (AO) and audiovisual (AV) modalities. Correlations between implantation ages and visual benefits (accuracy differences between AO and AV modalities) were also examined. RESULTS Children with CIs showed improved identification accuracy from the AO to the AV modality in the noisy environment. Additionally, earlier implantation was significantly correlated with a greater visual benefit in noise. CONCLUSIONS These findings indicate that children with CIs benefit from visual cues for tonal identification in noise and that early implantation enhances this visual benefit. These results thus have practical implications for tonal perception interventions for Mandarin-speaking children with CIs.
Affiliation(s)
- Shanpeng Li: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
- Yinuo Wang: Department of English, Linguistics and Theatre Studies, Faculty of Arts & Social Sciences, National University of Singapore
- Qianxi Yu: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
- Yan Feng: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
- Ping Tang: MIIT Key Lab for Language Information Processing and Applications, School of Foreign Studies, Nanjing University of Science and Technology, China
7. Mallikarjun A, Shroads E, Newman RS. Perception of vocoded speech in domestic dogs. Anim Cogn 2024; 27:34. PMID: 38625429. PMCID: PMC11021312. DOI: 10.1007/s10071-024-01869-3.
Abstract
Humans have an impressive ability to comprehend signal-degraded speech; however, the extent to which comprehension of degraded speech relies on human-specific features of speech perception versus more general cognitive processes is unknown. Since dogs live alongside humans and regularly hear speech, they can be used as a model to differentiate between these possibilities. One often-studied type of degraded speech is noise-vocoded speech (sometimes thought of as cochlear-implant-simulation speech). Noise-vocoded speech is made by dividing the speech signal into frequency bands (channels), identifying the amplitude envelope of each individual band, and then using these envelopes to modulate bands of noise centered over the same frequency regions. The result is a signal with preserved temporal cues but vastly reduced frequency information. Here, we tested dogs' recognition of familiar words produced in 16-channel vocoded speech. In the first study, dogs heard their names and unfamiliar dogs' names (foils) in vocoded speech as well as natural speech. In the second study, dogs heard 16-channel vocoded speech only. Dogs listened longer to their vocoded name than to vocoded foils in both experiments, showing that they can comprehend a 16-channel vocoded version of their name without prior exposure to vocoded speech and without immediate exposure to the natural-speech version of their name. Dogs' name recognition in the second study was mediated by the number of phonemes in the dogs' names, suggesting that phonological context plays a role in degraded speech comprehension.
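The vocoding steps described above (band-split, envelope extraction, noise modulation, recombination) can be sketched in a few lines. The band edges, filter order, and envelope method below are assumptions for illustration, not the stimulus parameters used with the dogs:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=16, f_lo=100.0, f_hi=7000.0):
    """Noise-vocode signal `x`: split it into log-spaced frequency
    bands, extract each band's amplitude envelope, and use that
    envelope to modulate noise filtered into the same band."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    noise = rng.standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))        # amplitude envelope
        carrier = sosfiltfilt(sos, noise)  # noise in the same band
        out += env * carrier
    # match the overall RMS of the input
    return out * (np.sqrt(np.mean(x ** 2)) / np.sqrt(np.mean(out ** 2)))
```

The output preserves each channel's temporal envelope while replacing the fine structure with noise, which is why the number of channels controls how much spectral detail survives.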
Affiliation(s)
- Amritha Mallikarjun: Penn Vet Working Dog Center, University of Pennsylvania School of Veterinary Medicine, Philadelphia, USA
- Emily Shroads: Department of Hearing and Speech Sciences, University of Maryland, College Park, USA
- Rochelle S Newman: Department of Hearing and Speech Sciences, University of Maryland, College Park, USA
8. Yüksel MB, Atik AC, Külah H. Piezoelectric Multi-Channel Bilayer Transducer for Sensing and Filtering Ossicular Vibration. Adv Sci (Weinh) 2024; 11:e2308277. PMID: 38380504. DOI: 10.1002/advs.202308277.
Abstract
This paper presents an acoustic transducer for fully implantable cochlear implants (FICIs) that can be implanted on the hearing chain to detect and filter ambient sound in eight frequency bands between 250 and 6000 Hz. The transducer dimensions are compatible with conventional surgery. The structure is formed with a 3 × 3 × 0.36 mm active space for each layer and a 5.2 mg total active mass excluding packaging. Characterization of the transducer was carried out on an artificial membrane whose vibration characteristics are similar to umbo vibration. On the artificial membrane, the piezoelectric transducer generates up to 320.3 mVpp under 100 dB sound pressure level (SPL) excitation and covers the audible acoustic frequency range. The measured signal-to-noise ratio (SNR) of the channels is up to 84.2 dB. The sound quality of the transducer for fully implantable cochlear implant applications is graded with an objective speech quality metric (PESQ), to the best of our knowledge for the first time in the literature, scoring 3.42 out of 4.5.
Affiliation(s)
- Muhammed Berat Yüksel: Department of Electrical and Electronics Engineering, Middle East Technical University (METU), Universiteler Mah. Dumlipinar Blv. No:1, Ankara, 06800, Turkey; METU MEMS Center, Mustafa Kemal Mah, Dumlupınar Bulvarı No: 280, Ankara, 06350, Turkey
- Ali Can Atik: Department of Electrical and Electronics Engineering, Middle East Technical University (METU), Universiteler Mah. Dumlipinar Blv. No:1, Ankara, 06800, Turkey; METU MEMS Center, Mustafa Kemal Mah, Dumlupınar Bulvarı No: 280, Ankara, 06350, Turkey
- Haluk Külah: Department of Electrical and Electronics Engineering, Middle East Technical University (METU), Universiteler Mah. Dumlipinar Blv. No:1, Ankara, 06800, Turkey; METU MEMS Center, Mustafa Kemal Mah, Dumlupınar Bulvarı No: 280, Ankara, 06350, Turkey
9. Yüksel M, Çiprut A. Reduced Channel Interaction Improves Timbre Recognition Under Vocoder Simulation of Cochlear Implant Processing. Otol Neurotol 2024; 45:e297-e306. PMID: 38437807. DOI: 10.1097/mao.0000000000004151.
Abstract
OBJECTIVE This study aimed to investigate the influence of the number of channels and channel interaction on timbre perception in cochlear implant (CI) processing. By utilizing vocoder simulations of CI processing, the effects of different numbers of channels and of channel interaction were examined to assess their impact on timbre perception, an essential aspect of music and auditory performance. STUDY DESIGN, SETTING, AND PATIENTS Fourteen CI recipients, with at least 1 year of CI device use, and two groups (N = 16 and N = 19) of normal-hearing (NH) participants completed a timbre recognition (TR) task. Each NH group was tested on a different aspect of the study. The first group underwent testing with varying numbers of channels (8, 12, 16, and 20) to determine the number that most closely reflected the TR performance of CI recipients. Subsequently, the second group of NH participants participated in the assessment of channel interaction, utilizing the identified ideal number of 20 channels, with three conditions: low interaction (54 dB/octave), medium interaction (24 dB/octave), and high interaction (12 dB/octave). Statistical analyses, including repeated-measures analysis of variance and pairwise comparisons, were conducted to examine the effects. RESULTS The number of channels did not demonstrate a statistically significant effect on TR in NH participants (p > 0.05). However, the 20-channel condition most closely resembled the TR performance of CI recipients. In contrast, channel interaction exhibited a significant effect (p < 0.001) on TR. Both the low-interaction (54 dB/octave) and high-interaction (12 dB/octave) conditions differed significantly from the actual CI recipients' performance. CONCLUSION Timbre perception, a complex ability reliant on highly detailed spectral resolution, was not significantly influenced by the number of channels.
However, channel interaction emerged as a significant factor affecting timbre perception. The differences observed under different channel interaction conditions suggest potential mechanisms, including reduced spectro-temporal resolution and degraded spectral cues. These findings highlight the importance of considering channel interaction and optimizing CI processing strategies to enhance music perception and overall auditory performance for CI recipients.
Affiliation(s)
- Mustafa Yüksel: Department of Audiology, Ankara Medipol University Faculty of Health Sciences, Ankara, Turkey
- Ayça Çiprut: Department of Audiology, Marmara University Faculty of Medicine, Istanbul, Turkey
10. Cychosz M, Winn MB, Goupell MJ. How to vocode: Using channel vocoders for cochlear-implant research. J Acoust Soc Am 2024; 155:2407-2437. PMID: 38568143. PMCID: PMC10994674. DOI: 10.1121/10.0025274.
Abstract
The channel vocoder has become a useful tool for understanding the impact of specific forms of auditory degradation, particularly the spectral and temporal degradation that reflects cochlear-implant processing. Vocoders have many parameters that allow researchers to answer questions about cochlear-implant processing in ways that overcome some logistical complications of controlling for factors in individual cochlear implant users. However, there is such a large variety in the implementation of vocoders that the term "vocoder" is not specific enough to describe the signal processing used in these experiments. Misunderstanding vocoder parameters can result in experimental confounds or unexpected stimulus distortions. This paper highlights the signal processing parameters that should be specified when describing vocoder construction. The paper also provides guidance on how to determine vocoder parameters within perception experiments, given the experimenter's goals and research questions, to avoid common signal processing mistakes. Throughout, we assume that experimenters are interested in vocoders with the specific goal of better understanding cochlear implants.
Affiliation(s)
- Margaret Cychosz: Department of Linguistics, University of California, Los Angeles, Los Angeles, California 90095, USA
- Matthew B Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, College Park, Maryland 20742, USA
11. Redford MA. Speech perception as information processing. J Acoust Soc Am 2024; 155:R7-R8. PMID: 38558083. DOI: 10.1121/10.0025396.
Abstract
The Reflections series takes a look back on historical articles from The Journal of the Acoustical Society of America that have had a significant impact on the science and practice of acoustics.
Affiliation(s)
- Melissa A Redford: Department of Linguistics, University of Oregon, 1451 Onyx Street, Eugene, Oregon 97403-1290, USA
12. Guérit F, Middlebrooks JC, Gransier R, Richardson ML, Wouters J, Carlyon RP. Exploring the Use of Interleaved Stimuli to Measure Cochlear-Implant Excitation Patterns. J Assoc Res Otolaryngol 2024; 25:201-213. PMID: 38459245. PMCID: PMC11018570. DOI: 10.1007/s10162-024-00937-2.
Abstract
PURPOSE Attempts to use current-focussing strategies with cochlear implants (CI) to reduce neural spread-of-excitation have met with only mixed success in human studies, in contrast to promising results in animal studies. Although this discrepancy could stem from between-species anatomical and aetiological differences, the masking experiments used in human studies may be insufficiently sensitive to differences in excitation-pattern width. METHODS We used an interleaved-masking method to measure psychophysical excitation patterns in seven participants with four masker stimulation configurations: monopolar (MP), partial tripolar (pTP), a wider partial tripolar (pTP + 2), and, importantly, a condition (RP + 2) designed to produce a broader excitation pattern than MP. The probe was always in partial-tripolar configuration. RESULTS We found a significant effect of stimulation configuration on both the amount of on-site masking (mask and probe on same electrode; an indirect indicator of sharpness) and the difference between off-site and on-site masking. Differences were driven solely by RP + 2 producing a broader excitation pattern than the other configurations, whereas monopolar and the two current-focussing configurations did not statistically differ from each other. CONCLUSION A method that is sensitive enough to reveal a modest broadening in RP + 2 showed no evidence for sharpening with focussed stimulation. We also showed that although voltage recordings from the implant accurately predicted a broadening of the psychophysical excitation patterns with RP + 2, they wrongly predicted a strong sharpening with pTP + 2. We additionally argue, based on our recent research, that the interleaved-masking method can usefully be applied to non-human species and objective measures of CI excitation patterns.
Affiliation(s)
- François Guérit: Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, England
- John C Middlebrooks: Departments of Otolaryngology, Neurobiology and Behavior, and Biomedical Engineering, University of California at Irvine, Irvine, CA, USA
- Robin Gransier: Department of Neurosciences, ExpORL, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Matthew L Richardson: Department of Otolaryngology, University of California at Irvine, Irvine, CA, USA
- Jan Wouters: Department of Neurosciences, ExpORL, KU Leuven, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Robert P Carlyon: Cambridge Hearing Group, MRC Cognition & Brain Sciences Unit, University of Cambridge, Cambridge, England
13
Abari J, Tekin AM, Bahşi I, Topsakal V. More than 40 years of cochlear implant research: A bibliometric analysis. Cochlear Implants Int 2024:1-9. PMID: 38512716; DOI: 10.1080/14670100.2024.2330793.
Abstract
OBJECTIVES Cochlear implantation is the most effective treatment for patients with severe-to-profound sensorineural hearing loss. Much scientific work has been published since its inception. There is a need for a critical reflection on how and what we publish on cochlear implantation. METHODS All Science Citation Index Expanded articles published between 1980 and 2022 containing the terms 'cochlear implants' or 'cochlear implantation' were collected from the Web of Science database. Characteristics such as publication dates, journals, numbers of citations, countries of origin, authors, institutions, and co-occurring keywords were assessed. RESULTS 13,934 articles were included in the data analysis. Otology and Neurotology, Ear and Hearing, and Pediatric Otorhinolaryngology are the three journals that published the most articles. Hannover Medical School, the University of Melbourne, and the University of Northern Iowa are the three institutions that published the most articles. DISCUSSION The number of scientific publications on cochlear implant technology has increased over the last 40 years. Beyond the focus on speech perception, the research landscape on cochlear implantation is broad and diverse. The number of countries and institutions contributing to these publications is limited. CONCLUSION This bibliometric analysis serves as a quantitative overview of the research landscape on cochlear implantation.
Affiliation(s)
- Jaouad Abari: Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, University Hospital UZ Brussel, Brussels, Belgium
- Ahmet M Tekin: Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, University Hospital UZ Brussel, Brussels, Belgium
- Ilhan Bahşi: Department of Anatomy, Faculty of Medicine, Gaziantep University, Gaziantep, Turkey
- Vedat Topsakal: Department of Otolaryngology and Head & Neck Surgery, Vrije Universiteit Brussel, University Hospital UZ Brussel, Brussels, Belgium
14
Landsberger DM, Long CJ, Kirk JR, Stupak N, Roland JT. Effect of Return Electrode Placement at Apical Cochleostomy on Current Flow With a Cochlear Implant. Ear Hear 2024; 45:511-516. PMID: 38047764; DOI: 10.1097/aud.0000000000001439.
Abstract
OBJECTIVES A method for stimulating the cochlear apex using perimodiolar electrode arrays is described. This method involves implanting an electrode (ECE1) into the helicotrema in addition to standard cochlear implant placement. One objective was to verify a suitable approach for implanting ECE1 in the helicotrema. Another was to determine how placement of ECE1 reshapes electric fields. DESIGN Two cadaveric half-heads were implanted, and electric voltage tomography was measured with ECE1 placed in a range of positions. RESULTS An approach for placing ECE1 was identified. Changes in electric fields were observed only when ECE1 was placed into the fluid in the helicotrema. With ECE1 inside the helicotrema, electric voltage tomography modeling suggested increased current flow toward the apex. CONCLUSIONS Placement of ECE1 into the cochlear apex is clinically feasible and has the potential to reshape electric fields to stimulate regions of the cochlea more apical than those represented by the electrode array.
Affiliation(s)
- David M Landsberger: Department of Otolaryngology, New York University Grossman School of Medicine, New York, New York, USA
- Christopher J Long: Advanced Innovation, Research and Technology Labs, Cochlear Ltd., Lone Tree, Colorado, USA
- Jonathon R Kirk: Advanced Innovation, Research and Technology Labs, Cochlear Ltd., Lone Tree, Colorado, USA
- Natalia Stupak: Department of Otolaryngology, New York University Grossman School of Medicine, New York, New York, USA
- J Thomas Roland: Department of Otolaryngology, New York University Grossman School of Medicine, New York, New York, USA
15
Tamati TN, Jebens A, Başkent D. Lexical effects on talker discrimination in adult cochlear implant users. J Acoust Soc Am 2024; 155:1631-1640. PMID: 38426835; PMCID: PMC10908561; DOI: 10.1121/10.0025011.
Abstract
The lexical and phonological content of an utterance impacts the processing of talker-specific details in normal-hearing (NH) listeners. Adult cochlear implant (CI) users demonstrate difficulties in talker discrimination, particularly for same-gender talker pairs, which may alter their reliance on lexical information in talker discrimination. The current study examined the effect of lexical content on talker discrimination in 24 adult CI users. In a remote AX talker discrimination task, word pairs, produced either by the same talker (ST) or by different talkers of the same gender (DT-SG) or mixed genders (DT-MG), were either lexically easy (high frequency, low neighborhood density) or lexically hard (low frequency, high neighborhood density). The task was completed in quiet and in multi-talker babble (MTB). Results showed an effect of lexical difficulty on talker discrimination for same-gender talker pairs in both quiet and MTB. CI users showed greater sensitivity in quiet, as well as less response bias in both quiet and MTB, for lexically easy words compared to lexically hard words. These results suggest that CI users make use of lexical content in same-gender talker discrimination, providing evidence for the contribution of linguistic information to the processing of degraded talker information by adult CI users.
Affiliation(s)
- Terrin N Tamati: Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Almut Jebens: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
16
Collins A, Foghsgaard S, Druce E, Margani V, Mejia O, O'Leary S. The Effect of Electrode Position on Behavioral and Electrophysiologic Measurements in Perimodiolar Cochlear Implants. Otol Neurotol 2024; 45:238-244. PMID: 38238914; DOI: 10.1097/mao.0000000000004080.
Abstract
BACKGROUND The shape and position of cochlear implant electrodes could potentially influence speech perception, as they determine the proximity of implant electrodes to the spiral ganglion. However, the literature to date reveals no consistent association between speech perception and either the proximity of the electrode to the medial cochlear wall or the depth of insertion. These relationships were explored in a group of implant recipients receiving the same precurved electrode. METHODS This was a retrospective study of adults who underwent cochlear implantation with Cochlear Ltd.'s Slim Perimodiolar electrode at the Royal Victorian Eye and Ear Hospital between 2015 and 2018 (n = 52). Postoperative images were obtained using cone beam computed tomography (CBCT) and analyzed by multi-planar reconstruction to identify the position of the electrode contacts within the cochlea, including estimates of the proximity of the electrodes to the medial cochlear wall or modiolus and the angular depth of insertion. Consonant-vowel-consonant (CVC) monosyllabic phoneme scores were obtained preoperatively and at 3 and 12 months postoperatively. Electrically evoked compound action potential (ECAP) thresholds and impedance were measured from the implant array immediately after implantation. The relationships between electrode position and speech perception, electrode impedance, and ECAP threshold were analyzed by Pearson correlation. RESULTS Age had a negative impact on speech perception at 3 months but not at 12 months. None of the electrode-wide measures of proximity between electrode contacts and the modiolus, nor measures of proximity to the medial cochlear wall, nor the angular depth of insertion of the most apical electrode correlated with speech perception. However, there was a moderate correlation between speech perception and the position of the most basal electrode contacts; poorer speech perception was associated with a greater distance to the modiolus.
ECAP thresholds were inversely related to the distance between electrode contacts and the modiolus, but there was no clear association between this distance and impedance. CONCLUSIONS Speech perception was significantly affected by the proximity of the most basal electrodes to the modiolus, suggesting that positioning of these electrodes may be important for optimizing speech perception. ECAP thresholds might provide an indication of this proximity, allowing for its optimization during surgery.
Affiliation(s)
- Aaron Collins: Department of Otolaryngology, The University of Melbourne, Melbourne, Australia
- Søren Foghsgaard: Department of Otorhinolaryngology Head & Neck Surgery and Audiology, Rigshospitalet, University Hospital of Copenhagen, Copenhagen, Denmark
- Edgar Druce: Department of Otolaryngology, The University of Melbourne, Melbourne, Australia
- Valerio Margani: Department of Neuroscience, Mental Health, and Sense Organs (NEMOS), Sant'Andrea University Hospital, Sapienza University, Rome, Italy
- Olivia Mejia: sENTro Head and Neck Clinic, Manila, Philippines
17
Abramowitz JC, Goupell MJ, Milvae KD. Cochlear-Implant Simulated Signal Degradation Exacerbates Listening Effort in Older Listeners. Ear Hear 2024; 45:441-450. PMID: 37953469; PMCID: PMC10922081; DOI: 10.1097/aud.0000000000001440.
Abstract
OBJECTIVES Individuals with cochlear implants (CIs) often report that listening requires high levels of effort. Listening effort can increase with decreasing spectral resolution, which occurs when listening with a CI, and can also increase with age. What is not clear is whether these factors interact; older CI listeners potentially experience even higher listening effort with greater signal degradation than younger CI listeners. This study used pupillometry as a physiological index of listening effort to examine whether age, spectral resolution, and their interaction affect listening effort in a simulation of CI listening. DESIGN Fifteen younger normal-hearing listeners (ages 18 to 31 years) and 15 older normal-hearing listeners (ages 65 to 75 years) participated in this experiment; they had normal hearing thresholds from 0.25 to 4 kHz. Participants repeated sentences presented in quiet that were either unprocessed or vocoded, simulating CI listening. Stimuli frequency spectra were limited to below 4 kHz (to control for effects of age-related high-frequency hearing loss), and spectral resolution was decreased by decreasing the number of vocoder channels, with 32-, 16-, and 8-channel conditions. Behavioral speech recognition scores and pupil dilation were recorded during this task. In addition, cognitive measures of working memory and processing speed were obtained to examine if individual differences in these measures predicted changes in pupil dilation. RESULTS For trials where the sentence was recalled correctly, there was a significant interaction between age and spectral resolution, with significantly greater pupil dilation in the older normal-hearing listeners for the 8- and 32-channel vocoded conditions. Cognitive measures did not predict pupil dilation. 
CONCLUSIONS There was a significant interaction between age and spectral resolution, such that older listeners appear to exert relatively higher listening effort than younger listeners when the signal is highly degraded, with the largest effects observed in the 8-channel condition. The clinical implication is that older listeners may be at higher risk for increased listening effort with a CI.
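Pupillometric listening-effort measures of this kind are typically reduced to a baseline-corrected dilation per trial. A minimal sketch of that per-trial computation, not the study's actual analysis pipeline; the function name, baseline window, and sampling details are illustrative assumptions:

```python
import numpy as np

def peak_dilation(trace, fs, baseline_s=1.0):
    """Baseline-corrected peak pupil dilation for one trial.

    `trace` holds pupil size for a pre-stimulus baseline followed by the
    trial; the mean of the baseline window is subtracted and the maximum
    of the remaining (post-onset) samples is returned.
    """
    n_base = int(baseline_s * fs)            # samples in the baseline window
    baseline = float(np.mean(trace[:n_base]))
    return float(np.max(trace[n_base:] - baseline))
```

In practice, blink interpolation and per-condition averaging would precede a step like this.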
Affiliation(s)
- Jordan C. Abramowitz: Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- Matthew J. Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, MD 20742, USA
- Kristina DeRoy Milvae: Department of Communicative Disorders and Sciences, University at Buffalo, Buffalo, NY 14214, USA
18
Drouin JR, Flores S. Effects of training length on adaptation to noise-vocoded speech. J Acoust Soc Am 2024; 155:2114-2127. PMID: 38488452; DOI: 10.1121/10.0025273.
Abstract
Listeners show rapid perceptual learning of acoustically degraded speech, though the amount of exposure required to maximize speech adaptation is unspecified. The current work used a single-session design to examine the effect of auditory training length on perceptual learning for normal-hearing listeners exposed to eight-channel noise-vocoded speech. Participants completed short, medium, or long training using a two-alternative forced choice sentence identification task with feedback. To assess learning and generalization, a 40-trial pre-test and post-test transcription task was administered using trained and novel sentences. Training results showed all groups performed near ceiling with no reliable differences. For test data, we evaluated changes in transcription accuracy using separate linear mixed models for trained or novel sentences. In both models, we observed a significant improvement in transcription at post-test relative to pre-test. Critically, the three training groups did not differ in the magnitude of improvement following training. Subsequent Bayes factors analysis evaluating the test by group interaction provided strong evidence in support of the null hypothesis. For these stimuli and procedure, results suggest increased training does not necessarily maximize learning outcomes; both passive and trained experience likely supported adaptation. Findings may contribute to rehabilitation recommendations for listeners adapting to degraded speech signals.
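Noise vocoding of the kind used here splits speech into frequency bands, extracts each band's temporal envelope, and re-imposes it on bandlimited noise. A rough sketch under common assumptions (log-spaced bands, 4th-order Butterworth filters, Hilbert envelopes); the study's exact vocoder parameters may differ:

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=80.0, f_hi=7000.0):
    """Noise-vocode `signal`: for each of `n_channels` log-spaced bands,
    extract the band's Hilbert envelope and use it to modulate noise
    filtered into the same band; return the sum over bands."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced band edges
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env = np.abs(hilbert(band))                    # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier
    return out
```

Fewer channels mean coarser spectral resolution, which is the degradation dimension manipulated across vocoder studies like this one.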
Affiliation(s)
- Julia R Drouin: Division of Speech and Hearing Sciences, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Stephany Flores: Department of Communication Sciences and Disorders, California State University Fullerton, Fullerton, California 92831, USA
19
Kopsch AC, Rahne T, Plontke SK, Wagner L. Influence of the Spread of the Electric Field on Speech Recognition in Cochlear Implant Users. Otol Neurotol 2024; 45:e221-e227. PMID: 38238910; DOI: 10.1097/mao.0000000000004086.
Abstract
OBJECTIVE To investigate the correlation between word recognition with a cochlear implant (CI) and the spread of the electric field. STUDY DESIGN Prospective, noninterventional, experimental study. SETTING A tertiary referral center. PATIENTS Thirty-eight adult CI users with poor (n = 11), fair (n = 13), and good (n = 16) word recognition performance. MAIN OUTCOME MEASURE Transimpedances were measured at 37 μs. Word recognition scores were recorded at 65 dB SPL for German monosyllables in quiet. Transimpedance half widths were calculated as a marker of the spread of the electric field. RESULTS Narrow and broad spreads of the electric field, i.e., small and large half widths, were observed in all word recognition performance groups. Most of the transimpedance matrices showed a pattern of expansion along the diagonal toward the apical electrode contacts. Word recognition was not correlated with transimpedance half width. CONCLUSIONS The half width of the spread of the electric field showed no correlation with word recognition scores in our study population.
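A transimpedance half width is essentially the full width at half maximum of one row of the transimpedance matrix, expressed in electrode-contact units. A simplified sketch of that calculation, using linear interpolation between contacts; the authors' exact estimation procedure is an assumption here:

```python
import numpy as np

def halfwidth(profile):
    """Full width at half maximum of a transimpedance profile
    (one row of the transimpedance matrix), in contact units."""
    profile = np.asarray(profile, dtype=float)
    peak = int(profile.argmax())
    half = profile[peak] / 2.0

    def distance_to_crossing(indices):
        # Walk away from the peak until the profile drops below half-max,
        # interpolating linearly between the last two contacts.
        prev = peak
        for i in indices:
            if profile[i] < half:
                frac = (profile[prev] - half) / (profile[prev] - profile[i])
                return abs(prev - peak) + frac
            prev = i
        return float(abs(prev - peak))   # never crosses on this side

    left = distance_to_crossing(range(peak - 1, -1, -1))
    right = distance_to_crossing(range(peak + 1, len(profile)))
    return left + right
```

For a symmetric triangular profile peaking at 3, the half-max crossings sit 1.5 contacts on either side of the peak, giving a half width of 3 contacts.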
Affiliation(s)
- Anna C Kopsch: Department of Otorhinolaryngology, Head and Neck Surgery, Martin Luther University Halle-Wittenberg, University Medicine Halle, Halle (Saale), Germany
20
Quass GL, Kral A. Tripolar configuration and pulse shape in cochlear implants reduce channel interactions in the temporal domain. Hear Res 2024; 443:108953. PMID: 38277881; DOI: 10.1016/j.heares.2024.108953.
Abstract
The present study investigates the effects of current focusing and pulse shape on threshold, dynamic range, spread of excitation, and channel interaction in the time domain using cochlear implant stimulation. The study was performed on 20 adult guinea pigs using a 6-channel animal cochlear implant; recordings were made in the auditory midbrain using a multielectrode array. After determining the best frequencies for individual recording contacts with acoustic stimulation, the ear was deafened and a cochlear implant was inserted into the cochlea. The position of the implant was verified by x-ray. Stimulation with biphasic, pseudomonophasic, and monophasic stimuli was performed in monopolar, monopolar with common ground, bipolar, and tripolar configurations in two sets of experiments, allowing comparison of the effects of the different stimulation strategies on threshold, dynamic range, spread of excitation, and channel interaction. Channel interaction was studied in the temporal domain: two electrodes were activated with pulse trains, and phase locking to these pulse trains in the midbrain was quantified. The results documented multifactorial influences on the response properties, with significant interactions between factors. Thresholds increased with increasing current focusing but decreased with pseudomonophasic and monophasic pulse shapes. The results documented that current focusing, particularly the tripolar configuration, effectively reduces channel interaction, and that pseudomonophasic and monophasic stimulation and phase-duration intensity coding also reduce channel interactions.
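Phase locking to a pulse train, as quantified here in the midbrain recordings, is commonly summarized by vector strength (1 = every spike at the same phase of the pulse period, 0 = phases uniformly distributed). A minimal sketch; whether the authors used exactly this metric is an assumption on our part:

```python
import numpy as np

def vector_strength(spike_times, period):
    """Vector strength of spike-time locking to a periodic pulse train.

    Each spike time is mapped to a phase within the pulse period; the
    length of the mean resultant vector of those phases is returned.
    """
    phases = 2.0 * np.pi * (np.asarray(spike_times, dtype=float) % period) / period
    return float(np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases))))
```

Channel interaction in the temporal domain can then be assessed by how much locking to one electrode's pulse rate degrades when a second electrode is active.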
Affiliation(s)
- Gunnar L Quass: Institute for AudioNeuroTechnology (VIANNA) & Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4All" (EXC 2177), Germany
- Andrej Kral: Institute for AudioNeuroTechnology (VIANNA) & Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Cluster of Excellence "Hearing4All" (EXC 2177), Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, Australia
21
Mu H, Smith D, Ng SH, Anand V, Le NHA, Dharmavarapu R, Khajehsaeidimahabadi Z, Richardson RT, Ruther P, Stoddart PR, Gricius H, Baravykas T, Gailevičius D, Seniutinas G, Katkus T, Juodkazis S. Fraxicon for Optical Applications with Aperture ∼1 mm: Characterisation Study. Nanomaterials (Basel) 2024; 14:287. PMID: 38334558; PMCID: PMC10856946; DOI: 10.3390/nano14030287.
Abstract
Emerging applications of optical technologies are driving the development of miniaturised light sources, which in turn require the fabrication of matching micro-optical elements with sub-1 mm cross-sections and high optical quality. This is particularly challenging for spatially constrained biomedical applications where reduced dimensionality is required, such as endoscopy, optogenetics, or optical implants. The Fresnel-lens approach to planarising a lens was adapted to a conical lens (axicon), and the resulting element was made by direct femtosecond (780 nm/100 fs) laser writing in the SZ2080™ polymer with a photo-initiator. Optical characterisation of the positive and negative fraxicons is presented. Numerical modelling of fraxicon optical performance under illumination by incoherent and spatially extended light sources is compared with the ideal case of plane-wave illumination. Considering the potential for rapid replication in soft polymers and resists, this approach holds great promise for the most demanding technological applications.
Affiliation(s)
- Haoran Mu: Optical Sciences Centre, ARC Training Centre in Surface Engineering for Advanced Materials (SEAM), Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Daniel Smith: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Soon Hock Ng: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia; Melbourne Centre for Nanofabrication, Australian National Fabrication Facility, Clayton, VIC 3168, Australia
- Vijayakumar Anand: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia; Institute of Physics, University of Tartu, W. Ostwaldi 1, 50411 Tartu, Estonia
- Nguyen Hoai An Le: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Raghu Dharmavarapu: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Zahra Khajehsaeidimahabadi: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Rachael T. Richardson: Bionics Institute, East Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, Fitzroy, VIC 3065, Australia
- Patrick Ruther: Department of Microsystems Engineering (IMTEK), University of Freiburg, 79110 Freiburg im Breisgau, Germany; BrainLinks-BrainTools Center, University of Freiburg, 79110 Freiburg im Breisgau, Germany
- Paul R. Stoddart: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Henrikas Gricius: Laser Research Center, Physics Faculty, Vilnius University, Sauletekio Ave. 10, 10223 Vilnius, Lithuania
- Darius Gailevičius: Laser Research Center, Physics Faculty, Vilnius University, Sauletekio Ave. 10, 10223 Vilnius, Lithuania
- Gediminas Seniutinas: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia; Melbourne Centre for Nanofabrication, Australian National Fabrication Facility, Clayton, VIC 3168, Australia
- Tomas Katkus: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia
- Saulius Juodkazis: Optical Sciences Centre, SEAM, Swinburne University of Technology, Hawthorn, VIC 3122, Australia; Laser Research Center, Physics Faculty, Vilnius University, Sauletekio Ave. 10, 10223 Vilnius, Lithuania; WRH Program, International Research Frontiers Initiative (IRFI), Tokyo Institute of Technology, Nagatsuta-cho, Midori-ku, Yokohama 226-8503, Japan
22
Anderson SR, Burg E, Suveg L, Litovsky RY. Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants. Trends Hear 2024; 28:23312165241229880. PMID: 38545645; PMCID: PMC10976506; DOI: 10.1177/23312165241229880.
Abstract
Bilateral cochlear implants (BiCIs) result in several benefits, including improvements in speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across individuals. Here we consider one of the reasons for this variability: difference in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place, motivating future research in the area of interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left ear and right ear, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by cascading series of cortical circuits. An internal, perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.
Affiliation(s)
- Sean R. Anderson: Waisman Center, University of Wisconsin-Madison, Madison, WI, USA; Department of Physiology and Biophysics, University of Colorado Anschutz Medical School, Aurora, CO, USA
- Emily Burg: Waisman Center, University of Wisconsin-Madison, Madison, WI, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Lukas Suveg: Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Ruth Y. Litovsky: Waisman Center, Department of Communication Sciences and Disorders, and Department of Surgery, Division of Otolaryngology, University of Wisconsin-Madison, Madison, WI, USA
23
Merrill K, Muller L, Beim JA, Hehrmann P, Swan D, Alfsmann D, Spahr T, Litvak L, Oxenham AJ, Tward AD. CompHEAR: A Customizable and Scalable Web-Enabled Auditory Performance Evaluation Platform for Cochlear Implant Sound Processing Research. bioRxiv 2023:2023.12.22.573126. PMID: 38187767; PMCID: PMC10769353; DOI: 10.1101/2023.12.22.573126.
Abstract
Objective Cochlear implants (CIs) are auditory prostheses for individuals with severe to profound hearing loss, offering substantial but incomplete restoration of hearing function by stimulating the auditory nerve using electrodes. However, progress in CI performance and innovation has been constrained by the inability to rapidly test multiple sound processing strategies. Current research interfaces provided by major CI manufacturers have limitations in supporting a wide range of auditory experiments due to limited portability, programming difficulties, and the lack of direct comparison between sound processing algorithms. To address these limitations, we present the CompHEAR research platform, designed specifically for the Cochlear Implant Hackathon, enabling researchers to conduct diverse auditory experiments on a large scale. Study Design Quasi-experimental. Setting Virtual. Methods CompHEAR is an open-source, user-friendly platform which offers flexibility and ease of customization, allowing researchers to set up a broad set of auditory experiments. CompHEAR employs a vocoder to simulate novel sound coding strategies for CIs. It facilitates even distribution of listening tasks among participants and delivers real-time metrics for evaluation. The software architecture underlies the platform's flexibility in experimental design and its wide range of applications in sound processing research. Results Performance testing of the CompHEAR platform ensured that it could support at least 10,000 concurrent users. The CompHEAR platform was successfully implemented during the COVID-19 pandemic and enabled global collaboration for the CI Hackathon (www.cihackathon.com). Conclusion The CompHEAR platform is a useful research tool that permits comparing diverse signal processing strategies across a variety of auditory tasks with crowdsourced judging.
Its versatility, scalability, and ease of use can enable further research with the goal of promoting advancements in cochlear implant performance and improved patient outcomes.
Affiliation(s)
- Kris Merrill
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco
- Leah Muller
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco
- Jordan A Beim
- Department of Psychology, University of Minnesota, Minneapolis, MN
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN
- Aaron D Tward
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco

24
Levin M, Zaltz Y. Voice Discrimination in Quiet and in Background Noise by Simulated and Real Cochlear Implant Users. J Speech Lang Hear Res 2023; 66:5169-5186. [PMID: 37992412 DOI: 10.1044/2023_jslhr-23-00019]
Abstract
PURPOSE Cochlear implant (CI) users demonstrate poor voice discrimination (VD) in quiet conditions based on the speaker's fundamental frequency (fo) and formant frequencies (i.e., vocal-tract length [VTL]). Our purpose was to examine the effect of background noise at levels that allow good speech recognition thresholds (SRTs) on VD via acoustic CI simulations and CI hearing. METHOD Forty-eight normal-hearing (NH) listeners who listened via noise-excited (n = 20) or sinewave (n = 28) vocoders and 10 prelingually deaf CI users (i.e., whose hearing loss began before language acquisition) participated in the study. First, the signal-to-noise ratio (SNR) that yields 70.7% correct SRT was assessed using an adaptive sentence-in-noise test. Next, the CI simulation listeners performed 12 adaptive VDs: six in quiet conditions, two with each cue (fo, VTL, fo + VTL), and six amid speech-shaped noise. The CI participants performed six VDs: one with each cue, in quiet and amid noise. SNR at VD testing was 5 dB higher than the individual's SRT in noise (SRTn +5 dB). RESULTS Results showed the following: (a) Better VD was achieved via the noise-excited than the sinewave vocoder, with the noise-excited vocoder better mimicking CI VD; (b) background noise had a limited negative effect on VD, only for the CI simulation listeners; and (c) there was a significant association between SNR at testing and VTL VD only for the CI simulation listeners. CONCLUSIONS For NH listeners who listen to CI simulations, noise that allows good SRT can nevertheless impede VD, probably because VD depends more on bottom-up sensory processing. Conversely, for prelingually deaf CI users, noise that allows good SRT hardly affects VD, suggesting that they rely strongly on bottom-up processing for both VD and speech recognition.
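The 70.7%-correct SRT used here is the convergence point of a two-down/one-up adaptive rule (Levitt's transformed up-down method). The following is an illustrative sketch of how such a track converges, not the authors' implementation; `run_trial` is a hypothetical oracle returning whether the listener responded correctly at a given SNR:

```python
def two_down_one_up(run_trial, start_snr=10.0, step=2.0, n_reversals=8):
    """Adaptive SNR track converging on 70.7% correct (2-down/1-up rule).

    run_trial(snr) -> bool is a hypothetical listener oracle (an assumption
    for illustration). The SRT is estimated as the mean SNR at reversals.
    """
    snr, correct_streak, direction = start_snr, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if run_trial(snr):
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row -> harder
                correct_streak = 0
                if direction == +1:
                    reversals.append(snr)    # direction change = reversal
                direction = -1
                snr -= step
        else:                                # one error -> easier
            correct_streak = 0
            if direction == -1:
                reversals.append(snr)
            direction = +1
            snr += step
    return sum(reversals) / len(reversals)   # SRT estimate in dB SNR
```

With a deterministic oracle whose true threshold is 0 dB SNR, the track oscillates around it and the reversal mean lands just above threshold.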
Affiliation(s)
- Michal Levin
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Faculty of Medicine, Tel Aviv University, Israel
- Yael Zaltz
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Faculty of Medicine, Tel Aviv University, Israel
- Sagol School of Neuroscience, Tel Aviv University, Israel

25
Azees AA, Thompson AC, Thomas R, Zhou J, Ruther P, Wise AK, Ajay EA, Garrett DJ, Quigley A, Fallon JB, Richardson RT. Spread of activation and interaction between channels with multi-channel optogenetic stimulation in the mouse cochlea. Hear Res 2023; 440:108911. [PMID: 37977051 DOI: 10.1016/j.heares.2023.108911]
Abstract
For individuals with severe to profound hearing loss resulting from irreversibly damaged hair cells, cochlear implants can be used to restore hearing by delivering electrical stimulation directly to the spiral ganglion neurons. However, current spread lowers the spatial resolution of neural activation. Since light can be easily confined, optogenetics is a technique that has the potential to improve the precision of neural activation, whereby visible light is used to stimulate neurons that are modified with light-sensitive opsins. This study compares the spread of neural activity across the inferior colliculus of the auditory midbrain during electrical and optical stimulation in the cochlea of acutely deafened mice with spiral ganglion neurons expressing the H134R variant of channelrhodopsin-2. Monopolar electrical stimulation was delivered via each of four 0.2 mm wide platinum electrode rings at 0.6 mm centre-to-centre spacing, whereas 453 nm wavelength light was delivered via each of five 0.22 × 0.27 mm micro-light-emitting diodes (LEDs) at 0.52 mm centre-to-centre spacing. Channel interactions were also quantified by threshold changes during simultaneous stimulation by pairs of electrodes or micro-LEDs at different distances between the electrodes (0.6, 1.2 and 1.8 mm) or micro-LEDs (0.52, 1.04, 1.56 and 2.08 mm). The spread of activation resulting from single-channel optical stimulation was approximately half that of monopolar electrical stimulation as measured at two levels of discrimination above threshold (p<0.001), whereas there was no significant difference between optical stimulation in opsin-modified deafened mice and pure tone acoustic stimulation in normal-hearing mice. During simultaneous micro-LED stimulation, there were minimal channel interactions for all micro-LED spacings tested. For neighbouring micro-LEDs/electrodes, the relative influence on threshold was 13-fold less for optical stimulation compared with electrical stimulation (p<0.05).
The outcomes of this study show that the higher spatial precision of optogenetic stimulation results in reduced channel interaction compared to electrical stimulation, which could increase the number of independent channels in a cochlear implant. Increased spatial resolution and the ability to activate more than one channel simultaneously could lead to better speech perception in cochlear implant recipients.
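Channel interaction of the kind quantified above is often summarized as the threshold shift, in dB, produced when a neighbouring channel is driven simultaneously. A minimal sketch of that convention follows; this is an illustrative metric, not necessarily the paper's exact analysis:

```python
import math

def interaction_db(threshold_single, threshold_paired):
    """Channel interaction as the threshold shift (dB) under simultaneous
    stimulation of a neighbouring channel.

    Negative values mean the neighbour lowered the threshold (summation,
    i.e. strong interaction); values near 0 dB mean the channels behave
    independently. Illustrative convention only; the study's own metric
    may differ.
    """
    return 20.0 * math.log10(threshold_paired / threshold_single)
```

For example, a paired threshold at half the single-channel threshold corresponds to about -6 dB of interaction, while identical thresholds give 0 dB.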
Affiliation(s)
- Ajmal A Azees
- The Bionics Institute, East Melbourne, VIC 3002, Australia; Department of Electrical and Biomedical Engineering, RMIT University, Melbourne, VIC 3000, Australia
- Alex C Thompson
- The Bionics Institute, East Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, East Melbourne, VIC, Australia
- Ross Thomas
- The Bionics Institute, East Melbourne, VIC 3002, Australia
- Jenny Zhou
- The Bionics Institute, East Melbourne, VIC 3002, Australia
- Patrick Ruther
- Department of Microsystems Engineering (IMTEK), University of Freiburg, Freiburg 79110, Germany; BrainLinks-BrainTools Center, University of Freiburg, Freiburg 79110, Germany
- Andrew K Wise
- The Bionics Institute, East Melbourne, VIC 3002, Australia; Department of Surgery (Otolaryngology), University of Melbourne, Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, East Melbourne, VIC, Australia
- Elise A Ajay
- The Bionics Institute, East Melbourne, VIC 3002, Australia; Faculty of Engineering and Information Technology, University of Melbourne, Melbourne, VIC, Australia
- David J Garrett
- Department of Electrical and Biomedical Engineering, RMIT University, Melbourne, VIC 3000, Australia
- Anita Quigley
- Department of Electrical and Biomedical Engineering, RMIT University, Melbourne, VIC 3000, Australia; Department of Medicine, University of Melbourne, St Vincent's Hospital, Melbourne, VIC 3065, Australia; The Aikenhead Centre for Medical Discovery, St Vincent's Hospital, Melbourne, VIC 3065, Australia
- James B Fallon
- The Bionics Institute, East Melbourne, VIC 3002, Australia; Department of Surgery (Otolaryngology), University of Melbourne, Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, East Melbourne, VIC, Australia
- Rachael T Richardson
- The Bionics Institute, East Melbourne, VIC 3002, Australia; Department of Surgery (Otolaryngology), University of Melbourne, Melbourne, VIC 3002, Australia; Medical Bionics Department, University of Melbourne, East Melbourne, VIC, Australia.

26
Nix EP, Thompson NJ, Brown KD, Dedmon MM, Selleck AM, Overton AB, Canfarotta MW, Dillon MT. Incidence of Cochlear Implant Electrode Contacts in the Functional Acoustic Hearing Region and the Influence on Speech Recognition with Electric-Acoustic Stimulation. Otol Neurotol 2023; 44:1004-1010. [PMID: 37758328 PMCID: PMC10840620 DOI: 10.1097/mao.0000000000004021]
Abstract
OBJECTIVES To investigate the incidence of electrode contacts within the functional acoustic hearing region in cochlear implant (CI) recipients and to assess its influence on speech recognition for electric-acoustic stimulation (EAS) users. STUDY DESIGN Retrospective review. SETTING Tertiary referral center. PATIENTS One hundred five CI recipients with functional acoustic hearing preservation (≤80 dB HL at 250 Hz). INTERVENTIONS Cochlear implantation with a 24-, 28-, or 31.5-mm lateral wall electrode array. MAIN OUTCOME MEASURES Angular insertion depth (AID) of individual contacts was determined from imaging. Unaided acoustic thresholds and AID were used to calculate the proximity of contacts to the functional acoustic hearing region. The association between proximity values and speech recognition in quiet and noise for EAS users at 6 months postactivation was reviewed. RESULTS Sixty percent of cases had one or more contacts within the functional acoustic hearing region. Proximity was not significantly associated with speech recognition in quiet. Better performance in noise was observed for cases with close correspondence between the most apical contact and the upper edge of residual hearing, with poorer results for increasing proximity values in either the basal or apical direction (r(14) = 0.48, p = 0.043; r(18) = -0.41, p = 0.045, respectively). CONCLUSION There was a high incidence of electrode contacts within the functional acoustic hearing region, which is not accounted for by default mapping procedures. The variability in outcomes across EAS users with default maps may be due in part to electric-on-acoustic interference, electric frequency-to-place mismatch, and/or failure to stimulate regions intermediate between the most apical electrode contact and the functional acoustic hearing region.
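Relating angular insertion depth to cochlear place frequency, as in analyses of frequency-to-place mismatch, commonly involves Greenwood's place-frequency function for the human cochlea. The sketch below uses the standard Greenwood constants plus a deliberately crude linear angle-to-place conversion; the linear conversion and the 900° total angular length are assumptions for illustration (published studies typically use the Stakhovskaya et al. spiral ganglion map instead):

```python
def greenwood_hz(x):
    """Greenwood (1990) place-frequency map for the human cochlea.

    x is the proportional distance along the organ of Corti from the apex
    (0.0) to the base (1.0); returns characteristic frequency in Hz.
    Constants A = 165.4, a = 2.1, k = 0.88 are the standard human fit.
    """
    if not 0.0 <= x <= 1.0:
        raise ValueError("x must lie in [0, 1]")
    return 165.4 * (10.0 ** (2.1 * x) - 0.88)

def contact_frequency_hz(aid_deg, total_deg=900.0):
    """Very rough frequency estimate for an electrode contact.

    aid_deg: angular insertion depth of the contact from the round window.
    total_deg: assumed full angular length of the cochlea (hypothetical
    round figure). The linear angle-to-place conversion here is a
    simplification, not the paper's method.
    """
    x_from_apex = 1.0 - min(aid_deg / total_deg, 1.0)
    return greenwood_hz(x_from_apex)
```

Deeper insertion maps to a more apical place and hence a lower characteristic frequency, which is the quantity compared against the audiometric edge of residual hearing.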
Affiliation(s)
- Evan P Nix
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC
- Nicholas J Thompson
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC
- Kevin D Brown
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC
- Matthew M Dedmon
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC
- A Morgan Selleck
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC
- Michael W Canfarotta
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC
- Margaret T Dillon
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC

27
Hansen TA, O’Leary RM, Svirsky MA, Wingfield A. Self-pacing ameliorates recall deficit when listening to vocoded discourse: a cochlear implant simulation. Front Psychol 2023; 14:1225752. [PMID: 38054180 PMCID: PMC10694252 DOI: 10.3389/fpsyg.2023.1225752]
Abstract
Introduction In spite of its apparent ease, comprehension of spoken discourse represents a complex linguistic and cognitive operation. The difficulty of this operation can increase when the speech is degraded, as is the case for cochlear implant users. However, the additional challenges imposed by degraded speech may be mitigated to some extent by linguistic context and pace of presentation. Methods An experiment is reported in which young adults with age-normal hearing recalled discourse passages heard as clear speech or with noise-band vocoding used to simulate the sound of speech produced by a cochlear implant. Passages varied in inter-word predictability and were presented either without interruption or in a self-pacing format that allowed the listener to control the rate at which the information was delivered. Results Discourse heard as clear speech was recalled better than vocoded speech, discourse with higher average inter-word predictability was recalled better than discourse with lower average inter-word predictability, and self-paced passages were recalled better than those heard without interruption. Of special interest was the semantic hierarchy effect: the tendency for listeners to recall a passage's main ideas better than its mid-level information or details, taken as an index of listeners' ability to understand the passage's meaning. The data revealed a significant effect of inter-word predictability, in that passages with lower predictability showed an attenuated semantic hierarchy effect relative to higher-predictability passages. Discussion Results are discussed in terms of broadening cochlear implant outcome measures beyond current clinical measures that focus on single-word and sentence repetition.
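Noise-band vocoding of the kind used here band-passes speech into a small number of channels, extracts each channel's temporal envelope, and re-imposes it on band-limited noise. The sketch below shows the standard pipeline; the channel count, filter orders, envelope cutoff, and band edges are illustrative assumptions, not the study's settings:

```python
import numpy as np
from scipy.signal import butter, sosfilt, sosfiltfilt

def greenwood_edges(n_channels, lo=200.0, hi=7000.0):
    """Channel corner frequencies spaced evenly on Greenwood's place map."""
    place = lambda f: np.log10(f / 165.4 + 0.88) / 2.1   # invert Greenwood
    xs = np.linspace(place(lo), place(hi), n_channels + 1)
    return 165.4 * (10.0 ** (2.1 * xs) - 0.88)

def noise_vocoder(signal, fs, n_channels=8, env_cutoff=300.0):
    """Noise-band vocoder as used to simulate CI hearing (sketch).

    For each band: band-pass the input, extract the envelope by
    rectification + low-pass filtering, and use it to modulate noise
    filtered into the same band. The modulated bands are then summed.
    """
    rng = np.random.default_rng(0)
    edges = greenwood_edges(n_channels)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, signal)
        env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
        env = sosfiltfilt(env_sos, np.abs(band))     # temporal envelope
        carrier = sosfilt(sos, rng.standard_normal(len(signal)))
        rms = np.sqrt(np.mean(carrier ** 2)) or 1.0  # normalize carrier
        out += np.clip(env, 0.0, None) * carrier / rms
    return out
```

Fewer channels or broader filters degrade spectral detail further, which is what makes vocoded speech a useful stand-in for CI processing in normal-hearing listeners.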
Affiliation(s)
- Thomas A. Hansen
- Department of Psychology, Brandeis University, Waltham, MA, United States
- Ryan M. O’Leary
- Department of Psychology, Brandeis University, Waltham, MA, United States
- Mario A. Svirsky
- Department of Otolaryngology, NYU Langone Medical Center, New York, NY, United States
- Arthur Wingfield
- Department of Psychology, Brandeis University, Waltham, MA, United States

28
Quimby AE, Wei K, Adewole D, Eliades S, Cullen DK, Brant JA. Signal processing and stimulation potential within the ascending auditory pathway: a review. Front Neurosci 2023; 17:1277627. [PMID: 38027521 PMCID: PMC10658786 DOI: 10.3389/fnins.2023.1277627]
Abstract
The human auditory system encodes sound with a high degree of temporal and spectral resolution. When hearing fails, existing neuroprosthetics such as cochlear implants may partially restore hearing through stimulation of auditory neurons at the level of the cochlea, though not without limitations inherent to electrical stimulation. Novel approaches to hearing restoration, such as optogenetics, offer the potential of improved performance. We review signal processing in the ascending auditory pathway and the current state of conventional and emerging neural stimulation strategies at various levels of the auditory system.
Affiliation(s)
- Alexandra E. Quimby
- Department of Otolaryngology and Communication Sciences, SUNY Upstate Medical University, Syracuse, NY, United States
- Kimberly Wei
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Dayo Adewole
- Corporal Michael J. Crescenz Veterans Affairs Medical Center, Philadelphia, PA, United States
- Steven Eliades
- Department of Head and Neck Surgery and Communication Sciences, Duke University, Durham, NC, United States
- D. Kacy Cullen
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Corporal Michael J. Crescenz Veterans Affairs Medical Center, Philadelphia, PA, United States
- Jason A. Brant
- Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, United States
- Department of Otorhinolaryngology – Head and Neck Surgery, University of Pennsylvania, Philadelphia, PA, United States

29
Cychosz M, Xu K, Fu QJ. Effects of spectral smearing on speech understanding and masking release in simulated bilateral cochlear implants. PLoS One 2023; 18:e0287728. [PMID: 37917727 PMCID: PMC10621938 DOI: 10.1371/journal.pone.0287728]
Abstract
Differences in spectro-temporal degradation may explain some of the variability in cochlear implant users' speech outcomes. The present study employs vocoder simulations in listeners with typical hearing to evaluate how differences in the degree of channel interaction across ears affect spatial speech recognition. Speech recognition thresholds and spatial release from masking were measured in 16 normal-hearing subjects listening to simulated bilateral cochlear implants. Sixteen-channel sine-vocoded speech simulated limited, broad, or mixed channel interaction across ears, in dichotic and diotic target-masker conditions. Thresholds were highest with broad channel interaction in both ears and improved when interaction decreased in one ear, and again when it decreased in both ears. Masking release was apparent across conditions. Results from this simulation study in listeners with typical hearing show that channel interaction may affect speech recognition more than masking release, and may have implications for the effects of channel interaction on cochlear implant users' speech recognition outcomes.
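Channel interaction in vocoder simulations is typically manipulated through the analysis-filter slopes; an equivalent way to picture it is each channel's envelope leaking into its neighbours with a fixed dB-per-channel attenuation. The following toy sketch uses that mixing-matrix view; it is an illustration of the concept, not the study's actual stimulus manipulation:

```python
import numpy as np

def smear_envelopes(envelopes, slope_db_per_channel=24.0):
    """Mix each channel's envelope into its neighbours to mimic broad
    channel interaction (current spread).

    envelopes: array of shape (n_channels, n_samples).
    A shallower slope (fewer dB per channel of separation) means broader
    interaction between channels. Rows of the mixing matrix are
    normalized so overall level is roughly preserved.
    """
    n = envelopes.shape[0]
    idx = np.arange(n)
    # attenuation grows linearly in dB with channel distance
    atten_db = slope_db_per_channel * np.abs(idx[:, None] - idx[None, :])
    weights = 10.0 ** (-atten_db / 20.0)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ envelopes
```

With a steep slope (e.g. 48 dB/channel) the matrix is nearly diagonal and channels stay independent; with a shallow slope (e.g. 6 dB/channel) energy from one channel appears strongly in its neighbours, which is the "broad interaction" condition.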
Affiliation(s)
- Margaret Cychosz
- Department of Linguistics, University of California, Los Angeles, Los Angeles, CA, United States of America
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America

30
Hovsepyan S, Olasagasti I, Giraud AL. Rhythmic modulation of prediction errors: A top-down gating role for the beta-range in speech processing. PLoS Comput Biol 2023; 19:e1011595. [PMID: 37934766 PMCID: PMC10655987 DOI: 10.1371/journal.pcbi.1011595]
Abstract
Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-β) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.
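The core mechanism described above, alternating bottom-up and top-down processing regimes by oscillating prediction-error precision at a beta rate, can be caricatured as two antiphase gain signals. This is a toy illustration of the gating idea only, not the Precoss-β model:

```python
import numpy as np

def precision_gates(duration_s=0.2, fs=1000, beta_hz=25.0):
    """Antiphase sinusoidal gains gating bottom-up (sensory prediction
    error) versus top-down (prior) information flow at a beta rate.

    Returns (t, bottom_up_gain, top_down_gain); both gains lie in [0, 1]
    and sum to 1, so at any instant one regime dominates. The 25 Hz
    default sits in the 20-30 Hz band the model finds optimal.
    """
    t = np.arange(int(duration_s * fs)) / fs
    bottom_up = 0.5 * (1.0 + np.sin(2.0 * np.pi * beta_hz * t))
    top_down = 1.0 - bottom_up           # strictly antiphase
    return t, bottom_up, top_down
```

In the actual model the oscillation multiplies prediction-error precisions within a hierarchical inference scheme; the point of the sketch is only that a ~25 Hz alternation yields several complete bottom-up/top-down cycles per theta-scale syllable.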
Affiliation(s)
- Sevada Hovsepyan
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Itsaso Olasagasti
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, France

31
Chiossi JSC, Patou F, Ng EHN, Faulkner KF, Lyxell B. Phonological discrimination and contrast detection in pupillometry. Front Psychol 2023; 14:1232262. [PMID: 38023001 PMCID: PMC10646334 DOI: 10.3389/fpsyg.2023.1232262]
Abstract
Introduction The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception. Methods Pupillometric traces were recorded from a sample of 22 Danish-speaking adults, with self-reported normal hearing, while performing two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input, processed with a vocoder. Results No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence compared to sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks. Conclusion Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil dilation.
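The peak-pupil-dilation measure compared across conditions above is conventionally computed by subtracting a pre-stimulus baseline from the trace and taking the maximum of the corrected segment. A minimal sketch follows; the 0.5 s baseline window is an assumption for illustration, as the abstract does not specify the analysis window:

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=0.5):
    """Baseline-corrected peak pupil dilation for one trial.

    trace: pupil diameter samples; the first baseline_s seconds are taken
    as the pre-stimulus baseline (an assumed convention). NaNs from
    blinks are ignored in the baseline and peak search.
    Returns (peak_dilation, latency_s relative to stimulus onset).
    """
    n_base = int(baseline_s * fs)
    baseline = np.nanmean(trace[:n_base])
    corrected = trace[n_base:] - baseline
    i = int(np.nanargmax(corrected))
    return float(corrected[i]), i / fs
```

Trial-level peaks (or latencies) obtained this way are what gets averaged per condition and compared, e.g. sentences with versus without phonologically modified words.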
Affiliation(s)
- Julia S. C. Chiossi
- Oticon A/S, Smørum, Denmark
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Elaine Hoi Ning Ng
- Oticon A/S, Smørum, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lyxell
- Department of Special Needs Education, University of Oslo, Oslo, Norway

32
Khurana L, Harczos T, Moser T, Jablonski L. En route to sound coding strategies for optical cochlear implants. iScience 2023; 26:107725. [PMID: 37720089 PMCID: PMC10502376 DOI: 10.1016/j.isci.2023.107725]
Abstract
Hearing loss is the most common human sensory deficit. Severe-to-complete sensorineural hearing loss is often treated with electrical cochlear implants (eCIs), which bypass dysfunctional or lost hair cells by directly stimulating the auditory nerve. The wide current spread from each intracochlear electrode array contact activates large sets of tonotopically organized neurons, limiting the spectral selectivity of sound coding. Despite many efforts, an increase in the number of independent eCI stimulation channels seems impossible to achieve. Light, which can be confined in space more easily than electric current, may help optical cochlear implants (oCIs) overcome these eCI shortcomings. In this review, we present the current state of optogenetic sound encoding. We highlight the development of optical sound coding strategies as an emerging research area: capitalizing on optical stimulation requires fine-grained, fast, and power-efficient real-time sound processing to control dozens of microscale optical emitters.
Affiliation(s)
- Lakshay Khurana
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Auditory Neuroscience and Synaptic Nanophysiology Group, Max-Planck-Institute for Multidisciplinary Sciences, Göttingen, Germany
- Junior Research Group “Computational Neuroscience and Neuroengineering”, Göttingen, Germany
- The Doctoral Program “Sensory and Motor Neuroscience”, Göttingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), Göttingen, Germany
- InnerEarLab, University Medical Center Göttingen, Göttingen, Germany
- Tamas Harczos
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Tobias Moser
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Auditory Neuroscience and Synaptic Nanophysiology Group, Max-Planck-Institute for Multidisciplinary Sciences, Göttingen, Germany
- InnerEarLab, University Medical Center Göttingen, Göttingen, Germany
- Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Göttingen, Germany
- Lukasz Jablonski
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Junior Research Group “Computational Neuroscience and Neuroengineering”, Göttingen, Germany
- InnerEarLab, University Medical Center Göttingen, Göttingen, Germany

33
Wagner L, Plontke SK, Rahne T. An analysis of the spread of electric field within the cochlea for different devices including custom-made electrodes for subtotal cochleoectomy. PLoS One 2023; 18:e0287216. [PMID: 37682960 PMCID: PMC10490913 DOI: 10.1371/journal.pone.0287216]
Abstract
OBJECTIVE Cochlear implants (CIs) can restore hearing not only in patients with profound hearing loss and deafness, but also in patients following removal of intracochlear schwannomas. In such cases, electrode design and placement differ from conventional electrode insertion, in which the cochlea remains filled with fluid. Despite these technical and surgical differences, previous studies have tended to show positive speech perception results in tumour patients. The purpose of this study was to retrospectively evaluate whether speech recognition outcomes can be predicted from individual electric field spreads, and to investigate these worldwide unique tumour cases. STUDY DESIGN In a retrospective analysis at a university tertiary center, electric field spreads were compared between two groups of electrode designs implanted between 2009 and 2020: lateral wall electrodes and custom-made perimodiolar electrode carriers from the same company. The voltage gradients were analysed and grouped with speech recognition results. RESULTS Differences in electric field spread were found between lateral wall electrodes and the custom-made perimodiolar electrodes, whereas no significant influence of the electric field on speech recognition scores could be demonstrated. CONCLUSION Predicting speech recognition outcome from electric field propagation does not appear feasible. Significant differences in field spread between electrode arrays can be clearly demonstrated. This observation and its relevance to hearing treatment and speech recognition should therefore be investigated further in upcoming studies.
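Intracochlear voltage measurements of the kind analysed here are often summarized by how quickly the recorded voltage decays with distance from the stimulating contact, i.e. a slope in dB per millimetre (approximately linear in dB for an exponential decay). The sketch below fits such a slope by least squares; this is an illustrative analysis convention, not necessarily the study's own voltage-gradient method:

```python
import numpy as np

def spread_slope_db_per_mm(distances_mm, voltages):
    """Least-squares fit of field decay in dB versus distance (dB/mm).

    distances_mm: distance of each recording contact from the stimulating
    contact; voltages: measured (positive) voltages at those contacts.
    A steeper (more negative) slope indicates a more focused field.
    """
    v_db = 20.0 * np.log10(np.asarray(voltages, dtype=float))
    slope, _intercept = np.polyfit(np.asarray(distances_mm, float), v_db, 1)
    return float(slope)
```

Comparing such slopes between electrode designs gives a single per-array number for "spread", which is the kind of quantity the abstract reports as clearly differing between lateral wall and perimodiolar arrays.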
Affiliation(s)
- Luise Wagner
- Department of Otorhinolaryngology and Halle Hearing and Implant Center, Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany
- Stefan K. Plontke
- Department of Otorhinolaryngology and Halle Hearing and Implant Center, Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany
- Torsten Rahne
- Department of Otorhinolaryngology and Halle Hearing and Implant Center, Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany

34
Michael M, Wolf BJ, Klinge-Strahl A, Jeschke M, Moser T, Dieter A. Devising a framework of optogenetic coding in the auditory pathway: Insights from auditory midbrain recordings. Brain Stimul 2023; 16:1486-1500. [PMID: 37778456 DOI: 10.1016/j.brs.2023.09.018]
Abstract
Cochlear implants (CIs) restore activity in the deafened auditory system via electrical stimulation of the auditory nerve. As the spread of electric current in biological tissue is rather broad, the spectral information provided by electrical CIs is limited. Optogenetic stimulation of the auditory nerve has been suggested for artificial sound coding with improved spectral selectivity, as light can be conveniently confined in space. Yet the foundations for optogenetic sound coding strategies remain to be established. Here, we parametrized stimulus-response relationships of the auditory pathway in gerbils for optogenetic stimulation. Upon activation of the auditory pathway by waveguide-based optogenetic stimulation of the spiral ganglion, we recorded neuronal activity in the auditory midbrain, where neural representations of spectral, temporal, and intensity information can be found. Screening a wide range of optical stimuli and taking the properties of optical CI emitters into account, we aimed to optimize stimulus paradigms for potent and energy-efficient activation of the auditory pathway. We report that efficient optogenetic coding builds on neural integration of millisecond stimuli built from microsecond light pulses, which optimally accommodate power-efficient laser diode operation. Moreover, we performed an activity-level-dependent comparison of optogenetic and acoustic stimulation to estimate the dynamic range and the maximal stimulation intensity amenable to single-channel optogenetic sound encoding, and indicate that it is compatible with speech comprehension at typical conversational levels (65 dB). Our results provide a first framework for the development of coding strategies for future optogenetic hearing restoration.
Affiliation(s)
- Maria Michael
- Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany
- Bettina Julia Wolf
- Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, 37077, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, 37075, Göttingen, Germany
- Astrid Klinge-Strahl
- Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany; Department of Otolaryngology, University Medical Center Göttingen, 37075, Göttingen, Germany
- Marcus Jeschke
- Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, 37077, Göttingen, Germany; Cognitive Hearing in Primates (CHiP) Group, German Primate Center, 37077, Göttingen, Germany
- Tobias Moser
- Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany; Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, 37077, Göttingen, Germany; Cluster of Excellence "Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells" (MBExC), University of Göttingen, 37075, Göttingen, Germany; Auditory Neuroscience and Synaptic Nanophysiology Group, Max Planck Institute for Multidisciplinary Sciences, Göttingen, Germany
- Alexander Dieter
- Institute for Auditory Neuroscience and InnerEarLab, University Medical Center Göttingen, 37075, Göttingen, Germany; Göttingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences, 37077, Göttingen, Germany; Department of Neurophysiology, MCTN, Medical Faculty Mannheim, Heidelberg University, 68167, Mannheim, Germany

35
Patro C, Bennaim A, Shephard E. Effects of spectral degradation on gated word recognition. JASA Express Lett 2023; 3:084401. [PMID: 37561082] [DOI: 10.1121/10.0020646]
Abstract
Although much is known about how normal-hearing listeners process spoken words under ideal listening conditions, little is known about how a degraded signal, such as speech transmitted via cochlear implants, affects the word recognition process. In this study, gated word recognition performance was measured with the goal of describing the time course of word identification by using a noise-band vocoder simulation. The results of this study demonstrate that spectral degradations can impact the temporal aspects of speech processing. These results also provide insights into the potential advantages of enhancing spectral resolution in the processing of spoken words.
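The noise-band vocoder simulation used in studies like this can be sketched as follows. This is a generic illustration, not the authors' exact implementation: the band count, log spacing, filter orders, and the 160 Hz envelope cutoff are all assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(signal, fs, n_bands=8, f_lo=100.0, f_hi=7000.0):
    """Noise-band vocoder: split the signal into log-spaced analysis bands,
    extract each band's amplitude envelope, and use it to modulate
    band-limited noise. Spectral detail is degraded; envelopes survive."""
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(band_sos, signal)
        env = np.abs(hilbert(band))                       # amplitude envelope
        env_sos = butter(2, 160.0, btype="lowpass", fs=fs, output="sos")
        env = sosfilt(env_sos, env)                       # smooth the envelope
        carrier = sosfilt(band_sos, rng.standard_normal(len(signal)))
        out += env * carrier                              # noise carrier per band
    return out

fs = 16000
t = np.arange(fs) / fs
# a speech-like test signal: a 500 Hz tone with 4 Hz amplitude modulation
speechlike = np.sin(2 * np.pi * 500 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speechlike, fs)
```

Lowering `n_bands` degrades spectral resolution further, which is how such simulations parametrize the fidelity of cochlear implant processing.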
Affiliation(s)
- Chhayakanta Patro
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, Maryland 21252
- Ariana Bennaim
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, Maryland 21252
- Ellen Shephard
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, Maryland 21252

36
Tao DD, Shi B, Galvin JJ, Liu JS, Fu QJ. Frequency detection, frequency discrimination, and spectro-temporal pattern perception in older and younger typically hearing adults. Heliyon 2023; 9:e18922. [PMID: 37583764] [PMCID: PMC10424075] [DOI: 10.1016/j.heliyon.2023.e18922]
Abstract
Elderly adults often experience difficulties in speech understanding, possibly due to age-related deficits in frequency perception. It is unclear whether age-related deficits in frequency perception differ between the apical or basal regions of the cochlea. It is also unclear how aging might differently affect frequency discrimination or detection of a change in frequency within a stimulus. In the present study, pure-tone frequency thresholds were measured in 19 older (61-74 years) and 20 younger (22-28 years) typically hearing adults. Participants were asked to discriminate between reference and probe frequencies or to detect changes in frequency within a probe stimulus. Broadband spectro-temporal pattern perception was also measured using the spectro-temporal modulated ripple test (SMRT). Frequency thresholds were significantly poorer in the basal than in the apical region of the cochlea; the deficit in the basal region was 2 times larger for the older than for the younger group. Frequency thresholds were significantly poorer in the older group, especially in the basal region where frequency detection thresholds were 3.9 times poorer for the older than for the younger group. SMRT thresholds were 1.5 times better for the younger than for the older group. Significant age effects were observed for SMRT thresholds and for frequency thresholds only in the basal region. SMRT thresholds were significantly correlated with frequency thresholds only in the older group. The poorer frequency and spectro-temporal pattern perception may contribute to age-related deficits in speech perception, even when audiometric thresholds are nearly normal.
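Discrimination thresholds of this kind are typically tracked with an adaptive staircase. A generic 2-down/1-up sketch (converging on ~70.7% correct) is shown below; it is not necessarily the authors' exact procedure, and the start value, step factor, and simulated listener are invented for illustration.

```python
import numpy as np

def two_down_one_up(respond, start=50.0, step=2.0, n_reversals=8):
    """2-down/1-up adaptive staircase. `respond(delta)` returns True for a
    correct trial at stimulus difference `delta` (e.g., Hz between reference
    and probe). The threshold is the mean of the last staircase reversals."""
    delta, correct_streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(delta):
            correct_streak += 1
            if correct_streak == 2:          # two correct in a row: harder
                correct_streak = 0
                if direction == +1:          # turning point going down
                    reversals.append(delta)
                direction = -1
                delta /= step
        else:                                # one wrong: easier
            correct_streak = 0
            if direction == -1:              # turning point going up
                reversals.append(delta)
            direction = +1
            delta *= step
    return float(np.mean(reversals[-6:]))

rng = np.random.default_rng(1)
true_threshold = 10.0
# simulated listener with a sigmoidal psychometric function (made-up)
listener = lambda d: rng.random() < 1 / (1 + np.exp(-(d - true_threshold) / 2))
threshold = two_down_one_up(listener)
```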
Affiliation(s)
- Duo-Duo Tao
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
- Bin Shi
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
- John J. Galvin
- House Institute Foundation, Los Angeles, CA, 90057, USA
- University Hospital Center of Tours, Tours, 37000, France
- Ji-Sheng Liu
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA

37
Dillon MT, Buss E, Johnson AD, Canfarotta MW, O'Connell BP. Comparison of Two Place-Based Mapping Procedures on Masked Sentence Recognition as a Function of Electrode Array Angular Insertion Depth and Presence of Acoustic Low-Frequency Information: A Simulation Study. Audiol Neurootol 2023; 28:478-487. [PMID: 37482054] [PMCID: PMC10948008] [DOI: 10.1159/000531262]
Abstract
INTRODUCTION Cochlear implant (CI) and electric-acoustic stimulation (EAS) users may experience better performance with maps that align the electric filter frequencies to the cochlear place frequencies, known as place-based maps, than with maps that present spectrally shifted information. Individual place-based mapping procedures differ in the frequency content that is aligned to cochlear tonotopicity versus discarded or spectrally shifted. The performance benefit with different place-based maps may vary due to individual differences in angular insertion depth (AID) of the electrode array and whether functional acoustic low-frequency information is available in the implanted ear. The present study compared masked speech recognition with two types of place-based maps as a function of AID and presence of acoustic low-frequency information. METHODS Sixty adults with normal hearing listened acutely to CI or EAS simulations of two types of place-based maps for one of three cases of electrode arrays at shallow AIDs. The strict place-based (Strict-PB) map aligned the low- and mid-frequency information to cochlear tonotopicity and discarded information below the frequency associated with the most apical electrode contact. The alternative place-based map (LFshift-PB) aligned the mid-frequency information to cochlear tonotopicity and provided more of the speech spectrum by compressing low-frequency information on the apical electrode contacts (i.e., <1 kHz). Three actual cases of a 12-channel, 24-mm electrode array were simulated by assigning the carrier frequency for an individual channel as the cochlear place frequency of the associated electrode contact. The AID and cochlear place frequency for the most apical electrode contact were 460° and 498 Hz for case 1, 389° and 728 Hz for case 2, and 335° and 987 Hz for case 3, respectively. 
RESULTS Generally, better performance was observed with the Strict-PB maps for cases 1 and 2, where mismatches were 2-4 octaves for the most apical channel with the LFshift-PB map. Similar performance was observed between maps for case 3. For the CI simulations, performance with the Strict-PB map declined with decreases in AID, while performance with the LFshift-PB map remained stable across cases. For the EAS simulations, performance with the Strict-PB map remained stable across cases, while performance with the LFshift-PB map improved with decreases in AID. CONCLUSIONS Listeners demonstrated differences with the Strict-PB versus LFshift-PB maps as a function of AID and whether acoustic low-frequency information was available (CI vs. EAS). These data support the use of the Strict-PB mapping procedure for AIDs ≥335°, though further study including time for acclimatization in CI and EAS users is warranted.
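The frequency-to-place alignment at the heart of place-based mapping can be illustrated with the Greenwood place-frequency function. This is a generic organ-of-Corti map, not the spiral-ganglion map such studies may actually use, and the electrode positions and map frequencies below are invented for illustration rather than taken from the study's cases.

```python
import numpy as np

def greenwood_hz(x):
    """Greenwood place-frequency function for the human cochlea:
    F = A * (10**(a*x) - k), with x the fractional distance from the
    apex (0) to the base (1); A = 165.4, a = 2.1, k = 0.88."""
    return 165.4 * (10 ** (2.1 * np.asarray(x, float)) - 0.88)

def mismatch_octaves(map_hz, place_hz):
    """Frequency-to-place mismatch in octaves between the frequency a
    channel presents and the cochlear place frequency it stimulates."""
    return np.log2(np.asarray(map_hz, float) / np.asarray(place_hz, float))

# hypothetical electrode contacts as fractional distances from the apex
places = np.array([0.35, 0.5, 0.65, 0.8])
place_freqs = greenwood_hz(places)
# hypothetical filter (map) center frequencies assigned to those contacts
map_freqs = np.array([250.0, 750.0, 1500.0, 4000.0])
shift = mismatch_octaves(map_freqs, place_freqs)
```

A strict place-based map would drive `shift` toward zero on every channel, at the cost of discarding frequencies apical to the deepest contact.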
Affiliation(s)
- Margaret T. Dillon
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Division of Speech and Hearing Sciences, Department of Allied Health Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Emily Buss
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Alec D. Johnson
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Michael W. Canfarotta
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Brendan P. O'Connell
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Charlotte Eye Ear Nose & Throat Associates, P.A., Charlotte, NC, USA

38
Huang Z, Chen S, Zhang G, Almadhor A, Li R, Li M, Abbas M, Nguyen Le B, Zhang J, Huang Y. Nanocatalysts as fast and powerful medical intervention: Bridging cochlear implant therapies and advanced modelling using Hidden Markov Models (HMMs) for effective treatment of infections. Environ Res 2023:116285. [PMID: 37301496] [DOI: 10.1016/j.envres.2023.116285]
Abstract
As human population growth and waste from technologically advanced industries threaten to destabilise our delicate ecological equilibrium, the global spotlight intensifies on environmental contamination and climate-related changes. These challenges extend beyond our external environment and have significant effects on our internal ecosystems. The inner ear, which is responsible for balance and auditory perception, is a prime example. When these sensory mechanisms are impaired, disorders such as deafness can develop. Traditional treatment methods, including systemic antibiotics, are frequently ineffective due to inadequate inner ear penetration. Conventional techniques for administering substances to the inner ear fail to obtain adequate concentrations as well. In this context, cochlear implants laden with nanocatalysts emerge as a promising strategy for the targeted treatment of inner ear infections. Coated with biocompatible nanoparticles containing specific nanocatalysts, these implants can degrade or neutralise contaminants linked to inner ear infections. This method enables the controlled release of nanocatalysts directly at the infection site, thereby maximising therapeutic efficacy and minimising adverse effects. In vivo and in vitro studies have demonstrated that these implants are effective at eliminating infections, reducing inflammation, and fostering tissue regeneration in the ear. This study investigates the application of hidden Markov models (HMMs) to nanocatalyst-loaded cochlear implants. The HMM is trained on surgical phases in order to accurately identify the various phases associated with implant utilisation. This facilitates the precision placement of surgical instruments within the ear, with a location accuracy between 91% and 95% and a standard deviation between 1% and 5% for both sites. 
In conclusion, nanocatalysts serve as potent medicinal instruments, bridging cochlear implant therapies and advanced modelling utilising hidden Markov models for the effective treatment of inner ear infections. Cochlear implants loaded with nanocatalysts offer a promising method to combat inner ear infections and enhance patient outcomes by addressing the limitations of conventional treatments.
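The surgical-phase identification described above rests on standard HMM decoding. A minimal Viterbi sketch is shown below; the two phases, three observation symbols, and all probabilities are invented for illustration and are not the study's trained model.

```python
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    """Viterbi decoding: the most likely hidden-state sequence given a
    discrete observation sequence, a start distribution, a transition
    matrix, and an emission matrix (states x symbols)."""
    logp = np.log(start_p) + np.log(emit_p[:, obs[0]])
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans_p)   # prev-state x next-state
        back.append(scores.argmax(axis=0))         # best predecessor per state
        logp = scores.max(axis=0) + np.log(emit_p[:, o])
    path = [int(logp.argmax())]
    for ptr in reversed(back):                     # backtrack through pointers
        path.append(int(ptr[path[-1]]))
    return path[::-1]

# two hypothetical phases (e.g., "approach", "insertion"), three symbols
start = np.array([0.9, 0.1])
trans = np.array([[0.8, 0.2], [0.05, 0.95]])       # phases tend to persist
emit = np.array([[0.7, 0.2, 0.1],                  # symbols typical of phase 0
                 [0.1, 0.2, 0.7]])                 # symbols typical of phase 1
phases = viterbi([0, 0, 1, 2, 2, 2], start, trans, emit)
```

The self-transition weights encode the prior that surgical phases persist over many frames, which is what makes per-frame labeling robust to noisy observations.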
39
Zerche M, Wrobel C, Kusch K, Moser T, Mager T. Channelrhodopsin fluorescent tag replacement for clinical translation of optogenetic hearing restoration. Mol Ther Methods Clin Dev 2023; 29:202-212. [PMID: 37081855] [PMCID: PMC10111946] [DOI: 10.1016/j.omtm.2023.03.009]
Abstract
Sensory restoration by optogenetic neurostimulation provides a promising future alternative to current electrical stimulation approaches. So far, channelrhodopsins (ChRs) typically contain a C-terminal fluorescent protein (FP) tag for visualization that potentially poses an additional risk for clinical translation. Previous work indicated a reduction of optogenetic stimulation efficacy upon FP removal. Here, we further optimized the fast-gating, red-light-activated ChR f-Chrimson to achieve efficient optogenetic stimulation in the absence of the C-terminal FP. Upon FP removal, we observed a massive amplitude reduction of photocurrents in transfected cells in vitro and of optogenetically evoked activity of the adeno-associated virus (AAV) vector-transduced auditory nerve in mice in vivo. Increasing the AAV vector dose restored optogenetically evoked auditory nerve activity but was confounded by neural loss. Of various C-terminal modifications, we found the replacement of the FP by the Kir2.1 trafficking sequence (TSKir2.1) to best restore both photocurrents and optogenetically evoked auditory nerve activity with only mild neural loss few months after dosing. In conclusion, we consider f-Chrimson-TSKir2.1 to be a promising candidate for clinical translation of optogenetic neurostimulation such as by future optical cochlear implants.
40
Lindenbeck MJ, Majdak P, Srinivasan S, Laback B. Pitch discrimination in electric hearing with inconsistent and consistent amplitude-modulation and inter-pulse rate cues. J Acoust Soc Am 2023; 153:3268. [PMID: 37307025] [PMCID: PMC10264086] [DOI: 10.1121/10.0019452]
Abstract
Users of cochlear implants (CIs) struggle in situations that require selective hearing to focus on a target source while ignoring other sources. One major reason for that is the limited access to timing cues such as temporal pitch or interaural time differences (ITDs). Various approaches to improve timing-cue sensitivity while maintaining speech understanding have been proposed, among them inserting extra pulses with short inter-pulse intervals (SIPIs) into amplitude-modulated (AM) high-rate pulse trains. Indeed, SIPI rates matching the naturally occurring AM rates improve pitch discrimination. For ITD, however, low SIPI rates are required, potentially mismatching the naturally occurring AM rates and thus creating unknown pitch effects. In this study, we investigated the perceptual contribution of AM and SIPI rate to pitch discrimination in five CI listeners using two AM depths (0.1 and 0.5). Our results show that the SIPI-rate cue generally dominated the percept for both consistent and inconsistent cues. With inconsistent cues, the AM rate also contributed, but only at the large AM depth. These findings have implications when aiming at jointly improving temporal-pitch and ITD sensitivity in a future mixed-rate stimulation approach.
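The stimulus concept, a high-rate amplitude-modulated pulse train with extra short-inter-pulse-interval (SIPI) pulses, can be sketched as follows. The carrier rate, AM rate, depth, and the 0.2 ms SIPI gap below are assumed illustrative values, not the study's parameters.

```python
import numpy as np

def am_pulse_train_with_sipis(dur_s, carrier_rate, am_rate, am_depth,
                              sipi_rate, sipi_gap_s=0.0002):
    """Return (times, amplitudes) of a high-rate pulse train with sinusoidal
    amplitude modulation, plus one extra pulse per SIPI period placed a
    short inter-pulse interval after the carrier pulse at each anchor."""
    t = np.arange(0.0, dur_s, 1.0 / carrier_rate)          # carrier pulse times
    amp = 1.0 - am_depth * 0.5 * (1.0 - np.cos(2 * np.pi * am_rate * t))
    sipi_anchor = np.arange(0.0, dur_s, 1.0 / sipi_rate)   # SIPI-rate anchors
    idx = np.searchsorted(t, sipi_anchor).clip(max=len(t) - 1)
    extra_t = t[idx] + sipi_gap_s                          # the SIPI pulses
    extra_a = np.interp(extra_t, t, amp)                   # follow the AM envelope
    times = np.concatenate([t, extra_t])
    amps = np.concatenate([amp, extra_a])
    order = np.argsort(times)
    return times[order], amps[order]

times, amps = am_pulse_train_with_sipis(
    dur_s=0.1, carrier_rate=3000, am_rate=125, am_depth=0.5, sipi_rate=125)
```

Setting `sipi_rate` equal to `am_rate` models the consistent-cue condition; detuning one against the other models the inconsistent-cue condition probed in the study.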
Affiliation(s)
- Martin J Lindenbeck
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria
- Piotr Majdak
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria
- Sridhar Srinivasan
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria
- Bernhard Laback
- Acoustics Research Institute, Austrian Academy of Sciences, Wohllebengasse 12-14, A-1040 Vienna, Austria

41
Berg KA, Chen C, Noble JH, Dawant BM, Dwyer RT, Labadie RF, Gifford RH. Effects of the Number of Channels and Channel Stimulation Rate on Speech Recognition and Sound Quality Using Precurved Electrode Arrays. Am J Audiol 2023; 32:403-416. [PMID: 37249492] [PMCID: PMC10468116] [DOI: 10.1044/2023_aja-22-00032]
Abstract
PURPOSE This study investigated the relationship between the number of active electrodes, channel stimulation rate, and their interaction on speech recognition and sound quality measures while controlling for electrode placement. Cochlear implant (CI) recipients with precurved electrode arrays placed entirely within scala tympani and closer to the modiolus were hypothesized to be able to utilize more channels and possibly higher stimulation rates to achieve better speech recognition performance and sound quality ratings than recipients in previous studies. METHOD Participants included seven postlingually deafened adult CI recipients with Advanced Bionics Mid-Scala electrode arrays confirmed to be entirely within scala tympani using postoperative computerized tomography. Twelve conditions were tested using four, eight, 12, and 16 electrodes and channel stimulation rates of 600 pulse per second (pps), 1,200 pps, and each participant's maximum allowable rate (1,245-4,800 pps). Measures of speech recognition and sound quality were acutely assessed. RESULTS For the effect of channels, results showed no significant improvements beyond eight channels for all measures. For the effect of channel stimulation rate, results showed no significant improvements with higher rates, suggesting that 600 pps was sufficient for maximum speech recognition performance and sound quality ratings. However, across all conditions, there was a significant relationship between mean electrode-to-modiolus distance and all measures, suggesting that a lower mean electrode-to-modiolus distance was correlated with higher speech recognition scores and sound quality ratings. CONCLUSION These findings suggest that even well-placed precurved electrode array recipients may not be able to take advantage of more than eight channels or higher channel stimulation rates (> 600 pps), but that closer electrode array placement to the modiolus correlates with better outcomes for these recipients.
Affiliation(s)
- Katelyn A. Berg
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Chen Chen
- Research and Technology, Advanced Bionics, LLC, Valencia, CA
- Jack H. Noble
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN
- Benoit M. Dawant
- Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN
- Robert T. Dwyer
- Research and Technology, Advanced Bionics, LLC, Valencia, CA
- Robert F. Labadie
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN

42
de la Cruz-Pavía I, Eloy C, Perrineau-Hecklé P, Nazzi T, Cabrera L. Consonant bias in adult lexical processing under acoustically degraded listening conditions. JASA Express Lett 2023; 3:2892558. [PMID: 37220232] [DOI: 10.1121/10.0019576]
Abstract
Consonants facilitate lexical processing across many languages, including French. This study investigates whether acoustic degradation affects this phonological bias in an auditory lexical decision task. French words were processed using an eight-band vocoder, degrading their frequency modulations (FM) while preserving original amplitude modulations (AM). Adult French natives were presented with these French words, preceded by similarly processed pseudoword primes sharing their vowels, consonants, or neither. Results reveal a consonant bias in the listeners' accuracy and response times, despite the reduced spectral and FM information. These degraded conditions resemble current cochlear-implant processors, and attest to the robustness of this phonological bias.
Affiliation(s)
- Irene de la Cruz-Pavía
- Department of Linguistics and Basque Studies, Universidad del País Vasco/Euskal Herriko Unibertsitatea, Vitoria-Gasteiz 01006, Spain
- Coraline Eloy
- Integrative Neuroscience and Cognition Center, Université Paris Cité, Centre National de la Recherche Scientifique, Paris 75006
- Paula Perrineau-Hecklé
- Integrative Neuroscience and Cognition Center, Université Paris Cité, Centre National de la Recherche Scientifique, Paris 75006
- Thierry Nazzi
- Integrative Neuroscience and Cognition Center, Université Paris Cité, Centre National de la Recherche Scientifique, Paris 75006
- Laurianne Cabrera
- Integrative Neuroscience and Cognition Center, Université Paris Cité, Centre National de la Recherche Scientifique, Paris 75006

43
Saba JN, Ali H, Hansen JHL. The effects of estimation accuracy, estimation approach, and number of selected channels using formant-priority channel selection for an "n-of-m" sound processing strategy for cochlear implants. J Acoust Soc Am 2023; 153:3100. [PMID: 37227411] [PMCID: PMC10219683] [DOI: 10.1121/10.0019416]
Abstract
Previously, selection of l channels was prioritized according to formant frequency locations in an l-of-n-of-m-based signal processing strategy to provide important voicing information independent of listening environments for cochlear implant (CI) users. In this study, ideal, or ground truth, formants were incorporated into the selection stage to determine the effect of accuracy on (1) subjective speech intelligibility, (2) objective channel selection patterns, and (3) objective stimulation patterns (current). An average +11% improvement (p < 0.05) was observed across six CI users in quiet, but not for noise or reverberation conditions. Analogous increases in channel selection and current for the upper range of F1, and a decrease across mid-frequencies with higher corresponding current, were both observed at the expense of noise-dominant channels. Objective channel selection patterns were analyzed a second time to determine the effects of estimation approach and number of selected channels (n). A significant effect of estimation approach was only observed in the noise and reverberation condition, with minor differences in channel selection and significantly decreased stimulated current. Results suggest that estimation method, accuracy, and number of channels in the proposed strategy using ideal formants may improve intelligibility when the corresponding stimulated current of formant channels is not masked by noise-dominant channels.
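The general idea of formant-priority n-of-m channel selection can be sketched as follows: force-select the channels nearest the estimated formants, then fill the remaining slots with the highest-envelope channels. This is a sketch of the concept described in the abstract, not the authors' exact rule; the channel frequencies, envelope values, and formant estimates are made up.

```python
import numpy as np

def formant_priority_select(envelopes, center_freqs, formants_hz, n_select):
    """Pick n of m channels per frame: channels nearest each formant are
    selected first, then the largest-envelope channels fill the rest."""
    envelopes = np.asarray(envelopes, float)
    center_freqs = np.asarray(center_freqs, float)
    chosen = {int(np.argmin(np.abs(center_freqs - f))) for f in formants_hz}
    # remaining candidates, ordered by descending envelope amplitude
    remaining = [int(c) for c in np.argsort(envelopes)[::-1] if c not in chosen]
    for c in remaining:
        if len(chosen) >= n_select:
            break
        chosen.add(c)
    return sorted(chosen)

center_freqs = [250, 500, 1000, 1500, 2000, 3000, 4000, 6000]   # 8 channels
envelopes =    [0.1, 0.9, 0.2, 0.3, 0.8, 0.7, 0.1, 0.6]         # one frame
# hypothetical formant estimates F1 = 520 Hz, F2 = 1480 Hz
selected = formant_priority_select(envelopes, center_freqs, [520, 1480], 4)
```

Here the 1500 Hz channel is kept despite its small envelope because it carries F2, which is the point of the priority rule: voicing-relevant channels survive even when noise-dominant channels have larger envelopes.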
Affiliation(s)
- Juliana N Saba
- University of Texas at Dallas, Center for Robust Speech Systems, Cochlear Implant Laboratory, 800 W. Campbell Rd, EC 33, Richardson, Texas 75080, USA
- Hussnain Ali
- University of Texas at Dallas, Center for Robust Speech Systems, Cochlear Implant Laboratory, 800 W. Campbell Rd, EC 33, Richardson, Texas 75080, USA
- John H L Hansen
- University of Texas at Dallas, Center for Robust Speech Systems, Cochlear Implant Laboratory, 800 W. Campbell Rd, EC 33, Richardson, Texas 75080, USA

44
Lambriks L, van Hoof M, Debruyne J, Janssen M, Chalupper J, van der Heijden K, Hof J, Hellingman K, Devocht E, George E. Imaging-based frequency mapping for cochlear implants - Evaluated using a daily randomized controlled trial. Front Neurosci 2023; 17:1119933. [PMID: 37123376] [PMCID: PMC10133468] [DOI: 10.3389/fnins.2023.1119933]
Abstract
Background Due to variation in electrode design, insertion depth and cochlear morphology, patients with a cochlear implant (CI) often have to adapt to a substantial mismatch between the characteristic response frequencies of cochlear neurons and the stimulus frequencies assigned to electrode contacts. We introduce an imaging-based fitting intervention, which aimed to reduce frequency-to-place mismatch by aligning frequency mapping with the tonotopic position of electrodes. Results were evaluated in a novel trial set-up where subjects crossed over between intervention and control using a daily within-patient randomized approach, immediately from the start of CI rehabilitation. Methods Fourteen adult participants were included in this single-blinded, daily randomized clinical trial. Based on a fusion of pre-operative imaging and a post-operative cone beam CT scan (CBCT), mapping of electrical input was aligned to the natural place-pitch arrangement in the individual cochlea. That is, adjustments to the CI's frequency allocation table were made so that electrical stimulation of frequencies matched as closely as possible with the corresponding acoustic locations in the cochlea. For a period of three months, starting at first fit, a scheme was implemented whereby the blinded subject crossed over between the experimental and standard fitting program using a daily randomized wearing schedule, and thus effectively acted as their own control. Speech outcomes (such as speech intelligibility in quiet and noise, sound quality and listening effort) were measured with both settings throughout the study period. Results On a group level, subjects preferred the standard fitting, which showed superior results in all outcome measures. In contrast, two out of fourteen subjects preferred the imaging-based fitting and correspondingly had better speech understanding with this setting than with standard fitting.
Conclusion On average, cochlear implant fitting based on individual tonotopy did not elicit higher speech intelligibility, but the variability in individual results strengthens the potential for individualized frequency fitting. The novel trial design proved to be a suitable method for evaluating experimental interventions in a prospective trial setup with cochlear implants.
Affiliation(s)
- Lars Lambriks
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Marc van Hoof
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Joke Debruyne
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Miranda Janssen
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Department of Methodology and Statistics, Care and Public Health Research Institute, Maastricht University, Maastricht, Netherlands
- Josef Chalupper
- Advanced Bionics European Research Centre, Hannover, Germany
- Kiki van der Heijden
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Janny Hof
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Katja Hellingman
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Elke Devocht
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands
- Erwin George
- Department of ENT/Audiology, School for Mental Health and Neuroscience, Maastricht University Medical Centre, Maastricht, Netherlands

45
Tamati TN, Janse E, Başkent D. The relation between speaking-style categorization and speech recognition in adult cochlear implant users. JASA Express Lett 2023; 3:035201. [PMID: 37003708] [DOI: 10.1121/10.0017439]
Abstract
The current study examined the relation between speaking-style categorization and speech recognition in post-lingually deafened adult cochlear implant users and normal-hearing listeners tested under 4- and 8-channel acoustic noise-vocoder cochlear implant simulations. Across all listeners, better speaking-style categorization of careful read and casual conversation speech was associated with more accurate recognition of speech across those same two speaking styles. Findings suggest that some cochlear implant users and normal-hearing listeners under cochlear implant simulation may benefit from stronger encoding of indexical information in speech, enabling both better categorization and recognition of speech produced in different speaking styles.
Collapse
Affiliation(s)
- Terrin N Tamati
- Department of Otolaryngology-Head and Neck Surgery, Vanderbilt University Medical Center, Nashville, Tennessee 37232, USA
- Esther Janse
- Centre for Language Studies, Radboud University Nijmegen, Nijmegen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands
46
Chen YP, Schmidt F, Keitel A, Rösch S, Hauswald A, Weisz N. Speech intelligibility changes the temporal evolution of neural speech tracking. Neuroimage 2023; 268:119894. [PMID: 36693596 DOI: 10.1016/j.neuroimage.2023.119894] [Received: 07/07/2022] [Revised: 12/13/2022] [Accepted: 01/20/2023] [Indexed: 01/22/2023]
Abstract
Listening to speech with poor signal quality is challenging. Neural speech tracking of degraded speech has been used to advance the understanding of how brain processes and speech intelligibility are interrelated. However, the temporal dynamics of neural speech tracking and their relation to speech intelligibility are not clear. In the present MEG study, we exploited temporal response functions (TRFs), which have been used to describe the time course of speech tracking, on a gradient from intelligible to unintelligible degraded speech. In addition, we used interrelated facets of neural speech tracking (e.g., speech envelope reconstruction, speech-brain coherence, and components of broadband coherence spectra) to corroborate our TRF findings. Our TRF analysis yielded marked, temporally differential effects of vocoding: ∼50-110 ms (M50TRF), ∼175-230 ms (M200TRF), and ∼315-380 ms (M350TRF). Reduced intelligibility was accompanied by large increases in the early peak response M50TRF but strongly reduced responses in M200TRF. In the late response M350TRF, the maximum response occurred for degraded speech that was still comprehensible and then declined with reduced intelligibility. Furthermore, we related the TRF components to our other neural "tracking" measures and found that M50TRF and M200TRF play differential roles in the shifting center frequency of the broadband coherence spectra. Overall, our study highlights the importance of time-resolved computation of neural speech tracking and decomposition of coherence spectra, and provides a better understanding of degraded speech processing.
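A temporal response function is, at its core, a set of lagged linear weights mapping a stimulus feature (such as the speech envelope) to the neural response. The following is a toy sketch of ridge-estimated TRF weights, not the study's MEG pipeline; `estimate_trf` and the regularization value are illustrative assumptions:

```python
import numpy as np

def estimate_trf(stimulus: np.ndarray, response: np.ndarray,
                 n_lags: int, reg: float = 1e-3) -> np.ndarray:
    """Estimate TRF weights w so that response[t] ≈ sum_k w[k] * stimulus[t - k]."""
    T = len(stimulus)
    X = np.zeros((T, n_lags))
    for k in range(n_lags):              # lagged copies of the stimulus envelope
        X[k:, k] = stimulus[:T - k]
    # Ridge-regularized least squares keeps the estimate stable for correlated lags
    return np.linalg.solve(X.T @ X + reg * np.eye(n_lags), X.T @ response)

# Recover a known 3-lag response function from a synthetic "envelope"
rng = np.random.default_rng(0)
env = rng.standard_normal(500)
true_w = np.array([0.5, -0.2, 0.1])
resp = np.convolve(env, true_w)[:500]    # simulated neural response
w_hat = estimate_trf(env, resp, n_lags=3)
```

With noiseless synthetic data the estimated weights recover `true_w` almost exactly; on real MEG data the peaks of such weight functions correspond to components like M50TRF and M200TRF.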
Affiliation(s)
- Ya-Ping Chen
- Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria.
- Fabian Schmidt
- Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Anne Keitel
- Psychology, School of Social Sciences, University of Dundee, DD1 4HN Dundee, UK
- Sebastian Rösch
- Department of Otorhinolaryngology, Paracelsus Medical University, 5020 Salzburg, Austria
- Anne Hauswald
- Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria
- Nathan Weisz
- Centre for Cognitive Neuroscience, University of Salzburg, 5020 Salzburg, Austria; Department of Psychology, University of Salzburg, 5020 Salzburg, Austria; Neuroscience Institute, Christian Doppler University Hospital, Paracelsus Medical University, 5020 Salzburg, Austria
47
The importance of temporal-fine structure to perceive time-compressed speech with and without the restoration of the syllabic rhythm. Sci Rep 2023; 13:2874. [PMID: 36806145 PMCID: PMC9938863 DOI: 10.1038/s41598-023-29755-x] [Received: 02/21/2022] [Accepted: 02/09/2023] [Indexed: 02/20/2023]
Abstract
Intelligibility of time-compressed (TC) speech decreases with increasing speech rate. However, intelligibility can be restored by 'repackaging' the TC speech: inserting silences between the syllables so that the original 'rhythm' is restored. Although restoration of the speech rhythm affects solely the temporal envelope, it is unclear to what extent repackaging also affects the perception of the temporal-fine structure (TFS). Here we investigate to what extent TFS contributes to the perception of TC and repackaged TC speech in quiet. Intelligibility was assessed for TC sentences with a speech rate of 15.6 syllables per second (sps) and for repackaged sentences created by adding 100 ms of silence between the syllables of the TC speech (yielding a speech rate of 6.1 sps), under three TFS conditions: the original TFS and the TFS conveyed by an 8- and a 16-channel noise vocoder. Both the repackaging process and the amount of TFS available to the listener had an overall positive effect on intelligibility. Furthermore, the benefit associated with repackaging TC speech depended on the amount of TFS available. The results show that TFS contributes significantly to the perception of fast speech, even when the overall rhythm/envelope of the TC speech is restored.
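The rate arithmetic in this abstract can be sketched in a few lines (a hypothetical illustration, not the authors' stimulus-processing code; both helper names are made up): at 15.6 sps each syllable lasts about 64 ms, so inserting 100 ms of silence per syllable lengthens the period to about 164 ms, i.e., roughly 6.1 sps.

```python
def repackaged_rate(tc_rate_sps: float, silence_s: float) -> float:
    """Effective syllable rate after inserting silence_s seconds of pause per syllable."""
    return 1.0 / (1.0 / tc_rate_sps + silence_s)

def repackage(syllables: list[list[float]], silence_samples: int) -> list[float]:
    """Concatenate syllable waveforms, inserting zeros (silence) between them."""
    out: list[float] = []
    for i, syl in enumerate(syllables):
        if i > 0:
            out.extend([0.0] * silence_samples)  # pause before each later syllable
        out.extend(syl)
    return out

print(round(repackaged_rate(15.6, 0.100), 1))  # → 6.1, matching the abstract
```

Note that repackaging only stretches the envelope with silent gaps; the waveform within each syllable, and hence its TFS, is untouched, which is what makes the two cues separable in the experiment.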
48
Zhou N, Shi X, Dixit O, Firszt JB, Holden TA. Relationship between electrode position and temporal modulation sensitivity in cochlear implant users: Are close electrodes always better? Heliyon 2023; 9:e12467. [PMID: 36852047 PMCID: PMC9958279 DOI: 10.1016/j.heliyon.2022.e12467] [Received: 05/14/2022] [Revised: 10/21/2022] [Accepted: 12/11/2022] [Indexed: 12/24/2022]
Abstract
Temporal modulation sensitivity has been studied extensively in cochlear implant (CI) users due to its strong correlation with speech recognition outcomes. Previous studies reported that temporal modulation detection thresholds (MDTs) vary across the tonotopic axis and attributed this variation to patchy neural survival. However, correlates of neural health identified in animal models depend on electrode position in humans; nonetheless, the relationship between MDTs and electrode location has not been explored. We tested 13 ears for the effect of distance on modulation sensitivity, specifically targeting the question of whether electrodes closer to the modiolus are universally beneficial. Participants in this study were postlingually deafened users of Cochlear Nucleus CIs. The distance of each electrode from the medial wall (MW) of the cochlea and from the mid-modiolar axis (MMA) was measured from scans obtained using computerized tomography (CT) imaging. The distance measures were correlated with slopes of spatial tuning curves measured on selected electrodes to investigate whether electrode position accounts, at least in part, for the width of neural excitation. In accordance with previous findings, electrode position explained 24% of the variance in the slopes of the spatial tuning curves. All functioning electrodes were also measured for MDTs. Five ears showed a positive correlation between MDTs and at least one distance measure across the array; six ears showed negative correlations, and the remaining two ears showed no relationship. The ears showing positive MDT-distance correlations, and thus benefiting from electrodes being close to the neural elements, were those that performed better on the two speech recognition measures, i.e., speech reception thresholds (SRTs) and recognition of AzBio sentences. These results could suggest that ears able to take advantage of proximal electrode placement are likely to have better speech recognition outcomes.
Previous histological studies in humans demonstrated that speech recognition is correlated with spiral ganglion cell counts. Alternatively, ears with good speech recognition outcomes may have good overall neural health, which is a precondition for close electrodes to produce the spatially confined neural excitation patterns that facilitate modulation sensitivity. These findings suggest that methods to reduce channel interaction (e.g., perimodiolar electrode arrays or current focusing) may be beneficial only for a subgroup of CI users. Additionally, they suggest that estimating neural survival preoperatively is important for choosing the most appropriate electrode array type (perimodiolar vs. lateral wall) for optimal implant function.
Affiliation(s)
- Ning Zhou
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, 27834, USA
- Xuyang Shi
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, 27834, USA
- Omkar Dixit
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, 27834, USA
- Jill B Firszt
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, Missouri, 63110, USA
- Timothy A Holden
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, Missouri, 63110, USA
49
Moberly AC, Varadarajan VV, Tamati TN. Noise-Vocoded Sentence Recognition and the Use of Context in Older and Younger Adult Listeners. J Speech Lang Hear Res 2023; 66:365-381. [PMID: 36475738 PMCID: PMC10023188 DOI: 10.1044/2022_jslhr-22-00184] [Received: 03/26/2022] [Revised: 08/11/2022] [Accepted: 08/18/2022] [Indexed: 06/17/2023]
Abstract
PURPOSE When listening to speech under adverse conditions, older adults, even with "age-normal" hearing, face challenges that may lead to poorer speech recognition than their younger peers. Older listeners generally demonstrate poorer suprathreshold auditory processing along with aging-related declines in neurocognitive functioning that may impair their ability to compensate using "top-down" cognitive-linguistic functions. This study explored top-down processing in older and younger adult listeners, specifically the use of semantic context during noise-vocoded sentence recognition. METHOD Eighty-four adults with age-normal hearing (45 young normal-hearing [YNH] and 39 older normal-hearing [ONH] adults) participated. Participants were tested for recognition accuracy for two sets of noise-vocoded sentence materials: one that was semantically meaningful and the other that was syntactically appropriate but semantically anomalous. Participants were also tested for hearing ability and for neurocognitive functioning to assess working memory capacity, speed of lexical access, inhibitory control, and nonverbal fluid reasoning, as well as vocabulary knowledge. RESULTS The ONH and YNH listeners made use of semantic context to a similar extent. Nonverbal reasoning predicted recognition of both meaningful and anomalous sentences, whereas pure-tone average contributed additionally to anomalous sentence recognition. None of the hearing, neurocognitive, or language measures significantly predicted the amount of context gain, computed as the difference score between meaningful and anomalous sentence recognition. However, exploratory cluster analyses demonstrated four listener profiles and suggested that individuals may vary in the strategies used to recognize speech under adverse listening conditions. CONCLUSIONS Older and younger listeners made use of sentence context to similar degrees. Nonverbal reasoning was found to be a contributor to noise-vocoded sentence recognition. 
However, different listeners may approach the problem of recognizing meaningful speech under adverse conditions using different strategies based on their hearing, neurocognitive, and language profiles. These findings provide support for the complexity of bottom-up and top-down interactions during speech recognition under adverse listening conditions.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Terrin N. Tamati
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
50
Guevara N, Truy E, Hoen M, Hermann R, Vandersteen C, Gallego S. Electrical Field Interactions during Adjacent Electrode Stimulations: eABR Evaluation in Cochlear Implant Users. J Clin Med 2023; 12:605. [PMID: 36675534 PMCID: PMC9865217 DOI: 10.3390/jcm12020605] [Received: 12/16/2022] [Revised: 01/06/2023] [Accepted: 01/10/2023] [Indexed: 01/15/2023]
Abstract
The present study investigates how electrically evoked auditory brainstem responses (eABRs) can be used to measure local channel interactions along cochlear implant (CI) electrode arrays. eABRs were recorded from 16 experienced CI patients in response to electrical pulse trains delivered using three stimulation configurations: (1) single-electrode stimulation (E11 or E13); (2) simultaneous stimulation from two electrodes separated by one (En and En+2; here, E11 and E13); and (3) stimulation from three consecutive electrodes (E11, E12, and E13). Stimulation level was kept constant at 70% of the electrical dynamic range (EDR) on the two flanking electrodes (E11 and E13) and was varied from 0 to 100% EDR on the middle electrode (E12). We hypothesized that increasing the middle-electrode stimulation level would cause increasing local electrical interactions, reflected in characteristics of the evoked compound eABR. Results show that group-averaged eABR wave III and wave V latency and amplitude were reduced when the stimulation level at the middle electrode was increased, in particular when the stimulation level on E12 reached 40, 70, and 100% EDR. Compound eABRs can provide a detailed individual quantification of electrical interactions occurring at specific electrodes along the CI electrode array. This approach allows a fine determination of interactions at the single-electrode level, potentially informing audiological decisions regarding the mapping of CI systems.
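Stimulation at "x% EDR" is conventionally obtained by interpolating between a patient's threshold (T) and comfort (C) levels on each electrode. A minimal sketch, assuming linear interpolation in clinical current units (the function name and the T/C values are hypothetical, not taken from the study):

```python
def level_at_percent_edr(t_level: float, c_level: float, percent: float) -> float:
    """Stimulation level at `percent` of the electrical dynamic range (EDR):
    0% EDR = threshold (T) level, 100% EDR = comfort (C) level."""
    return t_level + (percent / 100.0) * (c_level - t_level)

# Flanking electrodes fixed at 70% EDR; middle electrode swept across the EDR,
# mirroring the 0/40/70/100% EDR conditions in the study design
flank = level_at_percent_edr(100.0, 200.0, 70.0)                   # → 170.0
sweep = [level_at_percent_edr(100.0, 200.0, p) for p in (0, 40, 70, 100)]
```

Expressing levels as %EDR rather than absolute current lets the same experimental conditions span each patient's individual dynamic range.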
Affiliation(s)
- Nicolas Guevara
- Institut Universitaire de la Face et du Cou, Centre Hospitalier Universitaire de Nice, Université Côte d’Azur, 06100 Nice, France
- Eric Truy
- Department of Audiology and Otorhinolaryngology, Edouard Herriot Hospital, Lyon 1 University, 69437 Lyon, France
- Michel Hoen
- Clinical Evidence Department, Oticon Medical, 06220 Vallauris, France
- Ruben Hermann
- Department of Audiology and Otorhinolaryngology, Edouard Herriot Hospital, Lyon 1 University, 69437 Lyon, France
- Clair Vandersteen
- Institut Universitaire de la Face et du Cou, Centre Hospitalier Universitaire de Nice, Université Côte d’Azur, 06100 Nice, France
- Stéphane Gallego
- Institute for Readaptation Sciences and Techniques, Lyon 1 University, 69373 Lyon, France