1
Ajay EA, Thompson AC, Azees AA, Wise AK, Grayden DB, Fallon JB, Richardson RT. Combined-electrical optogenetic stimulation but not channelrhodopsin kinetics improves the fidelity of high rate stimulation in the auditory pathway in mice. Sci Rep 2024; 14:21028. [PMID: 39251630] [PMCID: PMC11385946] [DOI: 10.1038/s41598-024-71712-9] [Received: 12/21/2023] [Accepted: 08/30/2024]
Abstract
Novel stimulation methods are needed to overcome the limitations of contemporary cochlear implants. Optogenetics is a technique that confers light sensitivity to neurons via the genetic introduction of light-sensitive ion channels. By controlling neural activity with light, auditory neurons can be activated with higher spatial precision. Understanding the behaviour of opsins at high stimulation rates is an important step towards their translation. To elucidate this, we compared the temporal characteristics of auditory nerve and inferior colliculus responses to optogenetic, electrical, and combined optogenetic-electrical stimulation in virally transduced mice expressing one of two channelrhodopsins, ChR2-H134R or ChIEF, at stimulation rates up to 400 pulses per second (pps). At 100 pps, optogenetic responses in ChIEF mice demonstrated higher fidelity, less change in latency, and greater response stability compared to responses in ChR2-H134R mice, but not at higher rates. Combined stimulation improved the response characteristics in both cohorts at 400 pps, although there was no consistent facilitation of electrical responses. Despite these results, day-long stimulation (up to 13 h) led to severe and non-recoverable deterioration of the optogenetic responses. The results of this study have significant implications for the translation of optogenetic-only and combined stimulation techniques for hearing loss.
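The fidelity measure discussed above — roughly, how reliably each stimulus pulse evokes a time-locked neural response — can be sketched as follows. This is a simplified illustration only; the 5 ms window and the exact fidelity definition used in the paper are assumptions:

```python
def following_fidelity(pulse_times_s, spike_times_s, window_s=0.005):
    """Fraction of stimulus pulses followed by at least one spike
    within window_s seconds of pulse onset (assumed definition)."""
    hits = sum(
        1 for p in pulse_times_s
        if any(p <= s <= p + window_s for s in spike_times_s)
    )
    return hits / len(pulse_times_s)
```

At high pulse rates the window shrinks relative to the inter-pulse interval, which is one way a slow-kinetics opsin can lose fidelity.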
Affiliation(s)
- Elise A Ajay
- Bionics Institute, Melbourne, Australia
- Department of Biomedical Engineering and Graeme Clark Institute, University of Melbourne, Melbourne, Australia
- Alex C Thompson
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia
- Ajmal A Azees
- Bionics Institute, Melbourne, Australia
- Department of Electrical and Biomedical Engineering, RMIT, Melbourne, Australia
- Andrew K Wise
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia
- David B Grayden
- Bionics Institute, Melbourne, Australia
- Department of Biomedical Engineering and Graeme Clark Institute, University of Melbourne, Melbourne, Australia
- James B Fallon
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia
- Rachael T Richardson
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, University of Melbourne, Melbourne, Australia
2
Thakkar T, Kan A, Litovsky RY. Lateralization of interaural time differences with mixed rates of stimulation in bilateral cochlear implant listeners. J Acoust Soc Am 2023; 153:1912. [PMID: 37002065] [PMCID: PMC10036141] [DOI: 10.1121/10.0017603] [Received: 06/17/2022] [Revised: 02/23/2023] [Accepted: 02/25/2023]
Abstract
While listeners with bilateral cochlear implants (BiCIs) are able to access information in both ears, they still struggle to perform well on spatial hearing tasks when compared to normal hearing listeners. This performance gap could be attributed to the high stimulation rates used for speech representation in clinical processors. Prior work has shown that spatial cues, such as interaural time differences (ITDs), are best conveyed at low rates. Further, BiCI listeners are sensitive to ITDs with a mixture of high and low rates. However, it remains unclear whether mixed-rate stimuli are perceived as unitary percepts and spatially mapped to intracranial locations. Here, electrical pulse trains were presented on five interaurally pitch-matched electrode pairs using research processors, at either uniformly high rates, low rates, or mixed rates. Eight post-lingually deafened adults were tested on perceived intracranial lateralization of ITDs ranging from 50 to 1600 μs. Extent of lateralization depended on the location of low-rate stimulation along the electrode array: greatest in the low- and mixed-rate configurations, and smallest in the high-rate configuration. All but one listener perceived a unitary auditory object. These findings suggest that a mixed-rate processing strategy can result in good lateralization and convey a unitary auditory object with ITDs.
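The stimuli described — constant-rate pulse trains with a whole-waveform interaural delay — can be sketched as below. Pulse shapes, current levels, and per-electrode details are omitted, and the parameter names are illustrative rather than the study's own:

```python
def pulse_times(rate_pps, duration_s, itd_us=0.0):
    """Pulse onset times (seconds) for one ear; a positive itd_us
    delays this ear's train relative to the other ear."""
    period = 1.0 / rate_pps
    n_pulses = int(duration_s * rate_pps)
    delay_s = itd_us * 1e-6
    return [i * period + delay_s for i in range(n_pulses)]

# Left ear leads by 400 us: left train undelayed, right train delayed.
left = pulse_times(100, 0.3)              # 100 pps, 300 ms
right = pulse_times(100, 0.3, itd_us=400)
```

A mixed-rate configuration would simply use a different rate_pps on different electrode pairs while keeping the interaural delay per pair.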
Affiliation(s)
- Tanvi Thakkar
- Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Alan Kan
- School of Engineering, Macquarie University, New South Wales 2109, Australia
- Ruth Y Litovsky
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
3
Nicastri M, Giallini I, Inguscio BMS, Turchetta R, Guerzoni L, Cuda D, Portanova G, Ruoppolo G, Dincer D'Alessandro H, Mancini P. The influence of auditory selective attention on linguistic outcomes in deaf and hard of hearing children with cochlear implants. Eur Arch Otorhinolaryngol 2023; 280:115-124. [PMID: 35831674] [DOI: 10.1007/s00405-022-07463-y] [Received: 02/27/2022] [Accepted: 05/23/2022]
Abstract
PURPOSE: Auditory selective attention (ASA) is crucial for focusing on significant auditory stimuli without being distracted by irrelevant auditory signals, and it plays an important role in language development. The present study aimed to investigate the unique contribution of ASA to the linguistic levels achieved by a group of cochlear-implanted (CI) children. METHODS: Thirty-four CI children with a median age of 10.05 years were tested using the "Batteria per la Valutazione dell'Attenzione Uditiva e della Memoria di Lavoro Fonologica nell'età evolutiva-VAUM-ELF" to assess their ASA skills, and two Italian standardized tests to measure lexical and morphosyntactic skills. A regression analysis, including demographic and audiological variables, was conducted to assess the unique contribution of ASA to language skills. RESULTS: The percentages of CI children with adequate ASA performance ranged from 29.4% to 50%. Bilateral CI children performed better than their unilateral peers. ASA skills contributed significantly to linguistic skills, alone accounting for 25% of the observed variance. CONCLUSIONS: The present findings are clinically relevant as they highlight the importance of assessing ASA skills as early as possible, given their important role in language development. Using simple clinical tools, ASA skills could be studied at early developmental stages. This may provide information beyond the outcomes of traditional auditory tests and may allow specific training programs to be implemented that could positively contribute to the development of the neural mechanisms of ASA and, consequently, induce improvements in language skills.
Affiliation(s)
- Maria Nicastri
- Department of Sense Organs, Sapienza University, Rome, Italy
- Ilaria Giallini
- Department of Sense Organs, Sapienza University, Rome, Italy
- Letizia Guerzoni
- Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Domenico Cuda
- Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Giovanni Ruoppolo
- I.R.C.C.S. San Raffaele Pisana, Via Nomentana, 401, 00162, Rome, Italy
4
Lee JI, Seist R, McInturff S, Lee DJ, Brown MC, Stankovic KM, Fried S. Magnetic stimulation allows focal activation of the mouse cochlea. eLife 2022; 11:76682. [PMID: 35608242] [PMCID: PMC9177144] [DOI: 10.7554/elife.76682] [Received: 12/29/2021] [Accepted: 05/20/2022]
Abstract
Cochlear implants (CIs) provide sound and speech sensations for patients with severe to profound hearing loss by electrically stimulating the auditory nerve. While most CI users achieve some degree of open set word recognition under quiet conditions, hearing that utilizes complex neural coding (e.g., appreciating music) has proved elusive, probably because of the inability of CIs to create narrow regions of spectral activation. Several novel approaches have recently shown promise for improving spatial selectivity, but substantial design differences from conventional CIs will necessitate much additional safety and efficacy testing before clinical viability is established. Outside the cochlea, magnetic stimulation from small coils (micro-coils) has been shown to confine activation more narrowly than that from conventional microelectrodes, raising the possibility that coil-based stimulation of the cochlea could improve the spectral resolution of CIs. To explore this, we delivered magnetic stimulation from micro-coils to multiple locations of the cochlea and measured the spread of activation utilizing a multielectrode array inserted into the inferior colliculus; responses to magnetic stimulation were compared to analogous experiments with conventional microelectrodes as well as to responses when presenting auditory monotones. Encouragingly, the extent of activation with micro-coils was ~60% narrower compared to electric stimulation and largely similar to the spread arising from acoustic stimulation. The dynamic range of coils was more than three times larger than that of electrodes, further supporting a smaller spread of activation. While much additional testing is required, these results support the notion that magnetic micro-coil CIs can produce a larger number of independent spectral channels and may therefore improve auditory outcomes. Further, because coil-based devices are structurally similar to existing CIs, fewer impediments to clinical translation are likely to arise.
Affiliation(s)
- Jae-Ik Lee
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, United States
- Richard Seist
- Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, United States
- Department of Otorhinolaryngology - Head and Neck Surgery, Paracelsus Medical University, Salzburg, Austria
- Stephen McInturff
- Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, United States
- Daniel J Lee
- Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, United States
- M Christian Brown
- Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, United States
- Konstantina M Stankovic
- Department of Otolaryngology - Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, United States
- Program in Speech and Hearing Bioscience and Technology, Harvard Medical School, Boston, United States
- Department of Otolaryngology - Head and Neck Surgery, Stanford University School of Medicine, Stanford, United States
- Shelley Fried
- Department of Neurosurgery, Massachusetts General Hospital, Harvard Medical School, Boston, United States
- Boston VA Medical Center, Boston, United States
5
Sensitivity to interaural time differences in the inferior colliculus of cochlear implanted rats with or without hearing experience. Hear Res 2021; 408:108305. [PMID: 34315027] [DOI: 10.1016/j.heares.2021.108305] [Received: 01/13/2021] [Revised: 06/24/2021] [Accepted: 06/29/2021]
Abstract
For deaf patients, cochlear implants (CIs) can restore substantial amounts of functional hearing. However, binaural hearing, and in particular the perception of interaural time differences (ITDs), has been found to be notoriously poor with current CIs, especially in the event of early hearing loss. One popular hypothesis for these deficits posits that a lack of early binaural experience may be a principal cause of poor ITD perception in pre-lingually deaf CI patients. This is supported by previous electrophysiological studies in neonatally deafened, bilaterally CI-stimulated animals showing reduced ITD sensitivity. However, we have recently demonstrated that neonatally deafened CI rats can quickly learn to discriminate microsecond ITDs under optimized stimulation conditions, which suggests that the inability of human CI users to make use of ITDs is not due to a lack of binaural hearing experience during development. In the study presented here, we characterized ITD sensitivity and tuning of inferior colliculus neurons under bilateral CI stimulation in neonatally deafened and hearing-experienced rats. The hearing-experienced rats were not deafened prior to implantation. Both cohorts were implanted bilaterally between postnatal days 64 and 77 and recorded immediately following surgery. Both groups showed comparably large proportions of ITD-sensitive multi-units in the inferior colliculus (deaf: 84.8%, hearing: 82.5%), and the strength of ITD tuning, quantified as the mutual information between response and stimulus ITD, was independent of hearing experience. However, the shapes of the tuning curves differed substantially between the two groups. We observed four main clusters of tuning curves: trough, contralateral, central, and ipsilateral tuning. Interestingly, over 90% of multi-units in hearing-experienced rats showed predominantly contralateral tuning, whereas as many as 50% of multi-units in neonatally deafened rats were centrally tuned. However, when we computed neural d' scores to predict likely limits on performance in sound lateralization tasks, we did not find that these differences in tuning shapes predicted worse psychoacoustic performance for the neonatally deafened animals. We conclude that, at least in rats, substantial amounts of highly precise, "innate" ITD sensitivity can be found even after profound hearing loss throughout infancy. However, ITD tuning curve shapes appear to be strongly influenced by auditory experience, although substantial lateralization encoding is present even in its absence.
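The two quantities used in this abstract — mutual information between stimulus ITD and response, and a neural d' for lateralization — can be sketched roughly as follows. This uses a discrete plug-in MI estimate and an equal-variance d'; the paper's exact estimators are not specified here, so treat both functions as illustrations:

```python
import math
from collections import Counter

def mutual_information(stim_itds, responses):
    """Plug-in estimate of mutual information (bits) between discrete
    stimulus ITD values and discretized responses, from paired trials."""
    n = len(stim_itds)
    joint = Counter(zip(stim_itds, responses))
    count_s = Counter(stim_itds)
    count_r = Counter(responses)
    mi = 0.0
    for (s, r), c in joint.items():
        p_sr = c / n
        # p_sr * log2( p(s,r) / (p(s) * p(r)) ), with counts folded in
        mi += p_sr * math.log2(p_sr * n * n / (count_s[s] * count_r[r]))
    return mi

def neural_dprime(resp_a, resp_b):
    """Equal-variance d' separating spike-count distributions evoked
    by two ITDs (e.g. strongly left- vs right-leading)."""
    mean_a = sum(resp_a) / len(resp_a)
    mean_b = sum(resp_b) / len(resp_b)
    var_a = sum((x - mean_a) ** 2 for x in resp_a) / (len(resp_a) - 1)
    var_b = sum((x - mean_b) ** 2 for x in resp_b) / (len(resp_b) - 1)
    return (mean_a - mean_b) / math.sqrt(0.5 * (var_a + var_b))
```

A perfectly informative unit yields MI equal to the stimulus entropy, while a unit whose responses are independent of ITD yields MI near zero; large |d'| between two ITDs predicts good lateralization of that pair.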
6
The effect of increased channel interaction on speech perception with cochlear implants. Sci Rep 2021; 11:10383. [PMID: 34001987] [PMCID: PMC8128897] [DOI: 10.1038/s41598-021-89932-8] [Received: 02/16/2021] [Accepted: 04/29/2021]
Abstract
Cochlear implants (CIs) are neuroprostheses that partially restore hearing for people with severe-to-profound hearing loss. While CIs can provide good speech perception in quiet listening situations for many, they fail to do so for most listeners in environments with interfering sounds. Previous research suggests that this is due to detrimental interaction effects between CI electrode channels, which limit their ability to convey frequency-specific information, but evidence is still scarce. In this study, an experimental manipulation called spectral blurring was used to increase channel interaction in CI listeners using Advanced Bionics devices with HiFocus 1J and MS electrode arrays, to directly investigate its causal effect on speech perception. Instead of using a single electrode per channel as in standard CI processing, spectral blurring used up to 6 electrodes per channel simultaneously to increase the overlap between adjacent frequency channels, as would occur in cases of severe channel interaction. Results demonstrated that this manipulation significantly degraded CI speech perception in quiet by 15% and speech reception thresholds in babble noise by 5 dB when all channels were blurred by a factor of 6. Importantly, when channel interaction was increased on only a subset of electrodes, speech scores were mostly unaffected and were significantly degraded only when the 5 most apical channels were blurred. These apical channels convey information up to 1 kHz at the apical end of the electrode array and are typically located at angular insertion depths of about 250° to 500°. These results confirm and extend earlier findings indicating that CI speech perception may not benefit from deactivating individual channels along the array and that efforts should instead be directed towards reducing channel interaction per se, in particular for the most apical electrodes. Thus, causal methods such as spectral blurring could be used in future research to control channel interaction effects within listeners when evaluating compensation strategies.
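The blurring manipulation — driving several adjacent electrodes per analysis channel instead of one — can be sketched as below. Equal weighting and edge clipping are simplifying assumptions; the actual current weighting used in the study may differ:

```python
def blur_channels(channel_currents, blur_factor):
    """Spread each channel's current over a window of `blur_factor`
    adjacent electrodes (clipped at the array ends), preserving the
    total current delivered per channel."""
    n = len(channel_currents)
    out = [0.0] * n
    half = blur_factor // 2
    for ch, amp in enumerate(channel_currents):
        lo = max(0, ch - half)
        hi = min(n, ch - half + blur_factor)
        width = hi - lo
        for e in range(lo, hi):
            out[e] += amp / width
    return out
```

With blur_factor=1 this reduces to standard one-electrode-per-channel processing; with blur_factor=6 each channel's output overlaps heavily with its neighbours, mimicking severe channel interaction.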
7
Goehring T, Arenberg JG, Carlyon RP. Using Spectral Blurring to Assess Effects of Channel Interaction on Speech-in-Noise Perception with Cochlear Implants. J Assoc Res Otolaryngol 2020; 21:353-371. [PMID: 32519088] [PMCID: PMC7445227] [DOI: 10.1007/s10162-020-00758-z] [Received: 12/16/2019] [Accepted: 05/21/2020]
Abstract
Cochlear implant (CI) listeners struggle to understand speech in background noise. Interactions between electrode channels due to current spread increase the masking of speech by noise and lead to difficulties with speech perception. Strategies that reduce channel interaction therefore have the potential to improve speech-in-noise perception by CI listeners, but previous results have been mixed. We investigated the effects of channel interaction on speech-in-noise perception and its association with spectro-temporal acuity in a listening study with 12 experienced CI users. Instead of attempting to reduce channel interaction, we introduced spectral blurring to simulate some of its effects, adjusting the overlap between electrode channels either at the input level of the analysis filters or at the output by using several simultaneously stimulated electrodes per channel. We measured speech reception thresholds in noise as a function of the amount of blurring applied to either all 15 electrode channels or to 5 evenly spaced channels. Performance remained roughly constant as the amount of blurring applied to all channels increased up to some knee point, above which it deteriorated. This knee point differed across listeners in a way that correlated with performance on a non-speech spectro-temporal task, and is proposed here as an individual measure of channel interaction. Surprisingly, even extreme amounts of blurring applied to 5 channels did not affect performance. The effects on speech perception in noise were similar for blurring at the input and at the output of the CI. The results are in line with the assumption that experienced CI users can make use of a limited number of effective channels of information and can tolerate some deviations from their everyday settings when identifying speech in the presence of a masker. Furthermore, these findings may explain the mixed results of strategies that optimized or deactivated a small number of electrodes evenly distributed along the array, by showing that blurring or deactivating one-third of the electrodes did not harm speech-in-noise performance.
Affiliation(s)
- Tobias Goehring
- Cambridge Hearing Group, Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
- Julie G Arenberg
- Massachusetts Eye and Ear, Harvard Medical School, 243 Charles St, Boston, MA, 02114, USA
- Robert P Carlyon
- Cambridge Hearing Group, Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
8
Goehring T, Keshavarzi M, Carlyon RP, Moore BCJ. Using recurrent neural networks to improve the perception of speech in non-stationary noise by people with cochlear implants. J Acoust Soc Am 2019; 146:705. [PMID: 31370586] [PMCID: PMC6773603] [DOI: 10.1121/1.5119226] [Received: 04/23/2019] [Accepted: 07/08/2019]
Abstract
Speech-in-noise perception is a major problem for users of cochlear implants (CIs), especially with non-stationary background noise. Noise-reduction algorithms have produced benefits but relied on a priori information about the target speaker and/or background noise. A recurrent neural network (RNN) algorithm was developed for enhancing speech in non-stationary noise and its benefits were evaluated for speech perception, using both objective measures and experiments with CI simulations and CI users. The RNN was trained using speech from many talkers mixed with multi-talker or traffic noise recordings. Its performance was evaluated using speech from an unseen talker mixed with different noise recordings of the same class, either babble or traffic noise. Objective measures indicated benefits of using a recurrent over a feed-forward architecture, and predicted better speech intelligibility with than without the processing. The experimental results showed significantly improved intelligibility of speech in babble noise but not in traffic noise. CI subjects rated the processed stimuli as significantly better in terms of speech distortions, noise intrusiveness, and overall quality than unprocessed stimuli for both babble and traffic noise. These results extend previous findings for CI users to mostly unseen acoustic conditions with non-stationary noise.
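A mask-based recurrent enhancer of the general kind described can be sketched as a tiny Elman network mapping noisy spectral frames to per-channel gains. This is a forward pass with random, untrained weights for illustration only; the paper's actual architecture, input features, and training procedure are not reproduced here:

```python
import math
import random

class TinyMaskRNN:
    """Minimal Elman RNN mapping noisy log-spectral frames to per-frame
    gain masks in (0, 1) via a sigmoid output layer (illustrative)."""

    def __init__(self, n_in, n_hidden, seed=0):
        rng = random.Random(seed)
        init = lambda rows, cols: [
            [rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)
        ]
        self.W_xh = init(n_hidden, n_in)      # input -> hidden
        self.W_hh = init(n_hidden, n_hidden)  # hidden -> hidden (recurrence)
        self.W_hy = init(n_in, n_hidden)      # hidden -> per-channel gains

    def forward(self, frames):
        h = [0.0] * len(self.W_hh)
        masks = []
        for x in frames:
            # h_t = tanh(W_xh x_t + W_hh h_{t-1})
            h = [math.tanh(sum(w * xi for w, xi in zip(row_x, x)) +
                           sum(w * hi for w, hi in zip(row_h, h)))
                 for row_x, row_h in zip(self.W_xh, self.W_hh)]
            # y_t = sigmoid(W_hy h_t): one gain per spectral channel
            y = [1.0 / (1.0 + math.exp(-sum(w * hi for w, hi in zip(row, h))))
                 for row in self.W_hy]
            masks.append(y)
        return masks
```

In use, each gain would multiply the corresponding noisy channel envelope before CI processing; the recurrence is what lets the network track non-stationary noise across frames.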
Affiliation(s)
- Tobias Goehring
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Mahmoud Keshavarzi
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
- Robert P Carlyon
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Brian C J Moore
- Department of Experimental Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, United Kingdom
9
Mehta AH, Oxenham AJ. Fundamental-frequency discrimination based on temporal-envelope cues: Effects of bandwidth and interference. J Acoust Soc Am 2018; 144:EL423. [PMID: 30522318] [PMCID: PMC6249132] [DOI: 10.1121/1.5079569] [Received: 08/14/2018] [Revised: 10/24/2018] [Accepted: 10/29/2018]
Abstract
Both music and speech perception rely on hearing out one pitch in the presence of others. Pitch discrimination of narrowband sounds based only on temporal-envelope cues is rendered nearly impossible by introducing interferers in both normal-hearing listeners and cochlear-implant (CI) users. This study tested whether performance improves in normal-hearing listeners if the target is presented over a broad spectral region. The results indicate that performance is still strongly affected by spectrally remote interferers, despite increases in bandwidth, suggesting that envelope-based pitch is unlikely to allow CI users to perceive pitch when multiple harmonic sounds are presented at once.
Affiliation(s)
- Anahita H Mehta
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
10
Feng L, Oxenham AJ. Auditory enhancement and the role of spectral resolution in normal-hearing listeners and cochlear-implant users. J Acoust Soc Am 2018; 144:552. [PMID: 30180692] [PMCID: PMC6072550] [DOI: 10.1121/1.5048414] [Received: 03/23/2018] [Revised: 06/25/2018] [Accepted: 07/11/2018]
Abstract
Detection of a target tone in a simultaneous multi-tone masker can be improved by preceding the stimulus with the masker alone. The mechanisms underlying this auditory enhancement effect may enable the efficient detection of new acoustic events and may help to produce perceptual constancy under varying acoustic conditions. Previous work in cochlear-implant (CI) users has suggested reduced or absent enhancement, due perhaps to poor spatial resolution in the cochlea. This study used a supra-threshold enhancement paradigm that in normal-hearing listeners results in large enhancement effects, exceeding 20 dB. Results from vocoder simulations using normal-hearing listeners showed that near-normal enhancement was observed if the simulated spread of excitation was limited to spectral slopes no shallower than 24 dB/oct. No significant enhancement was observed on average in CI users with their clinical monopolar stimulation strategy. The variability in enhancement between CI users, and between electrodes in a single CI user, could not be explained by the spread of excitation, as estimated from auditory nerve evoked potentials. Enhancement remained small, but did reach statistical significance, under the narrower partial-tripolar stimulation strategy. The results suggest that enhancement may be at least partially restored by improvements in the spatial resolution of current CIs.
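The simulated spread of excitation above is parameterized by filter slope in dB/octave. The attenuation of a vocoder channel centred at fc when evaluated at a remote frequency f can be sketched as follows, assuming symmetric triangular (on a log-frequency axis) slopes purely for illustration:

```python
import math

def excitation_gain_db(f_hz, fc_hz, slope_db_per_oct):
    """Gain in dB (<= 0) of a vocoder channel centred at fc_hz when
    evaluated at f_hz, for symmetric slopes of slope_db_per_oct."""
    return -slope_db_per_oct * abs(math.log2(f_hz / fc_hz))
```

At the 24 dB/oct slope mentioned in the abstract, a component one octave away from the channel centre is attenuated by 24 dB; shallower slopes let remote components leak into the channel, simulating broader current spread.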
Affiliation(s)
- Lei Feng
- Department of Psychology, University of Minnesota, N218 Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, N218 Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
11
Carlyon RP, Deeks JM, Undurraga J, Macherey O, van Wieringen A. Spatial Selectivity in Cochlear Implants: Effects of Asymmetric Waveforms and Development of a Single-Point Measure. J Assoc Res Otolaryngol 2017; 18:711-727. [PMID: 28755309] [PMCID: PMC5612920] [DOI: 10.1007/s10162-017-0625-9] [Received: 06/08/2016] [Accepted: 05/05/2017]
Abstract
Three experiments studied the extent to which cochlear implant users' spatial selectivity can be manipulated using asymmetric waveforms and tested an efficient method for comparing spatial selectivity produced by different stimuli. Experiment 1 measured forward-masked psychophysical tuning curves (PTCs) for a partial tripolar (pTP) probe. Maskers were presented on bipolar pairs separated by one unused electrode; waveforms were either symmetric biphasic ("SYM") or pseudomonophasic with the short high-amplitude phase being either anodic ("PSA") or cathodic ("PSC") on the more apical electrode. For the SYM masker, several subjects showed PTCs consistent with a bimodal excitation pattern, with discrete excitation peaks on each electrode of the bipolar masker pair. Most subjects showed significant differences between the PSA and PSC maskers consistent with greater masking by the electrode where the high-amplitude phase was anodic, but the pattern differed markedly across subjects. Experiment 2 measured masked excitation patterns for a pTP probe and either a monopolar symmetric biphasic masker ("MP_SYM") or pTP pseudomonophasic maskers where the short high-amplitude phase was either anodic ("TP_PSA") or cathodic ("TP_PSC") on the masker's central electrode. Four of the five subjects showed significant differences between the masker types, but again the pattern varied markedly across subjects. Because the levels of the maskers were chosen to produce the same masking of a probe on the same channel as the masker, it was correctly predicted that maskers that produce broader masking patterns would sound louder. Experiment 3 exploited this finding by using a single-point measure of spread of excitation to reveal significantly better spatial selectivity for TP_PSA compared to TP_PSC maskers.
Affiliation(s)
- Robert P Carlyon
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Rd, Cambridge, CB1 3DA, UK
- John M Deeks
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Rd, Cambridge, CB1 3DA, UK
- Jaime Undurraga
- ExpORL, Department of Neurosciences, KULeuven, Herestraat 49 bus 721, 3000, Leuven, Belgium
- Olivier Macherey
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Rd, Cambridge, CB1 3DA, UK
- LMA-CNRS, UPR 7051, Aix-Marseille University, Centrale Marseille, 4, Impasse Nikola Tesla, CS40006, 13453, Marseille Cedex 13, France
- Astrid van Wieringen
- ExpORL, Department of Neurosciences, KULeuven, Herestraat 49 bus 721, 3000, Leuven, Belgium
12
Feng L, Oxenham AJ. New perspectives on the measurement and time course of auditory enhancement. J Exp Psychol Hum Percept Perform 2015; 41:1696-708. [PMID: 26280269] [DOI: 10.1037/xhp0000115]
Abstract
A target sound can become more audible and may "pop out" from a simultaneously presented masker if the masker is presented first by itself, as a precursor. This phenomenon, known as auditory enhancement, may reflect the general perceptual principle of contrast enhancement, which facilitates adaptation to ongoing acoustic conditions and the detection of new events. Little is known about the mechanisms underlying enhancement, and potential confounding factors have made the size of the effect and its time course a point of contention. Here we measured enhancement as a function of precursor duration and delay between precursor offset and target onset, using 2 single-interval pitch comparison tasks, which involve either same-different or up-down judgments, to avoid the potential confounds of earlier studies. Although these 2 tasks elicit different levels of performance and may reflect different underlying mechanisms, they produced similar amounts of enhancement. The effect decreased with decreasing precursor duration, but remained present for precursors as short as 62.5 ms, and decreased with increasing gap between the precursor and target, but remained measurable 1 s after the precursor. Additional conditions, examining the effect of precursor/masker similarity and the possible role of grouping and cueing, suggest multiple sources of auditory enhancement.
Affiliation(s)
- Lei Feng
- Department of Otolaryngology, University of Minnesota
13
Perception and coding of interaural time differences with bilateral cochlear implants. Hear Res 2015; 322:138-50. [DOI: 10.1016/j.heares.2014.10.004]
14
Kwon BJ, Perry TT. Identification and multiplicity of double vowels in cochlear implant users. J Speech Lang Hear Res 2014; 57:1983-1996. [PMID: 24879064 DOI: 10.1044/2014_jslhr-h-12-0410]
Abstract
PURPOSE The present study examined cochlear implant (CI) users' perception of vowels presented concurrently (i.e., double vowels) to further our understanding of auditory grouping in electric hearing. METHOD Identification of double vowels and single vowels was measured with 10 CI subjects. Fundamental frequencies (F0s) of vowels were either 100 + 100 Hz or 100 + 300 Hz. Vowels were presented either synchronously or with a time delay. In "Double" sessions, subjects were given only double vowels. In "Double + Single" sessions, while double and single vowels were presented, subjects reported the number and identity of the vowel(s). In addition to clinical settings, stimuli were delivered via an experimental method that interleaved pulse streams of two vowels. RESULTS Although the time delay between vowels had a large effect on identification, the effect of change in fundamental frequency (ΔF0) was modest. Enumeration was poor in general, and identification of synchronous vowels was above chance in only the Double sessions with a priori knowledge about presentation. Interleaved presentation of vowel streams provided no benefit for identification and a marginal benefit for enumeration. CONCLUSIONS The results demonstrate the importance of episodic context for CI users. Unreliable perception of multiplicity observed in the present results suggests that auditory grouping in CIs may be driven by a schema-based process.
15
Cheng MY, Spitzer JB, Shafiro V, Sheft S, Mancuso D. Reliability measure of a clinical test: Appreciation of Music in Cochlear Implantees (AMICI). J Am Acad Audiol 2014; 24:969-79. [PMID: 24384082 DOI: 10.3766/jaaa.24.10.8]
Abstract
PURPOSE The goals of this study were (1) to investigate the reliability of a clinical music perception test, Appreciation of Music in Cochlear Implantees (AMICI), and (2) examine associations between the perception of music and speech. AMICI was developed as a clinical instrument for assessing music perception in persons with cochlear implants (CIs). The test consists of four subtests: (1) music versus environmental noise discrimination, (2) musical instrument identification (closed-set), (3) musical style identification (closed-set), and (4) identification of musical pieces (open-set). To be clinically useful, it is crucial for AMICI to demonstrate high test-retest reliability, so that CI users can be assessed and retested after changes in maps or programming strategies. RESEARCH DESIGN Thirteen CI subjects were tested with AMICI for the initial visit and retested again 10-14 days later. Two speech perception tests (consonant-nucleus-consonant [CNC] and Bamford-Kowal-Bench Speech-in-Noise [BKB-SIN]) were also administered. DATA ANALYSIS Test-retest reliability and equivalence of the test's three forms were analyzed using paired t-tests and correlation coefficients, respectively. Correlation analysis was also conducted between results from the music and speech perception tests. RESULTS Results showed no significant difference between test and retest (p > 0.05) with adequate power (0.9) as well as high correlations between the three forms (Forms A and B, r = 0.91; Forms A and C, r = 0.91; Forms B and C, r = 0.95). Correlation analysis showed high correlation between AMICI and BKB-SIN (r = -0.71), and moderate correlation between AMICI and CNC (r = 0.4). CONCLUSIONS The study showed AMICI is highly reliable for assessing musical perception in CI users.
Affiliation(s)
- Min-Yu Cheng
- Department of Communication Disorders and Sciences, Rush University, Chicago
- Jaclyn B Spitzer
- Columbia University College of Physicians and Surgeons, New York; Columbia University Medical Center of New York Presbyterian Hospital, New York; Department of Communication Sciences and Disorders, Montclair State University, Montclair, NJ
- Valeriy Shafiro
- Department of Communication Disorders and Sciences, Rush University, Chicago
- Stanley Sheft
- Department of Communication Disorders and Sciences, Rush University, Chicago
- Dean Mancuso
- Columbia University College of Physicians and Surgeons, New York
16
Irving S, Wise AK, Millard RE, Shepherd RK, Fallon JB. A partial hearing animal model for chronic electro-acoustic stimulation. J Neural Eng 2014; 11:046008. [PMID: 24921595 PMCID: PMC4116305 DOI: 10.1088/1741-2560/11/4/046008]
Abstract
OBJECTIVE Cochlear implants (CIs) have provided some auditory function to hundreds of thousands of people around the world. Although traditionally carried out only in profoundly deaf patients, the eligibility criteria for implantation have recently been relaxed to include many partially-deaf patients with useful levels of hearing. These patients receive both electrical stimulation from their implant and acoustic stimulation via their residual hearing (electro-acoustic stimulation; EAS) and perform very well. It is unclear how EAS improves speech perception over electrical stimulation alone, and little evidence exists about the nature of the interactions between electric and acoustic stimuli. Furthermore, clinical results suggest that some patients that undergo cochlear implantation lose some, if not all, of their residual hearing, reducing the advantages of EAS over electrical stimulation alone. A reliable animal model with clinically-relevant partial deafness combined with clinical CIs is important to enable these issues to be studied. This paper outlines such a model that has been successfully used in our laboratory. APPROACH This paper outlines a battery of techniques used in our laboratory to generate, validate and examine an animal model of partial deafness and chronic CI use. MAIN RESULTS Ototoxic deafening produced bilaterally symmetrical hearing thresholds in neonatal and adult animals. Electrical activation of the auditory system was confirmed, and all animals were chronically stimulated via adapted clinical CIs. Acoustic compound action potentials (CAPs) were obtained from partially-hearing cochleae, using the CI amplifier. Immunohistochemical analysis allows the effects of deafness and electrical stimulation on cell survival to be studied. 
SIGNIFICANCE This animal model has applications in EAS research, including investigating the functional interactions between electric and acoustic stimulation, and the development of techniques to maintain residual hearing following cochlear implantation. The ability to record CAPs via the CI has direct clinical relevance for obtaining objective measures of residual hearing.
Affiliation(s)
- S Irving
- Bionics Institute, Melbourne, Australia. University of Melbourne, Melbourne, Australia
17
Modulation frequency discrimination with modulated and unmodulated interference in normal hearing and in cochlear-implant users. J Assoc Res Otolaryngol 2013; 14:591-601. [PMID: 23632651 DOI: 10.1007/s10162-013-0391-2]
Abstract
Differences in fundamental frequency (F0) provide an important cue for segregating simultaneous sounds. Cochlear implants (CIs) transmit F0 information primarily through the periodicity of the temporal envelope of the electrical pulse trains. Successful segregation of sounds with different F0s requires the ability to process multiple F0s simultaneously, but it is unknown whether CI users have this ability. This study measured modulation frequency discrimination thresholds for half-wave rectified sinusoidal envelopes modulated at 115 Hz in CI users and normal-hearing (NH) listeners. The target modulation was presented in isolation or in the presence of an interferer. Discrimination thresholds were strongly affected by the presence of an interferer, even when it was unmodulated and spectrally remote. Interferer modulation increased interference and often led to very high discrimination thresholds, especially when the interfering modulation frequency was lower than that of the target. Introducing a temporal offset between the interferer and the target led to at best modest improvements in performance in CI users and NH listeners. The results suggest no fundamental difference between acoustic and electric hearing in processing single or multiple envelope-based F0s, but confirm that differences in F0 are unlikely to provide a robust cue for perceptual segregation in CI users.
18
Mc Laughlin M, Reilly RB, Zeng FG. Rate and onset cues can improve cochlear implant synthetic vowel recognition in noise. J Acoust Soc Am 2013; 133:1546-1560. [PMID: 23464025 PMCID: PMC3606303 DOI: 10.1121/1.4789940]
Abstract
Understanding speech-in-noise is difficult for most cochlear implant (CI) users. Speech-in-noise segregation cues are well understood for acoustic hearing but not for electric hearing. This study investigated the effects of stimulation rate and onset delay on synthetic vowel-in-noise recognition in CI subjects. In experiment I, synthetic vowels were presented at 50, 145, or 795 pulse/s and noise at the same three rates, yielding nine combinations. Recognition improved significantly if the noise had a lower rate than the vowel, suggesting that listeners can use temporal gaps in the noise to detect a synthetic vowel. This hypothesis is supported by accurate prediction of synthetic vowel recognition using a temporal integration window model. A similar trend was observed at lower rates in normal-hearing subjects. Experiment II found that for CI subjects, a vowel onset delay improved performance if the noise had a lower or higher rate than the synthetic vowel. These results show that differing rates or onset times can improve synthetic vowel-in-noise recognition, indicating a need to develop speech processing strategies that encode or emphasize these cues.
Affiliation(s)
- Myles Mc Laughlin
- Hearing and Speech Research Laboratory, Department of Otolaryngology-Head and Neck Surgery, University of California, Irvine, California 92697-5320, USA.
19
Gaudrain E, Carlyon RP. Using Zebra-speech to study sequential and simultaneous speech segregation in a cochlear-implant simulation. J Acoust Soc Am 2013; 133:502-518. [PMID: 23297922 PMCID: PMC3785145 DOI: 10.1121/1.4770243]
Abstract
Previous studies have suggested that cochlear implant users may have particular difficulties exploiting opportunities to glimpse clear segments of a target speech signal in the presence of a fluctuating masker. Although it has been proposed that this difficulty is associated with a deficit in linking the glimpsed segments across time, the details of this mechanism are yet to be explained. The present study introduces a method called Zebra-speech developed to investigate the relative contribution of simultaneous and sequential segregation mechanisms in concurrent speech perception, using a noise-band vocoder to simulate cochlear implants. One experiment showed that the saliency of the difference between the target and the masker is a key factor for Zebra-speech perception, as it is for sequential segregation. Furthermore, forward masking played little or no role, confirming that intelligibility was not limited by energetic masking but by across-time linkage abilities. In another experiment, a binaural cue was used to distinguish the target and the masker. It showed that the relative contribution of simultaneous and sequential segregation depended on the spectral resolution, with listeners relying more on sequential segregation when the spectral resolution was reduced. The potential of Zebra-speech as a segregation enhancement strategy for cochlear implants is discussed.
Affiliation(s)
- Etienne Gaudrain
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, CB2 7EF Cambridge, United Kingdom.
20
Ihlefeld A, Litovsky RY. Interaural level differences do not suffice for restoring spatial release from masking in simulated cochlear implant listening. PLoS One 2012; 7:e45296. [PMID: 23028914 PMCID: PMC3447935 DOI: 10.1371/journal.pone.0045296]
Abstract
Spatial release from masking refers to a benefit for speech understanding that occurs when a target talker and a masker talker are spatially separated; in those cases, speech intelligibility for target speech is typically higher than when both talkers are at the same location. In cochlear implant listeners, spatial release from masking is much reduced or absent compared with normal hearing listeners. Perhaps this reduced spatial release occurs because cochlear implant listeners cannot effectively attend to spatial cues. Three experiments examined factors that may interfere with deploying spatial attention to a target talker masked by another talker. To simulate cochlear implant listening, stimuli were vocoded with two unique features. First, we used 50-Hz low-pass filtered speech envelopes and noise carriers, strongly reducing the possibility of temporal pitch cues; second, co-modulation was imposed on target and masker utterances to enhance perceptual fusion between the two sources. Stimuli were presented over headphones. Experiments 1 and 2 presented high-fidelity spatial cues with unprocessed and vocoded speech. Experiment 3 maintained faithful long-term average interaural level differences but presented scrambled interaural time differences with vocoded speech. Results show a robust spatial release from masking in Experiments 1 and 2, and a greatly reduced spatial release in Experiment 3. Faithful long-term average interaural level differences were insufficient for producing spatial release from masking. This suggests that appropriate interaural time differences are necessary for restoring spatial release from masking, at least for a situation where there are few viable alternative segregation cues.
Affiliation(s)
- Antje Ihlefeld
- University of Wisconsin Waisman Center, Madison, Wisconsin, United States of America
- New York University, Center for Neural Science, New York, New York, United States of America
- Ruth Y. Litovsky
- University of Wisconsin Waisman Center, Madison, Wisconsin, United States of America
21
Milczynski M, Chang JE, Wouters J, van Wieringen A. Perception of Mandarin Chinese with cochlear implants using enhanced temporal pitch cues. Hear Res 2012; 285:1-12. [PMID: 22361414 DOI: 10.1016/j.heares.2012.02.006]
Abstract
A cochlear implant (CI) signal processing strategy named F0 modulation (F0mod) was compared with the advanced combination encoder (ACE) strategy in a group of four post-lingually deafened Mandarin Chinese speaking CI listeners. F0mod provides an enhanced temporal pitch cue by amplitude modulating the multichannel electrical stimulation pattern at the fundamental frequency (F0) of the incoming speech signal. Word and sentence recognition tests were carried out in quiet and in noise. The responses for the word-recognition test were further segmented into phoneme and tone scores. Off-line implementations of ACE and F0mod were used, and electrical stimulation patterns were directly streamed to the CI subject's implant. To focus on the feasibility of enhanced temporal cues for tonal language perception, idealized F0 information that was extracted from speech tokens in quiet was used in the F0mod processing of speech-in-noise mixtures. The results indicated significantly better lexical tone perception with the F0mod strategy than with ACE for the male voice (p<0.05). No significant differences in sentence recognition were found between F0mod and ACE.
Affiliation(s)
- Matthias Milczynski
- ExpORL, Dept. Neurosciences, K.U.Leuven, O & N 2, Herestraat 49 bus 721, B-3000 Leuven, Belgium.
22
Goupell MJ, Mostardi MJ. Evidence of the enhancement effect in electrical stimulation via electrode matching (L). J Acoust Soc Am 2012; 131:1007-1010. [PMID: 22352475 PMCID: PMC3292600 DOI: 10.1121/1.3672650]
Abstract
The ability to match a pulsing electrode during multi-electrode stimulation through a research interface was measured in seven cochlear-implant (CI) users. Five listeners were relatively good at the task and two could not perform the task. Performance did not vary as a function of the number of electrodes or stimulation level. Performance on the matching task was not correlated to performance on an electrode-discrimination task. The listeners may have experienced the auditory enhancement effect, and this may have implications for speech recognition in noise for CI users.
Affiliation(s)
- Matthew J Goupell
- Waisman Center, University of Wisconsin, 1500 Highland Avenue, Madison, Wisconsin 53705, USA.
23
Ihlefeld A, Shinn-Cunningham BG, Carlyon RP. Comodulation masking release in speech identification with real and simulated cochlear-implant hearing. J Acoust Soc Am 2012; 131:1315-1324. [PMID: 22352505 PMCID: PMC9014238 DOI: 10.1121/1.3676701]
Abstract
For normal-hearing (NH) listeners, masker energy outside the spectral region of a target signal can improve target detection and identification, a phenomenon referred to as comodulation masking release (CMR). This study examined whether, for cochlear implant (CI) listeners and for NH listeners presented with a "noise vocoded" CI simulation, speech identification in modulated noise is improved by a co-modulated flanking band. In Experiment 1, NH listeners identified noise-vocoded speech in a background of on-target noise with or without a flanking narrow band of noise outside the spectral region of the target. The on-target noise and flanker were either 16-Hz square-wave modulated with the same phase or were unmodulated; the speech was taken from a closed-set corpus. Performance was better in modulated than in unmodulated noise, and this difference was slightly greater when the comodulated flanker was present, consistent with a small CMR of about 1.7 dB for noise-vocoded speech. Experiment 2, which tested CI listeners using the same speech materials, found no advantage for modulated versus unmodulated maskers and no CMR. Thus although NH listeners can benefit from CMR even for speech signals with reduced spectro-temporal detail, no CMR was observed for CI users.
Affiliation(s)
- Antje Ihlefeld
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom.
24
Best V, Laback B, Majdak P. Binaural interference in bilateral cochlear-implant listeners. J Acoust Soc Am 2011; 130:2939-50. [PMID: 22087922 DOI: 10.1121/1.3641400]
Abstract
This work was aimed at determining whether binaural interference occurs in electric hearing, and if so, whether it occurs as a consequence of perceptual grouping (central explanation) or if it is related to the spread of excitation in the cochlea (peripheral explanation). Six bilateral cochlear-implant listeners completed a series of experiments in which they judged the lateral position of a target pulse train, lateralized via interaural time or level differences, in the presence of an interfering diotic pulse train. The target and interferer were presented at widely separated electrode pairs (one basal and one apical). The results are broadly similar to those reported for acoustic hearing. All listeners but one showed significant binaural interference in at least one of the stimulus conditions. In all cases of interference, a robust recovery was observed when the interferer was presented as part of an ongoing stream of identical pulse trains, suggesting that the interference was at least partly centrally mediated. Overall, the results suggest that both simultaneous and sequential grouping mechanisms operate in electric hearing, at least for stimuli with a wide tonotopic separation.
Affiliation(s)
- Virginia Best
- School of Medical Sciences, University of Sydney, Sydney, NSW 2006, Australia.
25
Wang S, Xu L, Mannell R. Relative contributions of temporal envelope and fine structure cues to lexical tone recognition in hearing-impaired listeners. J Assoc Res Otolaryngol 2011; 12:783-94. [PMID: 21833816 DOI: 10.1007/s10162-011-0285-0]
Abstract
It has been reported that normal-hearing Chinese speakers base their lexical tone recognition on fine structure regardless of temporal envelope cues. However, a few psychoacoustic and perceptual studies have demonstrated that listeners with sensorineural hearing impairment may have an impaired ability to use fine structure information, whereas their ability to use temporal envelope information is close to normal. The purpose of this study is to investigate the relative contributions of temporal envelope and fine structure cues to lexical tone recognition in normal-hearing and hearing-impaired native Mandarin Chinese speakers. Twenty-two normal-hearing subjects and 31 subjects with various degrees of sensorineural hearing loss participated in the study. Sixteen sets of Mandarin monosyllables with four tone patterns for each were processed through a "chimeric synthesizer" in which temporal envelope from a monosyllabic word of one tone was paired with fine structure from the same monosyllable of other tones. The chimeric tokens were generated in the three channel conditions (4, 8, and 16 channels). Results showed that differences in tone responses among the three channel conditions were minor. On average, 90.9%, 70.9%, 57.5%, and 38.2% of tone responses were consistent with fine structure for normal-hearing, moderate, moderate to severe, and severely hearing-impaired groups respectively, whereas 6.8%, 21.1%, 31.4%, and 44.7% of tone responses were consistent with temporal envelope cues for the above-mentioned groups. Tone responses that were consistent neither with temporal envelope nor fine structure had averages of 2.3%, 8.0%, 11.1%, and 17.1% for the above-mentioned groups of subjects. Pure-tone average thresholds were negatively correlated with tone responses that were consistent with fine structure, but were positively correlated with tone responses that were based on the temporal envelope cues. 
Consistent with the idea that spectral resolvability is responsible for fine structure coding, these results demonstrated that, as hearing loss becomes more severe, lexical tone recognition relies increasingly on temporal envelope rather than fine structure cues due to widened auditory filters.
Affiliation(s)
- Shuo Wang
- Beijing Institute of Otolaryngology, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, People's Republic of China
26
Kwon BJ. Effects of electrode separation between speech and noise signals on consonant identification in cochlear implants. J Acoust Soc Am 2009; 126:3258-3267. [PMID: 20000939 PMCID: PMC2803724 DOI: 10.1121/1.3257200]
Abstract
The aim of the present study was to examine cochlear implant (CI) users' perceptual segregation of speech from background noise with differing degrees of electrode separation between speech and noise. Eleven users of the Nucleus CI system were tested on consonant identification using an experimental processing scheme called "multi-stream processing" in which speech and noise stimuli were processed separately and interleaved. Speech was presented to either ten (every other electrode) or six electrodes (every fourth electrode). Noise was routed to either the same (the "overlapped" condition) or a different set of electrodes (the "interlaced" condition), where speech and noise electrodes were separated by one- and two-electrode spacings for ten- and six-electrode presentations, respectively. Results indicated a small but significant improvement in consonant recognition (5%-10%) in the interlaced condition with a two-electrode spacing (approximately 1.1 mm) in two subjects. It appears that the results were influenced by peripheral channel interactions, partially accounting for individual variability. Although the overall effect was small and observed from a small number of subjects, the present study demonstrated that CI users' performance on segregating the target from the background might be improved if these sounds were presented with sufficient peripheral separation.
Affiliation(s)
- Bom Jun Kwon
- Department of Communication Sciences and Disorders, University of Utah, 390 S 1530 E, Salt Lake City, Utah 84112, USA.
27
Cooper HR, Roberts B. Simultaneous grouping in cochlear implant listeners: can abrupt changes in level be used to segregate components from a complex tone? J Assoc Res Otolaryngol 2009; 11:89-100. [PMID: 19826870 DOI: 10.1007/s10162-009-0190-y]
Abstract
A sudden increase in the amplitude of a component often causes its segregation from a complex tone, and shorter rise times enhance this effect. We explored whether this also occurs in implant listeners (n = 8). Condition 1 used a 3.5-s "complex tone" comprising concurrent stimulation on five electrodes distributed across the array of the Nucleus CI24 implant. For each listener, the baseline stimulus level on each electrode was set at 50% of the dynamic range (DR). Two 1-s increments of 12.5%, 25%, or 50% DR were introduced in succession on adjacent electrodes within the "inner" three of those activated. Both increments had rise and fall times of 30 and 970 ms or vice versa. Listeners reported which increment was higher in pitch. Some listeners performed above chance for all increment sizes, but only for 50% increments did all listeners perform above chance. No significant effect of rise time was found. Condition 2 replaced amplitude increments with decrements. Only three listeners performed above chance even for 50% decrements. One exceptional listener performed well for 50% decrements with fall and rise times of 970 and 30 ms but around chance for fall and rise times of 30 and 970 ms, indicating successful discrimination based on a sudden rise back to baseline stimulation. Overall, the results suggest that implant listeners can use amplitude changes against a constant background to pick out components from a complex, but generally these must be large compared with those required in normal hearing. For increments, performance depended mainly on above-baseline stimulation of the target electrodes, not rise time. With one exception, performance for decrements was typically very poor.
Affiliation(s)
- Huw R Cooper
- Psychology, School of Life and Health Sciences, Aston University, Birmingham, B4 7ET, UK.
28
Cooper HR, Roberts B. Auditory stream segregation in cochlear implant listeners: measures based on temporal discrimination and interleaved melody recognition. J Acoust Soc Am 2009; 126:1975-1987. [PMID: 19813809 DOI: 10.1121/1.3203210]
Abstract
The evidence that cochlear implant listeners routinely experience stream segregation is limited and equivocal. Streaming in these listeners was explored using tone sequences matched to the center frequencies of the implant's 22 electrodes. Experiment 1 measured temporal discrimination for short (ABA triplet) and longer (12 AB cycles) sequences (tone/silence durations = 60/40 ms). Tone A stimulated electrode 11; tone B stimulated one of 14 electrodes. On each trial, one sequence remained isochronous, and tone B was delayed in the other; listeners had to identify the anisochronous interval. The delay was introduced in the second half of the longer sequences. Prior build-up of streaming should cause thresholds to rise more steeply with increasing electrode separation, but no interaction with sequence length was found. Experiment 2 required listeners to identify which of two target sequences was present when interleaved with distractors (tone/silence durations = 120/80 ms). Accuracy was high for isolated targets, but most listeners performed near chance when loudness-matched distractors were added, even when remote from the target. Only a substantial reduction in distractor level improved performance, and this effect did not interact with target-distractor separation. These results indicate that implantees often do not achieve stream segregation, even in relatively unchallenging tasks.
Affiliation(s)
- Huw R Cooper
- Psychology, School of Life and Health Sciences, Aston University, Birmingham B4 7ET, United Kingdom
29
Luo X, Fu QJ. Concurrent-vowel and tone recognitions in acoustic and simulated electric hearing. J Acoust Soc Am 2009; 125:3223-3233. [PMID: 19425665 PMCID: PMC2806442 DOI: 10.1121/1.3106534] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/16/2008] [Revised: 02/27/2009] [Accepted: 03/05/2009] [Indexed: 05/27/2023]
Abstract
Because of the poor spectral resolution in cochlear implants (CIs), fundamental frequency (F0) cues are not well preserved. Chinese-speaking CI users may have great difficulty understanding speech produced by competing talkers, due to conflicting tones. In this study, normal-hearing listeners' concurrent Chinese syllable recognition was measured with unprocessed speech and CI simulations. Concurrent syllables were constructed by summing two vowels from a male talker (with identical mean F0's) or one vowel from each of a male and a female talker (with a relatively large F0 separation). CI signal processing was simulated using four- and eight-channel noise-band vocoders; the degraded spectral resolution may limit listeners' ability to utilize talker and/or tone differences. The results showed that concurrent speech recognition was significantly poorer with the CI simulations than with unprocessed speech. There were significant interactions between the talker and speech-processing conditions, e.g., better tone and syllable recognitions with the male-female condition for unprocessed speech, and with the male-male condition for eight-channel speech. With the CI simulations, competing tones interfered with concurrent-tone and syllable recognitions, but not vowel recognition. Given limited pitch cues, subjects were unable to use F0 differences between talkers or tones for concurrent Chinese syllable recognition.
Affiliation(s)
- Xin Luo
- Communication and Auditory Neuroscience, House Ear Institute, 2100 West Third Street, Los Angeles, California 90057, USA.
30
Abstract
A common complaint among listeners with hearing loss (HL) is that they have difficulty communicating in common social settings. This article reviews how normal-hearing listeners cope in such settings, especially how they focus attention on a source of interest. Results of experiments with normal-hearing listeners suggest that the ability to selectively attend depends on the ability to analyze the acoustic scene and to form perceptual auditory objects properly. Unfortunately, sound features important for auditory object formation may not be robustly encoded in the auditory periphery of HL listeners. In turn, impaired auditory object formation may interfere with the ability to filter out competing sound sources. Peripheral degradations are also likely to reduce the salience of higher-order auditory cues such as location, pitch, and timbre, which enable normal-hearing listeners to select a desired sound source out of a sound mixture. Degraded peripheral processing is also likely to increase the time required to form auditory objects and focus selective attention so that listeners with HL lose the ability to switch attention rapidly (a skill that is particularly important when trying to participate in a lively conversation). Finally, peripheral deficits may interfere with strategies that normal-hearing listeners employ in complex acoustic settings, including the use of memory to fill in bits of the conversation that are missed. Thus, peripheral hearing deficits are likely to cause a number of interrelated problems that challenge the ability of HL listeners to communicate in social settings requiring selective attention.
Affiliation(s)
- Barbara G Shinn-Cunningham
- Hearing Research Center, Departments of Cognitive and Neural Systems and Biomedical Engineering, Boston University, Boston, MA 02421, USA.
31
Gaudrain E, Grimault N, Healy EW, Béra JC. Streaming of vowel sequences based on fundamental frequency in a cochlear-implant simulation. J Acoust Soc Am 2008; 124:3076-3087. [PMID: 19045793 PMCID: PMC2677355 DOI: 10.1121/1.2988289] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/17/2007] [Revised: 08/21/2008] [Accepted: 08/22/2008] [Indexed: 05/27/2023]
Abstract
Cochlear-implant (CI) users often have difficulties perceiving speech in noisy environments. Although this problem likely involves auditory scene analysis, few studies have examined sequential segregation in CI listening situations. The present study aims to assess the possible role of fundamental frequency (F0) cues for the segregation of vowel sequences, using a noise-excited envelope vocoder that simulates certain aspects of CI stimulation. Obligatory streaming was evaluated using an order-naming task in two experiments involving normal-hearing subjects. In the first experiment, it was found that streaming did not occur based on F0 cues when natural-duration vowels were processed to reduce spectral cues using the vocoder. In the second experiment, shorter duration vowels were used to enhance streaming. Under these conditions, F0-related streaming appeared even when vowels were processed to reduce spectral cues. However, the observed segregation could not be convincingly attributed to temporal periodicity cues. A subsequent analysis of the stimuli revealed that an F0-related spectral cue could have elicited the observed segregation. Thus, streaming under conditions of severely reduced spectral cues, such as those associated with CIs, may potentially occur as a result of this particular cue.
Affiliation(s)
- Etienne Gaudrain
- Neurosciences Sensorielles, Comportement, Cognition, CNRS UMR 5020, Université Lyon 1, 50 Avenue Tony Garnier, 69366 Lyon Cedex 07, France
32
Larsen E, Cedolin L, Delgutte B. Pitch representations in the auditory nerve: two concurrent complex tones. J Neurophysiol 2008; 100:1301-19. [PMID: 18632887 PMCID: PMC2544468 DOI: 10.1152/jn.01361.2007] [Citation(s) in RCA: 45] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Pitch differences between concurrent sounds are important cues used in auditory scene analysis and also play a major role in music perception. To investigate the neural codes underlying these perceptual abilities, we recorded from single fibers in the cat auditory nerve in response to two concurrent harmonic complex tones with missing fundamentals and equal-amplitude harmonics. We investigated the efficacy of rate-place and interspike-interval codes to represent both pitches of the two tones, which had fundamental frequency (F0) ratios of 15/14 or 11/9. We relied on the principle of scaling invariance in cochlear mechanics to infer the spatiotemporal response patterns to a given stimulus from a series of measurements made in a single fiber as a function of F0. Templates created by a peripheral auditory model were used to estimate the F0s of double complex tones from the inferred distribution of firing rate along the tonotopic axis. This rate-place representation was accurate for F0s ≳ 900 Hz. Surprisingly, rate-based F0 estimates were accurate even when the two-tone mixture contained no resolved harmonics, so long as some harmonics were resolved prior to mixing. We also extended methods used previously for single complex tones to estimate the F0s of concurrent complex tones from interspike-interval distributions pooled over the tonotopic axis. The interval-based representation was accurate for F0s ≲ 900 Hz, where the two-tone mixture contained no resolved harmonics. Together, the rate-place and interval-based representations allow accurate pitch perception for concurrent sounds over the entire range of human voice and cat vocalizations.
Affiliation(s)
- Erik Larsen
- Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
33
Gaudrain E, Grimault N, Healy EW, Béra JC. Effect of spectral smearing on the perceptual segregation of vowel sequences. Hear Res 2007; 231:32-41. [PMID: 17597319 PMCID: PMC2128787 DOI: 10.1016/j.heares.2007.05.001] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/01/2006] [Revised: 04/30/2007] [Accepted: 05/10/2007] [Indexed: 11/28/2022]
Abstract
Although segregation of both simultaneous and sequential speech items may be involved in the reception of speech in noisy environments, research on the latter is relatively sparse. Further, previous studies examining the ability of hearing-impaired listeners to form distinct auditory streams have produced mixed results. Finally, there is little work investigating streaming in cochlear implant recipients, who also have poor frequency resolution. The present study focused on the mechanisms involved in the segregation of vowel sequences and potential limitations to segregation associated with poor frequency resolution. An objective temporal-order paradigm was employed in which listeners reported the order of constituent vowels within a sequence. In Experiment 1, it was found that fundamental frequency based mechanisms contribute to segregation. In Experiment 2, reduced frequency tuning often associated with hearing impairment was simulated in normal-hearing listeners. In that experiment, it was found that spectral smearing of the vowels increased accurate identification of their order, presumably by reducing the tendency to form separate auditory streams. These experiments suggest that a reduction in spectral resolution may result in a reduced ability to form separate auditory streams, which may contribute to the difficulties of hearing-impaired listeners, and probably cochlear implant recipients as well, in multi-talker cocktail-party situations.
Affiliation(s)
- Etienne Gaudrain
- Neurosciences & Systèmes sensoriels — CNRS UMR 5020, Université Claude Bernard — Lyon 1, France
- Nicolas Grimault
- Neurosciences & Systèmes sensoriels — CNRS UMR 5020, Université Claude Bernard — Lyon 1, France
- Eric W. Healy
- Speech Psychoacoustics Laboratory, Department of Communication Sciences and Disorders, University of South Carolina, Columbia, 29208 USA