1
Jeon EK, Driscoll V, Mussoi BS, Scheperle R, Guthe E, Gfeller K, Abbas PJ, Brown CJ. Evaluating Changes in Adult Cochlear Implant Users' Brain and Behavior Following Auditory Training. Ear Hear 2024:00003446-990000000-00316. PMID: 39044323. DOI: 10.1097/aud.0000000000001569.
Abstract
OBJECTIVES To describe the effects of two types of auditory training on both behavioral and physiological measures of auditory function in cochlear implant (CI) users, and to examine whether a relationship exists between the behavioral and objective outcome measures. DESIGN This study involved two experiments, both of which used a within-subject design. Outcome measures included behavioral and cortical electrophysiological measures of auditory processing. In Experiment I, 8 CI users participated in music-based auditory training. The training program included short sessions completed in the laboratory as well as a set of 12 sessions that participants completed at home over the course of a month. As part of the training program, participants listened to a range of musical stimuli and were asked to discriminate stimuli that differed in pitch or timbre and to identify melodic changes. Performance was assessed before training and at three intervals during and after training. In Experiment II, 20 CI users participated in a more focused auditory training task: detection of spectral ripple modulation depth. Training consisted of a single 40-minute session that took place in the laboratory under the supervision of the investigators. Behavioral and physiological measures of spectral ripple modulation depth detection were obtained immediately before and after training. Data from both experiments were analyzed using linear mixed regressions, paired t tests, correlations, and descriptive statistics. RESULTS In Experiment I, behavioral measures of pitch discrimination improved significantly after participants completed the laboratory and home-based training sessions. There was no significant effect of training on electrophysiological measures of the auditory N1-P2 onset response or the acoustic change complex (ACC), and no significant relationships between electrophysiological measures and behavioral outcomes after the month-long training. In Experiment II, training had no significant effect on the ACC, although there was a small but significant improvement in behavioral spectral ripple modulation depth thresholds after the short-term training. CONCLUSIONS This study demonstrates that auditory training can improve spectral cue perception in CI users: significant perceptual gains were observed even though cortical electrophysiological responses such as the ACC did not reliably predict training benefit in either the short- or long-term intervention. Future research should explore individual factors that may lead to greater benefit from auditory training, optimize training protocols and outcome measures, and examine the generalizability of these findings.
Affiliation(s)
- Eun Kyung Jeon
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Virginia Driscoll
- Department of Music Education and Therapy, East Carolina University, Greenville, North Carolina, USA
- Bruna S Mussoi
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee, USA
- Rachel Scheperle
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Emily Guthe
- Department of Music Therapy, Cleveland State University, Cleveland, Ohio, USA
- Kate Gfeller
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Paul J Abbas
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Carolyn J Brown
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
2
Noble AR, Halverson DM, Resnick J, Broncheau M, Rubinstein JT, Horn DL. Spectral Resolution and Speech Perception in Cochlear Implanted School-Aged Children. Otolaryngol Head Neck Surg 2024; 170:230-238. PMID: 37365946. PMCID: PMC10836047. DOI: 10.1002/ohn.408.
Abstract
OBJECTIVE Cochlear implantation of prelingually deaf infants provides auditory input sufficient to develop spoken language; however, outcomes remain variable. Inability to participate in speech perception testing limits assessment of device efficacy in young listeners. In postlingually implanted adults (aCI), speech perception correlates with spectral resolution, an ability that relies on two independent components: frequency resolution (FR) and spectral modulation sensitivity (SMS). Whether spectral resolution correlates with speech perception in prelingually implanted children (cCI) is unknown. In this study, FR and SMS were measured using a spectral ripple discrimination (SRD) task and were correlated with vowel and consonant identification. It was hypothesized that prelingually deaf cCI would show immature SMS relative to postlingually deaf aCI and that FR would correlate with speech identification. STUDY DESIGN Cross-sectional study. SETTING In-person booth testing. METHODS SRD was used to determine the highest spectral ripple density perceived at various modulation depths. FR and SMS were derived from spectral modulation transfer functions. Vowel and consonant identification was measured, and SRD performance and speech identification were analyzed for correlation. RESULTS Fifteen prelingually implanted cCI and 13 postlingually implanted aCI were included. FR and SMS were similar between cCI and aCI. Better FR was associated with better speech identification for most measures. CONCLUSION Prelingually implanted cCI demonstrated adult-like FR and SMS, and FR correlated with speech identification. FR may be a useful measure of CI efficacy in young listeners.
Affiliation(s)
- Anisha R. Noble
- Division of Pediatric Otolaryngology – Head and Neck Surgery, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, USA
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Destinee M. Halverson
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Jesse Resnick
- Department of Internal Medicine, University of Michigan, Ann Arbor, MI, USA
- Mariette Broncheau
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Jay T. Rubinstein
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- David L. Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
3
Anderson SR, Burg E, Suveg L, Litovsky RY. Review of Binaural Processing With Asymmetrical Hearing Outcomes in Patients With Bilateral Cochlear Implants. Trends Hear 2024; 28:23312165241229880. PMID: 38545645. PMCID: PMC10976506. DOI: 10.1177/23312165241229880.
Abstract
Bilateral cochlear implants (BiCIs) provide several benefits, including improved speech understanding in noise and sound source localization. However, the benefit that bilateral implants provide varies considerably across individuals. Here we consider one reason for this variability: differences in hearing function between the two ears, that is, interaural asymmetry. Thus far, investigations of interaural asymmetry have been highly specialized within various research areas. The goal of this review is to integrate these studies in one place and motivate future research on interaural asymmetry. We first consider bottom-up processing, where binaural cues are represented using excitation-inhibition of signals from the left and right ears, varying with the location of the sound in space, and represented by the lateral superior olive in the auditory brainstem. We then consider top-down processing via predictive coding, which assumes that perception stems from expectations based on context and prior sensory experience, represented by cascading series of cortical circuits; an internal perceptual model is maintained and updated in light of incoming sensory input. Together, we hope that this amalgamation of physiological, behavioral, and modeling studies will help bridge gaps in the field of binaural hearing and promote a clearer understanding of the implications of interaural asymmetry for future research on optimal patient interventions.
Affiliation(s)
- Sean R. Anderson
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Department of Physiology and Biophysics, University of Colorado Anschutz Medical School, Aurora, CO, USA
- Emily Burg
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Lukas Suveg
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, USA
- Department of Surgery, Division of Otolaryngology, University of Wisconsin-Madison, Madison, WI, USA
4
Benoit C, Carlson RJ, King MC, Horn DL, Rubinstein JT. Behavioral characterization of the cochlear amplifier lesion due to loss of function of stereocilin (STRC) in human subjects. Hear Res 2023; 439:108898. PMID: 37890241. PMCID: PMC10756798. DOI: 10.1016/j.heares.2023.108898.
Abstract
Loss of function of stereocilin (STRC) is the second most common cause of inherited hearing loss. Loss of the stereocilin protein, encoded by the STRC gene, disconnects the outer hair cells (OHCs) from the tectorial membrane. This affects only OHC function, producing deficits in active cochlear frequency selectivity and amplification despite preservation of normal inner hair cells. Better understanding of the cochlear features associated with mutation of STRC will improve our knowledge of normal cochlear function and the pathophysiology of hearing impairment, and could enhance hearing aid and cochlear implant signal processing. Nine subjects with homozygous or compound heterozygous loss-of-function mutations in STRC were included, ages 7-24 years. Temporal and spectral modulation perception were measured and characterized by spectral and temporal modulation transfer functions. Speech-in-noise perception was studied with spondee identification in adaptive steady-state noise and with AzBio sentences in multitalker babble at 0 and -5 dB SNR. Results were compared with normal hearing (NH) and cochlear implant (CI) listeners to place the STRC-/- listeners' hearing capacity in context. Spectral ripple discrimination thresholds in the STRC-/- subjects were poorer than in NH listeners (p < 0.0001) but better than in CI listeners (p < 0.0001). Frequency resolution appeared impaired in the STRC-/- group compared to NH listeners but did not reach statistical significance (p = 0.06). Amplitude modulation detection thresholds in the STRC-/- group did not differ significantly from NH listeners (p = 0.06) but were better than in CI subjects (p < 0.0001). Temporal resolution in STRC-/- subjects was similar to NH listeners (p = 0.98) and better than in CI listeners (p = 0.04). The spondee reception threshold in the STRC-/- group was worse than in NH listeners (p = 0.0008) but better than in CI listeners (p = 0.0001). For AzBio sentences at 0 dB SNR, performance was similar between the STRC-/- and NH groups (88% and 97%, respectively). At -5 dB SNR, STRC-/- performance was significantly poorer than NH (40% vs. 85%), yet much better than that of CI listeners, who scored 54% at +5 dB SNR in children and 53% at +10 dB SNR in adults. To our knowledge, this is the first study of the psychoacoustic performance of human subjects lacking cochlear amplification but with normal inner hair cell function. Our data demonstrate preservation of temporal resolution and a trend toward impaired frequency resolution that did not reach statistical significance. Speech-in-noise perception was impaired relative to NH listeners, but all measures were better than those of CI listeners. It remains to be seen whether hearing aid modifications customized for the spectral deficits of STRC-/- listeners can improve speech understanding in noise. Since cochlear implants are also limited by deficient spectral selectivity, STRC-/- hearing may provide an upper bound on what could be obtained with better temporal coding in electrical stimulation.
Affiliation(s)
- Charlotte Benoit
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Ryan J Carlson
- Departments of Genome Sciences and Medicine, University of Washington, Seattle, WA, USA
- Mary-Claire King
- Departments of Genome Sciences and Medicine, University of Washington, Seattle, WA, USA
- David L Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Division of Pediatric Otolaryngology, Department of Surgery, Seattle Children's Hospital, Seattle, WA, USA
- Jay T Rubinstein
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, USA
- Department of Bioengineering, University of Washington, Seattle, WA, USA
5
Berger JI, Gander PE, Kim S, Schwalje AT, Woo J, Na YM, Holmes A, Hong JM, Dunn CC, Hansen MR, Gantz BJ, McMurray B, Griffiths TD, Choi I. Neural Correlates of Individual Differences in Speech-in-Noise Performance in a Large Cohort of Cochlear Implant Users. Ear Hear 2023; 44:1107-1120. PMID: 37144890. PMCID: PMC10426791. DOI: 10.1097/aud.0000000000001357.
Abstract
OBJECTIVES Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems, and individuals vary in their ability to understand SiN. This variability cannot be explained by simple peripheral hearing profiles, but recent work by our group (Kim et al. 2021, NeuroImage) highlighted central neural factors underlying the variance in SiN ability in normal-hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear implant (CI) users. DESIGN We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California Consonant Test, a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant words) and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location was included in multiple linear regression analyses, along with several other demographic and hearing factors, as predictors of SiN performance. RESULTS In general, there was good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was instead predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors of performance on both word recognition tasks: the California Consonant Test (conducted simultaneously with the electroencephalography recording) and the consonant-nucleus-consonant test (conducted offline). These correlations held even after accounting for known predictors of performance, including residual low-frequency hearing thresholds. In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in NH subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS These data indicate a neurophysiological correlate of SiN performance, revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone. The results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underpinned by different mechanisms. Finally, the contrast with prior reports of NH listeners in the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
Affiliation(s)
- Joel I. Berger
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Phillip E. Gander
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Subong Kim
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Adam T. Schwalje
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Young-min Na
- Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Ann Holmes
- Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, USA
- Jean M. Hong
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Camille C. Dunn
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Marlan R. Hansen
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bruce J. Gantz
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bob McMurray
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Timothy D. Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Inyong Choi
- Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
6
Tao DD, Shi B, Galvin JJ, Liu JS, Fu QJ. Frequency detection, frequency discrimination, and spectro-temporal pattern perception in older and younger typically hearing adults. Heliyon 2023; 9:e18922. PMID: 37583764. PMCID: PMC10424075. DOI: 10.1016/j.heliyon.2023.e18922.
Abstract
Elderly adults often experience difficulties in speech understanding, possibly due to age-related deficits in frequency perception. It is unclear whether age-related deficits in frequency perception differ between the apical or basal regions of the cochlea. It is also unclear how aging might differently affect frequency discrimination or detection of a change in frequency within a stimulus. In the present study, pure-tone frequency thresholds were measured in 19 older (61-74 years) and 20 younger (22-28 years) typically hearing adults. Participants were asked to discriminate between reference and probe frequencies or to detect changes in frequency within a probe stimulus. Broadband spectro-temporal pattern perception was also measured using the spectro-temporal modulated ripple test (SMRT). Frequency thresholds were significantly poorer in the basal than in the apical region of the cochlea; the deficit in the basal region was 2 times larger for the older than for the younger group. Frequency thresholds were significantly poorer in the older group, especially in the basal region where frequency detection thresholds were 3.9 times poorer for the older than for the younger group. SMRT thresholds were 1.5 times better for the younger than for the older group. Significant age effects were observed for SMRT thresholds and for frequency thresholds only in the basal region. SMRT thresholds were significantly correlated with frequency thresholds only in the older group. The poorer frequency and spectro-temporal pattern perception may contribute to age-related deficits in speech perception, even when audiometric thresholds are nearly normal.
Affiliation(s)
- Duo-Duo Tao
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
- Bin Shi
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
- John J. Galvin
- House Institute Foundation, Los Angeles, CA, 90057, USA
- University Hospital Center of Tours, Tours, 37000, France
- Ji-Sheng Liu
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, 215006, China
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA, 90095, USA
7
Huang Z, Chen S, Zhang G, Almadhor A, Li R, Li M, Abbas M, Nguyen Le B, Zhang J, Huang Y. Nanocatalysts as fast and powerful medical intervention: Bridging cochlear implant therapies and advanced modelling using Hidden Markov Models (HMMs) for effective treatment of infections. Environ Res 2023:116285. PMID: 37301496. DOI: 10.1016/j.envres.2023.116285.
Abstract
As human population growth and waste from technologically advanced industries threaten to destabilise our delicate ecological equilibrium, the global spotlight intensifies on environmental contamination and climate-related changes. These challenges extend beyond our external environment and have significant effects on our internal ecosystems. The inner ear, which is responsible for balance and auditory perception, is a prime example: when these sensory mechanisms are impaired, disorders such as deafness can develop. Traditional treatment methods, including systemic antibiotics, are frequently ineffective due to inadequate inner ear penetration, and conventional techniques for administering substances to the inner ear likewise fail to achieve adequate concentrations. In this context, cochlear implants laden with nanocatalysts emerge as a promising strategy for the targeted treatment of inner ear infections. Coated with biocompatible nanoparticles containing specific nanocatalysts, these implants can degrade or neutralise contaminants linked to inner ear infections. This method enables the controlled release of nanocatalysts directly at the infection site, maximising therapeutic efficacy and minimising adverse effects. In vivo and in vitro studies have demonstrated that these implants are effective at eliminating infections, reducing inflammation, and fostering tissue regeneration in the ear. This study investigates the application of hidden Markov models (HMMs) to nanocatalyst-loaded cochlear implants. The HMM is trained on surgical phases in order to accurately identify the phases associated with implant utilisation, facilitating precise placement of surgical instruments within the ear, with a location accuracy between 91% and 95% and a standard deviation between 1% and 5% for both sites. In conclusion, nanocatalysts serve as potent medicinal instruments, bridging cochlear implant therapies and advanced modelling using hidden Markov models for the effective treatment of inner ear infections. Cochlear implants loaded with nanocatalysts offer a promising way to combat inner ear infections and enhance patient outcomes by addressing the limitations of conventional treatments.
8
Deroche MLD, Wolfe J, Neumann S, Manning J, Towler W, Alemi R, Bien AG, Koirala N, Hanna L, Henry L, Gracco VL. Auditory evoked response to an oddball paradigm in children wearing cochlear implants. Clin Neurophysiol 2023; 149:133-145. PMID: 36965466. DOI: 10.1016/j.clinph.2023.02.179.
Abstract
OBJECTIVE Although children with cochlear implants (CIs) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (the N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs and age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS Auditory evoked responses differentiated children with CIs based on their good or poor language and literacy skills. SIGNIFICANCE This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.
Affiliation(s)
- Mickael L D Deroche
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Jace Wolfe
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Sara Neumann
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Jacy Manning
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- William Towler
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Razieh Alemi
- Department of Psychology, Concordia University, 7141 Sherbrooke St. West, Montreal, Quebec H4B 1R6, Canada
- Alexander G Bien
- University of Oklahoma College of Medicine, Otolaryngology, 800 Stanton L Young Blvd., Oklahoma City, OK 73117, USA
- Nabin Koirala
- Haskins Laboratories, 300 George St., New Haven, CT 06511, USA
- Lindsay Hanna
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
- Lauren Henry
- Hearts for Hearing Foundation, 11500 Portland Av., Oklahoma City, OK 73120, USA
9
Noble AR, Resnick J, Broncheau M, Klotz S, Rubinstein JT, Werner LA, Horn DL. Spectrotemporal Modulation Discrimination in Infants With Normal Hearing. Ear Hear 2023; 44:109-117. PMID: 36218270. PMCID: PMC9780152. DOI: 10.1097/aud.0000000000001277.
Abstract
OBJECTIVES Spectral resolution correlates with speech understanding in post-lingually deafened adults with cochlear implants (CIs) and is proposed as a non-linguistic measure of device efficacy in implanted infants. However, spectral resolution develops gradually through adolescence regardless of hearing status. Spectral resolution relies on two different factors that mature at markedly different rates: Resolution of ripple peaks (frequency resolution) matures during infancy whereas sensitivity to across-spectrum intensity modulation (spectral modulation sensitivity) matures by age 12. Investigation of spectral resolution as a clinical measure for implanted infants requires understanding how each factor develops and constrains speech understanding with a CI. This study addresses the limitations of the present literature. First, the paucity of relevant data requires replication and generalization across measures of spectral resolution. Second, criticism that previously used measures of spectral resolution may reflect non-spectral cues needs to be addressed. Third, rigorous behavioral measurement of spectral resolution in individual infants is limited by attrition. To address these limitations, we measured discrimination of spectrally modulated, or rippled, sounds at two modulation depths in normal hearing (NH) infants and adults. Non-spectral cues were limited by constructing stimuli with spectral envelopes that change in phase across time. Pilot testing suggested that dynamic spectral envelope stimuli appeared to hold infants' attention and lengthen habituation time relative to previously used static ripple stimuli. A post-hoc condition was added to ensure that the stimulus noise carrier was not obscuring age differences in spectral resolution. The degree of improvement in discrimination at higher ripple depth represents spectral frequency resolution independent of the overall threshold. 
It was hypothesized that adults would have better thresholds than infants but both groups would show similar effects of modulation depth. DESIGN Participants were 53 6- to 7-month-old infants and 23 adults with NH with no risk factors for hearing loss who passed bilateral otoacoustic emissions screening. Stimuli were created from complexes with 33- or 100-tones per octave, amplitude-modulated across frequency and time with constant 5 Hz envelope phase-drift and spectral ripple density from 1 to 20 ripples per octave (RPO). An observer-based, single-interval procedure measured the highest RPO (1 to 19) a listener could discriminate from a 20 RPO stimulus. Age-group and stimulus pure-tone complex were between-subjects variables whereas modulation depth (10 or 20 dB) was within-subjects. Linear-mixed model analysis was used to test for the significance of the main effects and interactions. RESULTS All adults and 94% of infants provided ripple density thresholds at both modulation depths. The upper range of threshold approached 17 RPO with the 100-tones/octave carrier and 20 dB depth condition. As expected, mean threshold was significantly better with the 100-tones/octave compared with the 33-tones/octave complex, better in adults than in infants, and better at 20 dB than 10 dB modulation depth. None of the interactions reached significance, suggesting that the effect of modulation depth on the threshold was not different for infants or adults. CONCLUSIONS Spectral ripple discrimination can be measured in infants with minimal listener attrition using dynamic ripple stimuli. Results are consistent with previous findings that spectral resolution is immature in infancy due to immature spectral modulation sensitivity rather than frequency resolution.
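As an illustrative aside (not the authors' code), a phase-drifting spectral-ripple tone complex of the kind described above can be sketched in a few lines of Python; every parameter value below (sampling rate, tone spacing, ripple density, depth, drift rate) is a placeholder assumption rather than the study's exact setting.

```python
import numpy as np

def dynamic_ripple(dur=1.0, fs=44100, tones_per_oct=33,
                   f_lo=250.0, n_oct=5, rpo=8.0,
                   depth_db=20.0, drift_hz=5.0, seed=0):
    """Sketch of a phase-drifting spectral-ripple tone complex.

    Tones are spaced logarithmically (tones_per_oct per octave); each
    tone's level follows a sinusoidal spectral envelope (rpo ripples
    per octave, depth_db peak-to-trough) whose phase drifts at
    drift_hz, so the ripple pattern slides across frequency in time.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(dur * fs)) / fs
    n_tones = int(tones_per_oct * n_oct)
    octs = np.arange(n_tones) / tones_per_oct      # tone position in octaves re f_lo
    freqs = f_lo * 2.0 ** octs
    sig = np.zeros_like(t)
    for f, x in zip(freqs, octs):
        # time-varying component level in dB, drifting in ripple phase
        env_db = (depth_db / 2.0) * np.sin(2 * np.pi * (rpo * x + drift_hz * t))
        amp = 10.0 ** (env_db / 20.0)
        sig += amp * np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
    return sig / np.max(np.abs(sig))               # peak-normalize
```

Setting depth_db to 0 yields a flat-envelope version of the same complex, and a discrimination task of the kind described would vary rpo of the test stimulus against a fixed-density standard.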
Affiliation(s)
- Anisha R. Noble: Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA
- Jesse Resnick: Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA
- Mariette Broncheau: Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA
- Stephanie Klotz: Department of Speech and Hearing Sciences, University of Washington, Seattle, WA
- Jay T. Rubinstein: Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA
- Lynne A. Werner: Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA; Department of Speech and Hearing Sciences, University of Washington, Seattle, WA
- David L. Horn: Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle, WA; Department of Speech and Hearing Sciences, University of Washington, Seattle, WA

10
Anderson SR, Kan A, Litovsky RY. Asymmetric temporal envelope sensitivity: Within- and across-ear envelope comparisons in listeners with bilateral cochlear implants. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 152:3294. [PMID: 36586876 PMCID: PMC9731674 DOI: 10.1121/10.0016365] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/17/2022] [Revised: 11/14/2022] [Accepted: 11/16/2022] [Indexed: 06/17/2023]
Abstract
For listeners with bilateral cochlear implants (BiCIs), patient-specific differences in the interface between cochlear implant (CI) electrodes and the auditory nerve can lead to degraded temporal envelope information, compromising the ability to distinguish between targets of interest and background noise. It is unclear how comparisons of degraded temporal envelope information across spectral channels (i.e., electrodes) affect the ability to detect differences in the temporal envelope, specifically amplitude modulation (AM) rate. In this study, two pulse trains were presented simultaneously via pairs of electrodes in different places of stimulation, within and/or across ears, with identical or differing AM rates. Results from 11 adults with BiCIs indicated that sensitivity to differences in AM rate was greatest when stimuli were paired between different places of stimulation in the same ear. Sensitivity from pairs of electrodes was predicted by the poorer electrode in the pair or the difference in fidelity between both electrodes in the pair. These findings suggest that electrodes yielding poorer temporal fidelity act as a bottleneck to comparisons of temporal information across frequency and ears, limiting access to the cues used to segregate sounds, which has important implications for device programming and optimizing patient outcomes with CIs.
Affiliation(s)
- Sean R Anderson: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Alan Kan: School of Engineering, Macquarie University, Sydney, New South Wales 2109, Australia
- Ruth Y Litovsky: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA

11
Nassar AAM, Bassiouny S, Abdel Rahman TT, Hanafy KM. Assessment of outcome measures after audiological computer-based auditory training in cochlear implant children. Int J Pediatr Otorhinolaryngol 2022; 160:111217. [PMID: 35816970 DOI: 10.1016/j.ijporl.2022.111217] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Revised: 06/12/2022] [Accepted: 06/22/2022] [Indexed: 10/17/2022]
Abstract
OBJECTIVE To validate the clinical use of the acoustic change complex (ACC) as an objective outcome measure of auditory training in Egyptian children with cochlear implants (CIs), to explore how far electrophysiological measures correlate with behavioral measures of training outcome, and to examine the efficacy of computer-based auditory training programs (CBATPs) in the rehabilitation of children with CIs. METHODS Sixty Arabic-speaking children participated in the study. Forty children using a monaural CI device, aged 8 to 17 years, served as the study group (20 children in subgroup A and 20 in subgroup B). Both subgroups received traditional speech therapy sessions; subgroup A additionally completed a CBATP at home for three months. Twenty age- and sex-matched children with normal hearing served as a control group to standardize the stimuli used to elicit the ACC. The study group underwent detailed history taking, a parent-reported questionnaire (MAIS, Arabic version), aided sound-field evaluation, psychophysical evaluation with the auditory fusion test (AFT), speech perception testing according to language age, ACC recording in response to gaps in 1000 Hz tones, and language evaluation. This work-up was repeated after 3 and 6 months for both study subgroups. RESULTS Children in subgroup A showed improved AFT thresholds at the 3- and 6-month post-training follow-ups. The ACC was detectable in 85% of subgroup A, 85% of subgroup B, and 100% of the control group. Lower ACC gap detection thresholds were obtained after 3 months in subgroup A, but only after 6 months in subgroup B. There were statistically significant differences between the initial assessment and the 3- and 6-month follow-ups in ACC P1 and N2 latencies and amplitudes in both study subgroups; in subgroup A, the ACC P1 amplitude at 6 months post-training was also significantly larger than at the 3-month follow-up. There was a highly significant correlation between AFT thresholds and the ACC gap detection threshold. CONCLUSIONS The ACC can be used as a reliable tool for evaluating auditory training outcomes in children with CIs. The ACC gap detection threshold can predict psychophysical temporal resolution after auditory training in difficult-to-test populations. CBATPs are an easy and accessible method that may be effective in improving CI outcomes.
Affiliation(s)
- Samia Bassiouny: ORL Dept, Faculty of Medicine, Ain Shams University, Abassia Street, Cairo, Egypt
- Karim Mohamed Hanafy: ORL Dept, Faculty of Medicine, Ain Shams University, Abassia Street, Cairo, Egypt

12
Supin AY, Milekhina ON, Nechaev DI, Tomozova MS. Ripple density resolution dependence on ripple width. PLoS One 2022; 17:e0270296. [PMID: 35867679 PMCID: PMC9307196 DOI: 10.1371/journal.pone.0270296] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2022] [Accepted: 06/08/2022] [Indexed: 11/18/2022] Open
Abstract
The goal of the study was to investigate how variations in ripple width influence the ripple density resolution. The influence of the ripple width was investigated with two experimental paradigms: (i) discrimination between a rippled test signal and a rippled reference signal with opposite ripple phases and (ii) discrimination between a rippled test signal and a flat reference signal. The ripple density resolution depended on the ripple width: the narrower the width, the higher the resolution. For distinguishing between two rippled signals, the resolution varied from 15.1 ripples/oct at a ripple width of 9% of the ripple frequency spacing to 8.1 ripples/oct at 64%. For distinguishing between a rippled test signal and a non-rippled reference signal, the resolution varied from 85 ripples/oct at a ripple width of 9% to 9.3 ripples/oct at a ripple width of 64%. For distinguishing between two rippled signals, the result can be explained by the increased ripple depth in the excitation pattern due to the widening of the inter-ripple gaps. For distinguishing between a rippled test signal and a non-rippled reference signal, the result can be explained by the increased ratio between the autocorrelated and uncorrelated components of the input signal.
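For readers unfamiliar with rippled-spectrum stimuli, the sketch below shows one common way to impose spectral ripples of a controllable relative width on white noise by masking its magnitude spectrum; the function name and all parameter values are illustrative assumptions, not the signals used in this study.

```python
import numpy as np

def rippled_noise(rpo=8.0, width_frac=0.25, dur=0.5, fs=44100,
                  f_lo=125.0, f_hi=8000.0, phase=0.0, seed=0):
    """Noise with log-frequency spectral ripples.

    Spectral peaks repeat every 1/rpo octaves; each peak occupies
    width_frac of the ripple spacing (e.g., 0.25 = peaks covering a
    quarter of each period), so smaller width_frac gives narrower
    ripples. phase (in ripple periods) shifts the whole pattern;
    phase=0.5 gives the opposite-phase ripple.
    """
    rng = np.random.default_rng(seed)
    n = int(dur * fs)
    spec = np.fft.rfft(rng.standard_normal(n))     # white-noise spectrum
    f = np.fft.rfftfreq(n, 1.0 / fs)
    octs = np.log2(np.maximum(f, f_lo) / f_lo)     # position in octaves re f_lo
    frac = (octs * rpo + phase) % 1.0              # position within one ripple period
    mask = ((frac < width_frac) & (f >= f_lo) & (f <= f_hi)).astype(float)
    sig = np.fft.irfft(spec * mask, n)
    return sig / np.max(np.abs(sig))               # peak-normalize
```

A test/reference pair for paradigm (i) above would use phase=0.0 and phase=0.5; paradigm (ii) would pair a rippled signal with band-limited flat noise.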
Affiliation(s)
- Alexander Ya. Supin: Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russia
- Olga N. Milekhina: Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russia
- Dmitry I. Nechaev: Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russia
- Marina S. Tomozova: Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russia

13
Xie D, Luo J, Chao X, Li J, Liu X, Fan Z, Wang H, Xu L. Relationship Between the Ability to Detect Frequency Changes or Temporal Gaps and Speech Perception Performance in Post-lingual Cochlear Implant Users. Front Neurosci 2022; 16:904724. [PMID: 35757528 PMCID: PMC9213807 DOI: 10.3389/fnins.2022.904724] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2022] [Accepted: 05/17/2022] [Indexed: 12/03/2022] Open
Abstract
Previous studies that used modulation stimuli to examine the relative effects of frequency resolution and temporal resolution on speech perception in cochlear implant (CI) users failed to reach a consistent conclusion. In this study, frequency change detection and temporal gap detection were used to assess frequency resolution and temporal resolution, respectively, and psychophysical and neurophysiological methods were used to investigate the effects of both on speech perception in post-lingual CI users. We examined the effects of psychophysical results [frequency change detection threshold (FCDT) and gap detection threshold (GDT)] and acoustic change complex (ACC) responses (evoked threshold, latency, or amplitude of ACCs evoked by frequency changes or temporal gaps) on speech perception [recognition of monosyllabic words, disyllabic words, and sentences in quiet, and sentence recognition threshold (SRT) in noise]. Thirty-one adult post-lingual CI users of Mandarin Chinese were enrolled in the study. The stimuli used to evoke ACCs to frequency changes were 800-ms pure tones (tone frequency 1,000 Hz); the frequency change occurred at the midpoint of the tone, with six percentages of frequency change (0, 2, 5, 10, 20, and 50%). Silent gaps of different durations (0, 5, 10, 20, 50, and 100 ms) were inserted in the middle of 800-ms white noise to evoke ACCs to temporal gaps. The FCDT and GDT were obtained with two 2-alternative forced-choice procedures. The results showed no significant correlation between CI hearing thresholds and speech perception in the study participants. In the multiple regression analysis of the joint influence of psychophysical measures and ACC responses on speech perception, GDT significantly predicted every speech perception index, and the ACC amplitude evoked by the temporal gap significantly predicted recognition of disyllabic words in quiet and SRT in noise. We conclude that when the abilities to detect frequency changes and temporal gaps are considered simultaneously, frequency change detection may have no significant effect on speech perception, whereas temporal gap detection can significantly predict it.
Affiliation(s)
- Dianzhao Xie, Jianfen Luo, Xiuhua Chao, Jinming Li, Xianqi Liu, Zhaomin Fan, Haibo Wang, and Lei Xu: Department of Otolaryngology-Head and Neck Surgery, Shandong Provincial ENT Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China

14
Winn MB, O’Brien G. Distortion of Spectral Ripples Through Cochlear Implants Has Major Implications for Interpreting Performance Scores. Ear Hear 2022; 43:764-772. [PMID: 34966157 PMCID: PMC9010354 DOI: 10.1097/aud.0000000000001162] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
The spectral ripple discrimination task is a psychophysical measure that has been found to correlate with speech recognition in listeners with cochlear implants (CIs). However, at ripple densities above a critical value (around 2 RPO, but device-specific), the sparse spectral sampling of CI processors distorts the stimulus, producing aliasing and unintended changes in modulation depth. As a result, spectral ripple thresholds above that value are not ordered monotonically along the RPO dimension and cannot be interpreted as reflecting better or worse spectral resolution, which undermines correlation measurements. These stimulus distortions are not remediated by changing stimulus phase, indicating that the problem cannot be solved with spectrotemporally modulated stimuli. Speech generally has very low-density spectral modulations, leading to questions about the mechanism of correlation between high ripple thresholds and speech recognition. Existing data showing correlations between ripple discrimination and speech recognition include many observations above the aliasing limit. These scores should be treated with caution, and experimenters would benefit from prospectively considering the limitations of the spectral ripple test.
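The aliasing argument can be made concrete with a toy calculation: sample a sinusoidal log-frequency ripple at the centers of a fixed number of analysis channels per octave, a crude stand-in for a CI filterbank. The channel density used here (3.2 channels per octave) is an arbitrary illustrative value, not any particular device's layout.

```python
import numpy as np

def sampled_ripple_depth(rpo, channels_per_oct=3.2, depth_db=20.0, n_oct=5):
    """Peak-to-trough depth (dB) of a sinusoidal log-frequency ripple
    after sampling it at the centers of analysis channels spaced
    equally on a log-frequency axis."""
    x = np.arange(int(channels_per_oct * n_oct)) / channels_per_oct  # channel centers (octaves)
    env = (depth_db / 2.0) * np.sin(2 * np.pi * rpo * x)
    return env.max() - env.min()
```

At half the channel density (here 1.6 RPO, the spectral-domain Nyquist limit) the sampled ripple depth collapses toward zero, and a 3.7-RPO ripple produces the same channel samples as a 0.5-RPO ripple (3.7 aliases to 3.7 − 3.2 = 0.5), so thresholds above that limit no longer order spectral resolution monotonically.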
Affiliation(s)
- Matthew B. Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, USA

15
Jahn KN, Arenberg JG, Horn DL. Spectral Resolution Development in Children With Normal Hearing and With Cochlear Implants: A Review of Behavioral Studies. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:1646-1658. [PMID: 35201848 PMCID: PMC9499384 DOI: 10.1044/2021_jslhr-21-00307] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/02/2021] [Revised: 09/09/2021] [Accepted: 12/01/2021] [Indexed: 06/14/2023]
Abstract
PURPOSE This review article provides a theoretical overview of the development of spectral resolution in children with normal hearing (cNH) and in those who use cochlear implants (CIs), with an emphasis on methodological considerations. The aim was to identify key directions for future research on spectral resolution development in children with CIs. METHOD A comprehensive literature review was conducted to summarize and synthesize previously published behavioral research on spectral resolution development in normal and impaired auditory systems. CONCLUSIONS In cNH, performance on spectral resolution tasks continues to improve through the teenage years and is likely driven by gradual maturation of across-channel intensity resolution. A small but growing body of evidence from children with CIs suggests a more complex relationship between spectral resolution development, patient demographics, and the quality of the CI electrode-neuron interface. Future research should aim to distinguish between the effects of patient-specific variables and the underlying physiology on spectral resolution abilities in children of all ages who are hard of hearing and use auditory prostheses.
Affiliation(s)
- Kelly N. Jahn: Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson; Callier Center for Communication Disorders, The University of Texas at Dallas
- Julie G. Arenberg: Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA; Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston
- David L. Horn: Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology – Head and Neck Surgery, University of Washington, Seattle; Division of Otolaryngology, Seattle Children's Hospital, WA

16
Davidson LS, Geers AE, Uchanski RM. Spectral Modulation Detection Performance and Speech Perception in Pediatric Cochlear Implant Recipients. Am J Audiol 2021; 30:1076-1087. [PMID: 34670098 PMCID: PMC9126113 DOI: 10.1044/2021_aja-21-00076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2021] [Revised: 07/13/2021] [Accepted: 07/19/2021] [Indexed: 11/09/2022] Open
Abstract
PURPOSE The aims of this study were, for pediatric cochlear implant (CI) recipients, (a) to determine the effect of age on spectral modulation detection (SMD) ability and compare that age effect to that of their typically hearing (TH) peers; (b) to identify demographic, cognitive, and audiological factors associated with SMD ability; and (c) to determine the unique contribution of SMD ability to segmental and suprasegmental speech perception performance. METHOD A total of 104 pediatric CI recipients and 38 TH peers (ages 6-11 years) completed a test of SMD. CI recipients also completed tests of segmental (e.g., word recognition in noise, vowels and consonants in quiet) and suprasegmental (e.g., talker discrimination, stress discrimination, and emotion identification) perception, nonverbal intelligence, and working memory. Regression analyses were used to examine the effects of group and age on percent-correct SMD scores. For the CI group, the effects of demographic, audiological, and cognitive variables on SMD performance, and the effects of SMD on speech perception, were examined. RESULTS The TH group performed significantly better than the CI group on SMD, and both groups showed better performance with increasing age. Significant predictors of SMD performance for the CI group were age and nonverbal intelligence. SMD performance predicted significant variance in both segmental and suprasegmental perception, and the variance predicted was nearly double for suprasegmental perception. CONCLUSIONS Children in the CI group, on average, scored lower than their TH peers, although the slopes of improvement in SMD with age did not differ between the groups. The significant effect of nonverbal intelligence on SMD performance in CI recipients indicates that difficulties inherent in the task affect outcomes. SMD ability predicted speech perception scores, with a more prominent role in suprasegmental than in segmental speech perception. SMD ability may provide a useful nonlinguistic tool for predicting speech perception benefit, with cautious interpretation based on age and cognitive function.
Affiliation(s)
- Lisa S. Davidson: Department of Otolaryngology, Washington University School of Medicine in St. Louis, MO
- Ann E. Geers: School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson
- Rosalie M. Uchanski: Department of Otolaryngology, Washington University School of Medicine in St. Louis, MO

17
Horn D, Walter M, Rubinstein J, Lau BK. Electrophysiological responses to spectral ripple envelope phase inversion in typical hearing 2- to 4-month-olds. PROCEEDINGS OF MEETINGS ON ACOUSTICS. ACOUSTICAL SOCIETY OF AMERICA 2021; 45:050003. [PMID: 35891886 PMCID: PMC9311477 DOI: 10.1121/2.0001558] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Affiliation(s)
- David Horn, Max Walter, Jay Rubinstein, and Bonnie K. Lau: Department of Otolaryngology-Head & Neck Surgery, University of Washington

18
Nittrouer S, Lowenstein JH, Sinex DG. The contribution of spectral processing to the acquisition of phonological sensitivity by adolescent cochlear implant users and normal-hearing controls. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:2116. [PMID: 34598601 PMCID: PMC8463097 DOI: 10.1121/10.0006416] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/20/2021] [Revised: 08/27/2021] [Accepted: 09/01/2021] [Indexed: 05/31/2023]
Abstract
This study tested the hypotheses that (1) adolescents with cochlear implants (CIs) experience impaired spectral processing abilities, and (2) those impaired spectral processing abilities constrain acquisition of skills based on sensitivity to phonological structure but not those based on lexical or syntactic (lexicosyntactic) knowledge. To test these hypotheses, spectral modulation detection (SMD) thresholds were measured for 14-year-olds with normal hearing (NH) or CIs. Three measures each of phonological and lexicosyntactic skills were obtained and used to generate latent scores of each kind of skill. Relationships between SMD thresholds and both latent scores were assessed. Mean SMD threshold was poorer for adolescents with CIs than for adolescents with NH. Both latent lexicosyntactic and phonological scores were poorer for the adolescents with CIs, but the latent phonological score was disproportionately so. SMD thresholds were significantly associated with phonological but not lexicosyntactic skill for both groups. The only audiologic factor that also correlated with phonological latent scores for adolescents with CIs was the aided threshold, but it did not explain the observed relationship between SMD thresholds and phonological latent scores. Continued research is required to find ways of enhancing spectral processing for children with CIs to support their acquisition of phonological sensitivity.
Affiliation(s)
- Susan Nittrouer, Joanna H Lowenstein, and Donal G Sinex: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville, Florida 32610, USA

19
Bosen AK, Sevich VA, Cannon SA. Forward Digit Span and Word Familiarity Do Not Correlate With Differences in Speech Recognition in Individuals With Cochlear Implants After Accounting for Auditory Resolution. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:3330-3342. [PMID: 34251908 PMCID: PMC8740688 DOI: 10.1044/2021_jslhr-20-00574] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/25/2020] [Revised: 01/12/2021] [Accepted: 04/09/2021] [Indexed: 06/07/2023]
Abstract
Purpose In individuals with cochlear implants, speech recognition is not associated with tests of working memory that primarily reflect storage, such as forward digit span. In contrast, our previous work found that vocoded speech recognition in individuals with normal hearing was correlated with performance on a forward digit span task. A possible explanation for this difference across groups is that variability in auditory resolution across individuals with cochlear implants could conceal the true relationship between speech and memory tasks. Here, our goal was to determine if performance on forward digit span and speech recognition tasks are correlated in individuals with cochlear implants after controlling for individual differences in auditory resolution. Method We measured sentence recognition ability in 20 individuals with cochlear implants with Perceptually Robust English Sentence Test Open-set sentences. Spectral and temporal modulation detection tasks were used to assess individual differences in auditory resolution, auditory forward digit span was used to assess working memory storage, and self-reported word familiarity was used to assess vocabulary. Results Individual differences in speech recognition were predicted by spectral and temporal resolution. A correlation was found between forward digit span and speech recognition, but this correlation was not significant after controlling for spectral and temporal resolution. No relationship was found between word familiarity and speech recognition. Forward digit span performance was not associated with individual differences in auditory resolution. Conclusions Our findings support the idea that sentence recognition in individuals with cochlear implants is primarily limited by individual differences in working memory processing, not storage. Studies examining the relationship between speech and memory should control for individual differences in auditory resolution.
Affiliation(s)
- Victoria A. Sevich: Boys Town National Research Hospital, Omaha, NE; The Ohio State University, Columbus

20
Archer-Boyd AW, Goehring T, Carlyon RP. The Effect of Free-Field Presentation and Processing Strategy on a Measure of Spectro-Temporal Processing by Cochlear-Implant Listeners. Trends Hear 2021; 24:2331216520964281. [PMID: 33305696 PMCID: PMC7734493 DOI: 10.1177/2331216520964281] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
The STRIPES (Spectro-Temporal Ripple for Investigating Processor EffectivenesS) test is a psychophysical test of spectro-temporal resolution developed for cochlear-implant (CI) listeners. Previously, the test had been strictly controlled to minimize the introduction of extraneous, non-spectro-temporal cues. Here, the effect of relaxing many of those controls was investigated to ascertain the generalizability of the STRIPES test. Preemphasis compensation was removed from the STRIPES stimuli, the test was presented over a loudspeaker at a level similar to conversational speech and above the automatic gain control threshold of the CI processor, and listeners were tested using the everyday settings of their clinical devices. There was no significant difference in STRIPES thresholds measured across conditions for the 10 CI listeners tested. One listener obtained higher (better) thresholds when listening with their clinical processor. An analysis of longitudinal results showed excellent test–retest reliability of STRIPES over multiple listening sessions under similar conditions. Overall, the results show that the STRIPES test is robust to extraneous cues and that thresholds are reliable over time. It is sufficiently robust for use with different processing strategies, with free-field presentation, and in nonresearch settings.
Affiliation(s)
- Alan W Archer-Boyd, Tobias Goehring, and Robert P Carlyon: Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom

21
Abstract
Sequences of phonologically similar words are more difficult to remember than phonologically distinct sequences. This study investigated whether this difficulty arises in the acoustic similarity of auditory stimuli or in the corresponding phonological labels in memory. Participants reconstructed sequences of words which were degraded with a vocoder. We manipulated the phonological similarity of response options across two groups. One group was trained to map stimulus words onto phonologically similar response labels which matched the recorded word; the other group was trained to map words onto a set of plausible responses which were mismatched from the original recordings but were selected to have less phonological overlap. Participants trained on the matched responses were able to learn responses with less training and recall sequences more accurately than participants trained on the mismatched responses, even though the mismatched responses were more phonologically distinct from one another and participants were unaware of the mismatch. The relative difficulty of recalling items in the correct position was the same across both sets of response labels. Mismatched responses impaired recall accuracy across all positions except the final item in each list. These results are consistent with the idea that increased difficulty of mapping acoustic stimuli onto phonological forms impairs serial recall. Increased mapping difficulty could impair retention of memoranda and impede consolidation into phonological forms, which would impair recall in adverse listening conditions.
Affiliation(s)
- Adam K Bosen
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
- Elizabeth Monzingo
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
- Angela M AuBuchon
- Hearing and Speech Perception, Boys Town National Research Hospital, Omaha, NE, USA
22
O'Neill ER, Parke MN, Kreft HA, Oxenham AJ. Role of semantic context and talker variability in speech perception of cochlear-implant users and normal-hearing listeners. J Acoust Soc Am 2021; 149:1224. [PMID: 33639827] [PMCID: PMC7895533] [DOI: 10.1121/10.0003532]
Abstract
This study assessed the impact of semantic context and talker variability on speech perception by cochlear-implant (CI) users and compared their overall performance and between-subjects variance with that of normal-hearing (NH) listeners under vocoded conditions. Thirty post-lingually deafened adult CI users were tested, along with 30 age-matched and 30 younger NH listeners, on sentences with and without semantic context, presented in quiet and noise, spoken by four different talkers. Additional measures included working memory, non-verbal intelligence, and spectral-ripple detection and discrimination. Semantic context and between-talker differences influenced speech perception to similar degrees for both CI users and NH listeners. Between-subjects variance for speech perception was greatest in the CI group but remained substantial in both NH groups, despite the uniformly degraded stimuli in these two groups. Spectral-ripple detection and discrimination thresholds in CI users were significantly correlated with speech perception, but a single set of vocoder parameters for NH listeners was not able to capture average CI performance in both speech and spectral-ripple tasks. The lack of difference in the use of semantic context between CI users and NH listeners suggests no overall differences in listening strategy between the groups, when the stimuli are similarly degraded.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Morgan N Parke
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
23
Goykhburg MV, Nechaev DI, Bakhshinyan VV, Tavartkiladze GA. [Evaluation of the cochlear implantation users rehabilitation results using psychoacoustic methods]. Vestn Otorinolaringol 2021; 86:10-16. [PMID: 34964322] [DOI: 10.17116/otorino20218606110]
Abstract
Currently, the number of patients with bilateral sensorineural deafness treated with cochlear implantation (CI) is increasing in the Russian Federation, making methods for assessing the auditory rehabilitation of these patients increasingly relevant. OBJECTIVE To investigate the correlation between speech intelligibility in quiet and the frequency resolving power (FRP) of hearing in CI users, using a ripple-spectrum phase-reversal test (RSPRT). MATERIAL AND METHODS The study included 30 CI users (three after bilateral CI) aged 13 to 63 years, with 1 to 16 years of CI experience. Nineteen patients used CI systems manufactured by Cochlear Ltd. (Australia) and 11 used systems manufactured by Advanced Bionics (Switzerland). All subjects underwent pure tone audiometry (PTA) and speech audiometry in quiet with multi-syllable speech material, using a two-channel clinical audiometer AC-40 (Interacoustics A/S, Denmark), a PC with recorded phonetic material, and an SP90 loudspeaker (Interacoustics A/S, Denmark); FRP was estimated with the RSPRT, run on the same PC and presented in a free sound field through the SP90 loudspeaker. RESULTS On free-field PTA, sound perception thresholds in all subjects corresponded to mild sensorineural hearing loss, lying between 25 and 30 dB nHL from 500 Hz to 4 kHz. Speech intelligibility in quiet in the free sound field ranged from 5 to 100%. The mean RSPRT result was 1.94 ripples per octave (RPO) at 1 kHz, 2.3 RPO at 2 kHz, and 2.2 RPO at 4 kHz.
A significant correlation between speech intelligibility in quiet and the frequency resolution of hearing was found at 1 and 4 kHz. The correlation was strongest at 1 kHz (r=0.57, p=0.0005), weaker at 4 kHz (r=0.46, p=0.009), and at the boundary of significance at 2 kHz (r=0.34, p=0.051). CONCLUSIONS There is a correlation between speech intelligibility in quiet and the FRP of hearing, which supports the use of the RSPRT in assessing the auditory rehabilitation of patients after CI.
Affiliation(s)
- M V Goykhburg
- Russian Scientific and Clinical Center for Audiology and Hearing Prosthetics of the Federal Medical and Biological Agency, Moscow, Russia
- D I Nechaev
- Severtsov Institute of Ecology and Evolution of the Russian Academy of Sciences, Moscow, Russia
- V V Bakhshinyan
- Russian Scientific and Clinical Center for Audiology and Hearing Prosthetics of the Federal Medical and Biological Agency, Moscow, Russia
- Russian Medical Academy for Continuous Professional Education, Moscow, Russia
- G A Tavartkiladze
- Russian Scientific and Clinical Center for Audiology and Hearing Prosthetics of the Federal Medical and Biological Agency, Moscow, Russia
- Russian Medical Academy for Continuous Professional Education, Moscow, Russia
24
Nechaev DI, Milekhina ON, Tomozova MS, Supin AY. High Ripple-Density Resolution for Discriminating Between Rippled and Nonrippled Signals: Effect of Temporal Processing or Combination Products? Trends Hear 2021; 25:23312165211010163. [PMID: 33926309] [PMCID: PMC8111533] [DOI: 10.1177/23312165211010163]
Abstract
The goal of the study was to investigate the role of combination products in the finding that ripple-density resolution estimates are higher when a rippled signal is discriminated from a nonrippled noise signal than when two rippled signals are discriminated from each other. To this end, a noise band was used to mask the frequency band of expected low-frequency combination products. A three-alternative forced-choice procedure with adaptive ripple-density variation was used. The mean background (unmasked) ripple-density resolution was 9.8 ripples/oct for rippled reference signals and 21.8 ripples/oct for nonrippled reference signals. Low-frequency maskers reduced the ripple-density resolution. For masker levels from -10 to 10 dB re. signal, the ripple-density resolution for nonrippled reference signals was approximately twice as high as that for rippled reference signals. At a masker level as high as 20 dB re. signal, the ripple-density resolution decreased in both discrimination tasks. This result leads to the conclusion that low-frequency combination products are not responsible for the task-dependent difference in ripple-density resolution estimates.
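The three-alternative forced-choice procedure with adaptive ripple-density variation can be sketched as follows. The 2-down/1-up rule, multiplicative step, and reversal averaging here are generic illustrative choices, since the abstract states only that an adaptive 3AFC procedure was used:

```python
def ripple_density_3afc(respond, start_density=2.0, step_factor=1.5, n_reversals=8):
    """Adaptive track for ripple-density resolution in a 3AFC task.

    `respond(density)` returns True when the listener picks the correct
    interval at that ripple density. The 2-down/1-up rule and multiplicative
    step are illustrative defaults, not the authors' exact procedure.
    """
    density, streak, direction, reversals = start_density, 0, 0, []
    while len(reversals) < n_reversals:
        if respond(density):
            streak += 1
            if streak == 2:                     # two correct in a row: harder
                streak = 0
                if direction == -1:
                    reversals.append(density)   # down-to-up turnaround
                direction = +1
                density *= step_factor
        else:                                   # one wrong: easier
            streak = 0
            if direction == +1:
                reversals.append(density)       # up-to-down turnaround
            direction = -1
            density /= step_factor
    tail = reversals[-6:]                       # average the final reversals
    return sum(tail) / len(tail)
```

With a deterministic simulated listener whose limit is 10 ripples/oct, the track oscillates around that value and the reversal average lands near it.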
Affiliation(s)
- Dmitry I. Nechaev
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
- Olga N. Milekhina
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
- Marina S. Tomozova
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
- Alexander Y. Supin
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russian Federation
25
Zhou N, Dixon S, Zhu Z, Dong L, Weiner M. Spectrotemporal Modulation Sensitivity in Cochlear-Implant and Normal-Hearing Listeners: Is the Performance Driven by Temporal or Spectral Modulation Sensitivity? Trends Hear 2020; 24:2331216520948385. [PMID: 32895024] [PMCID: PMC7482033] [DOI: 10.1177/2331216520948385]
Abstract
This study examined the contribution of temporal and spectral modulation sensitivity to discrimination of stimuli modulated in both the time and frequency domains. The spectrotemporally modulated stimuli contained spectral ripples that shifted systematically across frequency over time at a repetition rate of 5 Hz. As the ripple density of the stimulus increased, the modulation depth of the 5 Hz amplitude modulation (AM) decreased. Spectrotemporal modulation discrimination was compared with subjects' ability to discriminate static spectral ripples and to detect slow AM. The general pattern in both the cochlear implant (CI) and normal-hearing groups was that spectrotemporal modulation thresholds correlated more strongly with AM detection than with static ripple discrimination. CI subjects' spectrotemporal modulation thresholds were also highly correlated with speech recognition in noise when partialing out static ripple discrimination, but the correlation was not significant when partialing out AM detection. The results indicated that temporal information was more heavily weighted in spectrotemporal modulation discrimination and, for CI subjects, that AM sensitivity drove the correlation between spectrotemporal modulation thresholds and speech recognition. The results suggest that for the rates tested here, temporal information processing may limit performance more than spectral information processing in both CI users and normal-hearing listeners.
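A drifting spectral ripple of the kind described above can be written as a single spectrotemporal envelope. This is a schematic definition on a log-frequency axis, not the authors' stimulus-generation code:

```python
import numpy as np

def st_ripple_envelope(octaves, t, ripple_density, rate_hz=5.0, depth=1.0):
    """Spectrotemporal ripple: spectral ripples that drift across frequency.

    `octaves` is position on a log-frequency axis. At any fixed frequency the
    level fluctuates at `rate_hz` (the 5 Hz repetition rate in the study); at
    any instant the spectrum is rippled at `ripple_density` ripples/oct.
    Schematic sketch only.
    """
    return 1.0 + depth * np.cos(2 * np.pi * (ripple_density * octaves - rate_hz * t))
```

At a fixed spectral position, the envelope repeats every 1/rate_hz seconds, which is why the stimulus carries a slow AM cue in addition to its spectral ripple.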
Affiliation(s)
- Ning Zhou
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Susannah Dixon
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Zhen Zhu
- Department of Engineering, East Carolina University, Greenville, North Carolina, United States
- Lixue Dong
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
- Marti Weiner
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina, United States
26
Jorgensen EJ, McCreery RW, Kirby BJ, Brennan M. Effect of level on spectral-ripple detection threshold for listeners with normal hearing and hearing loss. J Acoust Soc Am 2020; 148:908. [PMID: 32873021] [PMCID: PMC7443170] [DOI: 10.1121/10.0001706]
Abstract
This study investigated the effect of presentation level on spectral-ripple detection for listeners with and without sensorineural hearing loss (SNHL). Participants were 25 listeners with normal hearing and 25 listeners with SNHL. Spectral-ripple detection thresholds (SRDTs) were estimated at three spectral densities (0.5, 2, and 4 ripples per octave, RPO) and three to four sensation levels (SLs) (10, 20, 40, and, when possible, 60 dB SL). Each participant was also tested at 90 dB sound pressure level (SPL). Results indicated that level affected SRDTs, but the effect of level depended on ripple density and hearing status. For all listeners and all RPO conditions, SRDTs improved from 10 to 40 dB SL. In the 2- and 4-RPO conditions, SRDTs became poorer from the 40 dB SL condition to the 90 dB SPL condition. The results suggest that audibility likely limits spectral-ripple detection at low SLs for all ripple densities, whereas spectral resolution likely limits spectral-ripple detection at high SLs and ripple densities. For optimal ripple detection across all listeners, clinicians and researchers should present stimuli at 40 dB SL. To avoid absolute-level confounds, a presentation level of 80 dB SPL can also be used.
Affiliation(s)
- Erik J Jorgensen
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa 52242, USA
- Ryan W McCreery
- Boys Town National Research Hospital, Omaha, Nebraska 68124, USA
- Benjamin J Kirby
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton, Texas 76203, USA
- Marc Brennan
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln, Lincoln, Nebraska 68588, USA
27
Abstract
OBJECTIVES The Quick Spectral Modulation Detection (QSMD) test provides a quick and clinically implementable spectral resolution estimate for cochlear implant (CI) users. However, the original QSMD software (QSMD(MySound)) has technical and usability limitations that prevent widespread distribution and implementation. In this article, we introduce EasyQSMD, a new, freely available software package intended to both simplify and standardize spectral resolution measurements. DESIGN QSMD was measured for 20 CI users using both software packages. RESULTS No differences between the two software packages were detected, and based on the 95% confidence interval of the difference between tests, that difference is expected to be less than 2 percentage points. The average test duration was under 4 minutes. CONCLUSIONS EasyQSMD is considered functionally equivalent to QSMD(MySound), providing a clinically feasible and quick estimate of spectral resolution for CI users.
28
Forward masking patterns by low and high-rate stimulation in cochlear implant users: Differences in masking effectiveness and spread of neural excitation. Hear Res 2020; 389:107921. [PMID: 32097828] [DOI: 10.1016/j.heares.2020.107921]
Abstract
The goal of the present study was to compare forward masking patterns produced by low- and high-rate stimulation in cochlear implant users. Postlingually deafened Cochlear Nucleus® device users participated in the study. In experiment 1, two maskers of different rates (250 and 1000 pulses per second) were set at levels that produced equal masking for a probe presented on the same electrode as the maskers. This aligned the two masking functions at the on-site probe location. Their forward masking patterns for the far probes were then compared. Results showed that the slope of the masked probe-threshold decay as a function of probe-masker separation was steeper for the high-rate than for the low-rate masker. A linear model indicated that this difference in spread of neural excitation (SOE) was accounted for by two factors that were not correlated with each other. One factor was that the low-rate masker required a considerably higher current level to be as effective in masking as the high-rate masker. The second factor was the effect of stimulation rate on loudness, i.e., integration of multiple pulses. This was consistent with our hypothesis that if an increase in stimulation rate does not produce an increased total neural response, the change in rate is unlikely to change the spatial distribution of the neural activity. Interestingly, the difference in masking effectiveness of the maskers predicted subjects' speech recognition: poorer performers were those who showed more comparable masking effects for maskers of different rates. The difference in masking effectiveness may indirectly measure the auditory neurons' excitability, which predicts speech recognition. In experiment 2, the SOE of the high-rate and low-rate maskers was compared at a clinically relevant level, i.e., equal loudness.
At equal loudness, high-rate stimulation not only produced an overall greater amount of forward masking, but also a shallower decay of masking with probe-masker separation (wider SOE), compared to low rate. The difference in SOE was the opposite to the findings from experiment 1. Whether the maskers were calibrated for equal masking or loudness, the absolute current level was always higher for the low-rate masker, which suggests that the SOE patterns cannot be explained by current spread alone. The fact that high-rate stimulation produced greater masking and wider SOE at equal loudness may explain why using high stimulation rates has not produced consistent benefits for speech recognition, and why lowering stimulation rate from the manufacturer's default sometimes results in improved speech recognition for subjects.
29
Resnick JM, Horn DL, Noble AR, Rubinstein JT. Spectral aliasing in an acoustic spectral ripple discrimination task. J Acoust Soc Am 2020; 147:1054. [PMID: 32113324] [PMCID: PMC7112708] [DOI: 10.1121/10.0000608]
Abstract
Spectral ripple discrimination tasks are commonly used to probe spectral resolution in cochlear implant (CI), normal-hearing (NH), and hearing-impaired individuals, and have also been used to examine the development of spectral resolution in NH and CI children. In this work, stimulus sine-wave carrier density was identified as a critical variable in an example spectral ripple-based task, the Spectro-Temporally Modulated Ripple (SMR) Test, and it was demonstrated that previous uses of the test in NH listeners sometimes employed carrier densities insufficient to represent the relevant ripple densities. Insufficient carrier densities produced spectral under-sampling that both eliminated ripple cues at high ripple densities and introduced unintended structured interference between the carriers and intended ripples at particular ripple densities. This effect produced non-monotonic psychometric functions for NH listeners that would cause systematic underestimation of thresholds with adaptive techniques. Studies of spectral ripple detection in CI users probe a density regime below where this source of aliasing occurs, as CI signal processing limits dense ripple representation. While these analyses and experiments focused on the SMR Test, any task in which discrete pure-tone carriers spanning frequency space are modulated to approximate a desired pattern must be designed with consideration of the described spectral aliasing effect.
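The under-sampling effect can be illustrated with a minimal sketch: when discrete carriers spaced evenly on a log-frequency axis sample a ripple denser than half the carrier density, the sampled values are identical to those of a lower "alias" density. The carrier and ripple densities below are hypothetical, chosen only to make the aliasing visible:

```python
import numpy as np

def sampled_ripple(ripple_density, carrier_density, n_carriers=33):
    """Ripple values at discrete pure-tone carrier positions.

    Carriers are spaced evenly on a log-frequency axis at `carrier_density`
    carriers per octave; the ripple is `ripple_density` ripples per octave.
    Illustrative sketch, not the SMR Test implementation.
    """
    x = np.arange(n_carriers) / carrier_density   # carrier positions in octaves
    return np.cos(2 * np.pi * ripple_density * x)

carriers_per_oct = 8.0             # hypothetical carrier density
target = 6.0                       # above the 4 ripples/oct "Nyquist" limit
alias = carriers_per_oct - target  # the 2 ripples/oct pattern it collapses onto
```

Because cos(2*pi*d*k/C) equals cos(2*pi*(C-d)*k/C) at integer carrier indices k, the 6 ripples/oct pattern is indistinguishable from a 2 ripples/oct pattern at these carrier positions.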
Affiliation(s)
- Jesse M Resnick
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- David L Horn
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- Anisha R Noble
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
- Jay T Rubinstein
- Department of Otolaryngology-Head and Neck Surgery, University of Washington, Box 357923, Seattle, Washington 98195-7923, USA
30
Nechaev DI, Milekhina ON, Supin AY. Estimates of Ripple-Density Resolution Based on the Discrimination From Rippled and Nonrippled Reference Signals. Trends Hear 2019; 23:2331216518824435. [PMID: 30669951] [DOI: 10.1177/2331216518824435]
Abstract
Rippled-spectrum stimuli are used to evaluate the resolution of the spectro-temporal structure of sounds. Measurements of spectrum-pattern resolution imply the discrimination between the test and reference stimuli. Therefore, estimates of rippled-pattern resolution could depend on both the test stimulus and the reference stimulus type. In this study, the ripple-density resolution was measured using combinations of two test stimuli and two reference stimuli. The test stimuli were rippled-spectrum signals with constant phase or rippled-spectrum signals with ripple-phase reversals. The reference stimuli were rippled-spectrum signals with opposite ripple phase to the test or nonrippled signals. The spectra were centered at 2 kHz and had an equivalent rectangular bandwidth of 1 oct and a level of 70 dB sound pressure level. A three-alternative forced-choice procedure was combined with an adaptive procedure. With rippled reference stimuli, the mean ripple-density resolution limits were 8.9 ripples/oct (phase-reversals test stimulus) or 7.7 ripples/oct (constant-phase test stimulus). With nonrippled reference stimuli, the mean resolution limits were 26.1 ripples/oct (phase-reversals test stimulus) or 22.2 ripples/oct (constant-phase test stimulus). Different contributions of excitation-pattern and temporal-processing mechanisms are assumed for measurements with rippled and nonrippled reference stimuli: The excitation-pattern mechanism is more effective for the discrimination of rippled stimuli that differ in their ripple-phase patterns, whereas the temporal-processing mechanism is more effective for the discrimination of rippled and nonrippled stimuli.
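A rippled magnitude spectrum of the kind used here, together with its opposite-ripple-phase reference, can be sketched as follows (an illustrative parameterization, not the authors' code). A useful sanity check is that the test and opposite-phase patterns sum to a flat spectrum:

```python
import numpy as np

def rippled_spectrum(freqs_hz, density, center_hz=2000.0, phase=0.0, depth=1.0):
    """Sinusoidal ripple pattern, in ripples per octave, on a log-frequency axis.

    phase=0 gives the test pattern; phase=np.pi gives the opposite-phase
    reference used in the rippled-reference paradigm. Illustrative only.
    """
    octaves = np.log2(np.asarray(freqs_hz) / center_hz)
    return 1.0 + depth * np.cos(2 * np.pi * density * octaves + phase)

freqs = np.geomspace(1414.0, 2828.0, 512)            # ~1-oct band centered near 2 kHz
test_sig = rippled_spectrum(freqs, 8.9)              # test pattern
ref_sig = rippled_spectrum(freqs, 8.9, phase=np.pi)  # opposite ripple phase
```

The nonrippled-reference condition simply replaces `ref_sig` with a flat spectrum, which is the easier discrimination that yields the higher resolution limits reported above.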
Affiliation(s)
- Dmitry I Nechaev
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russia
- Olga N Milekhina
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russia
- Alexander Ya Supin
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow, Russia
31
Jeddi Z, Lotfi Y, Moossavi A, Bakhshi E, Hashemi SB. Correlation between Auditory Spectral Resolution and Speech Perception in Children with Cochlear Implants. Iran J Med Sci 2019; 44:382-389. [PMID: 31582862] [PMCID: PMC6754529] [DOI: 10.30476/ijms.2019.44967]
Abstract
Background: Variability in speech performance is a major concern for children with cochlear implants (CIs). Spectral resolution is an important acoustic component in speech perception. Considerable variability and limitations of spectral resolution in children with CIs may lead to individual differences in speech performance. The aim of this study was to assess the correlation between auditory spectral resolution and speech perception in pediatric CI users.
Methods: This cross-sectional study was conducted in Shiraz, Iran, in 2017. The frequency discrimination threshold (FDT) and the spectral-temporal modulated ripple discrimination threshold (SMRT) were measured for 75 pre-lingual hearing-impaired children with CIs (age=8-12 y). Word recognition and sentence perception tests were completed to assess speech perception. The Pearson correlation analysis and multiple linear regression analysis were used to determine the correlation between the variables and to determine the predictive variables of speech perception, respectively.
Results: There was a significant correlation between the SMRT and word recognition (r=0.573, P<0.001). The FDT was also significantly correlated with word recognition (r=0.487, P<0.001). Sentence perception correlated significantly with both the SMRT and the FDT. Chronological age and age at implantation each correlated significantly with the SMRT but not with the FDT.
Conclusion: Auditory spectral resolution correlated well with speech perception among our children with CIs. Spectral resolution ability accounted for approximately 40% of the variance in speech perception among the children with CIs.
Affiliation(s)
- Zahra Jeddi
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Younes Lotfi
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Abdollah Moossavi
- Department of Otolaryngology and Head and Neck Surgery, School of Medicine, Iran University of Medical Sciences, Tehran, Iran
- Enayatollah Bakhshi
- Department of Biostatistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Seyed Basir Hashemi
- Department of Otolaryngology, Khalili Hospital, Shiraz University of Medical Sciences, Shiraz, Iran
32
Milekhina ON, Nechaev DI, Supin AY. Rippled-spectrum resolution dependence on frequency: Estimates obtained by discrimination from rippled and nonrippled reference signals. J Acoust Soc Am 2019; 146:2231. [PMID: 31672006] [DOI: 10.1121/1.5127835]
Abstract
The resolution of spectral ripples is a useful test of the spectral resolution of hearing. However, different measurement paradigms may yield diverging results because of a paradigm-dependent contribution of excitation-pattern and temporal-processing mechanisms. In the present study, ripple-density resolution was measured in normal-hearing listeners for several frequency bands (centered at 0.5, 1, 2, and 4 kHz) using two paradigms: (i) discrimination of a rippled-spectrum test signal from a rippled reference signal differing in its ripple phase pattern, and (ii) discrimination of a rippled-spectrum test signal from a nonrippled reference signal. For the rippled reference signals, resolution depended only slightly on signal frequency. For the nonrippled reference signals, resolution depended strongly on signal frequency, varying from 8.8 ripples/oct at 0.5 kHz to 34.2 ripples/oct at 4 kHz. Excitation-pattern and temporal-processing models of spectral analysis were considered. Predictions of the excitation-pattern model agreed with the data obtained with the rippled reference signals, whereas predictions of the temporal-processing model agreed with the data obtained with the nonrippled reference signals. Thus, depending on which reference-signal type is used, ripple-density resolution estimates characterize the discrimination abilities of the corresponding mechanism.
Affiliation(s)
- Olga N Milekhina
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow 119071, Russia
- Dmitry I Nechaev
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow 119071, Russia
- Alexander Ya Supin
- Institute of Ecology and Evolution, Russian Academy of Sciences, Moscow 119071, Russia
33
O'Neill ER, Kreft HA, Oxenham AJ. Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions. J Acoust Soc Am 2019; 146:195. [PMID: 31370651] [PMCID: PMC6637026] [DOI: 10.1121/1.5116009]
Abstract
This study examined the contribution of perceptual and cognitive factors to speech-perception abilities in cochlear-implant (CI) users. Thirty CI users were tested on word intelligibility in sentences with and without semantic context, presented in quiet and in noise. Performance was compared with measures of spectral-ripple detection and discrimination, thought to reflect peripheral processing, as well as with cognitive measures of working memory and non-verbal intelligence. Thirty age-matched and thirty younger normal-hearing (NH) adults also participated, listening via tone-excited vocoders, adjusted to produce mean performance for speech in noise comparable to that of the CI group. Results suggest that CI users may rely more heavily on semantic context than younger or older NH listeners, and that non-auditory working memory explains significant variance in the CI and age-matched NH groups. Between-subject variability in spectral-ripple detection thresholds was similar across groups, despite the spectral resolution for all NH listeners being limited by the same vocoder, whereas speech perception scores were more variable between CI users than between NH listeners. The results highlight the potential importance of central factors in explaining individual differences in CI users and question the extent to which standard measures of spectral resolution in CIs reflect purely peripheral processing.
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
34
Croghan NBH, Smith ZM. Speech Understanding With Various Maskers in Cochlear-Implant and Simulated Cochlear-Implant Hearing: Effects of Spectral Resolution and Implications for Masking Release. Trends Hear 2019; 22:2331216518787276. [PMID: 30022730] [PMCID: PMC6053854] [DOI: 10.1177/2331216518787276]
Abstract
The purpose of this study was to investigate the relationship between psychophysical spectral resolution and sentence reception in various types of interfering backgrounds for listeners with cochlear implants and normal-hearing subjects listening to vocoded speech. Spectral resolution was measured with a spectral modulation detection (SMD) task. For speech testing, maskers included stationary speech-shaped noise (SSN), four-talker babble, multitone noise, and a competing talker. To explore the possible trade-offs between spectral resolution and susceptibility to different types of maskers, the degree of simulated current spread was varied within the vocoder group, achieving a range of performance for SMD and speech tasks. Greater simulated current spread was detrimental to both spectral resolution and speech recognition, suggesting that interventions that decrease current spread may improve performance for both tasks. Better SMD sensitivity was significantly correlated with improved sentence reception. In addition, differences in sentence reception across the four maskers were significantly associated with SMD across the combined group of cochlear-implant and vocoder subjects. Masking release (MR) was quantified as the signal-to-noise ratio difference in speech reception threshold between the SSN and competing talker. Several individual cochlear-implant subjects demonstrated substantial MR, in contrast to previous studies, and the degree of MR increased with better SMD thresholds across subjects. The results of this study suggest that alternative masker types, particularly competing talkers, are more sensitive than stationary SSN to differences in spectral resolution in the cochlear-implant population.
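The masking-release definition above is a simple difference of speech reception thresholds (SRTs), sketched here with hypothetical example values:

```python
def masking_release_db(srt_ssn_db, srt_masker_db):
    """Masking release per the definition above: the SRT benefit (in dB SNR)
    of an alternative masker relative to stationary speech-shaped noise (SSN).
    Positive values mean the alternative masker (e.g., a competing talker)
    allowed speech reception at a worse SNR. Example values are hypothetical.
    """
    return srt_ssn_db - srt_masker_db

# e.g., SRT of -2 dB SNR in SSN vs -8 dB SNR with a competing talker
mr = masking_release_db(-2.0, -8.0)   # 6 dB of masking release
```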
Affiliation(s)
- Naomi B H Croghan
- Denver Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA; Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, CO, USA
- Zachary M Smith
- Denver Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA; Department of Physiology and Biophysics, School of Medicine, University of Colorado, Aurora, CO, USA

35
Gifford RH, Noble JH, Camarata SM, Sunderhaus LW, Dwyer RT, Dawant BM, Dietrich MS, Labadie RF. The Relationship Between Spectral Modulation Detection and Speech Recognition: Adult Versus Pediatric Cochlear Implant Recipients. Trends Hear 2019;22:2331216518771176. [PMID: 29716437] [PMCID: PMC5949922] [DOI: 10.1177/2331216518771176]
Abstract
Adult cochlear implant (CI) recipients demonstrate a reliable relationship between spectral modulation detection and speech understanding. Prior studies documenting this relationship have focused on postlingually deafened adult CI recipients—leaving an open question regarding the relationship between spectral resolution and speech understanding for adults and children with prelingual onset of deafness. Here, we report CI performance on the measures of speech recognition and spectral modulation detection for 578 CI recipients including 477 postlingual adults, 65 prelingual adults, and 36 prelingual pediatric CI users. The results demonstrated a significant correlation between spectral modulation detection and various measures of speech understanding for 542 adult CI recipients. For 36 pediatric CI recipients, however, there was no significant correlation between spectral modulation detection and speech understanding in quiet or in noise nor was spectral modulation detection significantly correlated with listener age or age at implantation. These findings suggest that pediatric CI recipients might not depend upon spectral resolution for speech understanding in the same manner as adult CI recipients. It is possible that pediatric CI users are making use of different cues, such as those contained within the temporal envelope, to achieve high levels of speech understanding. Further investigation is warranted to investigate the relationship between spectral and temporal resolution and speech recognition to describe the underlying mechanisms driving peripheral auditory processing in pediatric CI users.
Affiliation(s)
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
- Jack H Noble
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Stephen M Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Linsey W Sunderhaus
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert T Dwyer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Benoit M Dawant
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA
- Mary S Dietrich
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Robert F Labadie
- Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Electrical Engineering and Computer Science, Vanderbilt University, Nashville, TN, USA

36
Miller CW, Bernstein JGW, Zhang X, Wu YH, Bentler RA, Tremblay K. The Effects of Static and Moving Spectral Ripple Sensitivity on Unaided and Aided Speech Perception in Noise. J Speech Lang Hear Res 2018;61:3113-3126. [PMID: 30515519] [PMCID: PMC6440313] [DOI: 10.1044/2018_jslhr-h-17-0373]
Abstract
PURPOSE This study evaluated whether certain spectral ripple conditions were more informative than others in predicting ecologically relevant unaided and aided speech outcomes. METHOD A quasi-experimental study design was used to evaluate 67 older adult hearing aid users with bilateral, symmetrical hearing loss. Speech perception in noise was tested under conditions of unaided and aided, auditory-only and auditory-visual, and 2 types of noise. Predictors included age, audiometric thresholds, audibility, hearing aid compression, and modulation depth detection thresholds for moving (4-Hz) or static (0-Hz) 2-cycle/octave spectral ripples applied to carriers of broadband noise or 2000-Hz low- or high-pass filtered noise. RESULTS A principal component analysis of the modulation detection data found that broadband and low-pass static and moving ripple detection thresholds loaded onto the first factor whereas high-pass static and moving ripple detection thresholds loaded onto a second factor. A linear mixed model revealed that audibility and the first factor (reflecting broadband and low-pass static and moving ripples) were significantly associated with speech perception performance. Similar results were found for unaided and aided speech scores. The interactions between speech conditions were not significant, suggesting that the relationship between ripples and speech perception was consistent regardless of visual cues or noise condition. High-pass ripple sensitivity was not correlated with speech understanding. CONCLUSIONS The results suggest that, for hearing aid users, poor speech understanding in noise and sensitivity to both static and slow-moving ripples may reflect deficits in the same underlying auditory processing mechanism. Significant factor loadings involving ripple stimuli with low-frequency content may suggest an impaired ability to use temporal fine structure information in the stimulus waveform. 
Support is provided for the use of spectral ripple testing to predict speech perception outcomes in clinical settings.
Affiliation(s)
- Christi W. Miller
- Department of Speech and Hearing Sciences, University of Washington, Seattle
- Joshua G. W. Bernstein
- National Military Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Xuyang Zhang
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Yu-Hsiang Wu
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Ruth A. Bentler
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Kelly Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle

37
Speech Perception with Spectrally Non-overlapping Maskers as Measure of Spectral Resolution in Cochlear Implant Users. J Assoc Res Otolaryngol 2018;20:151-167. [PMID: 30456730] [DOI: 10.1007/s10162-018-00702-2]
Abstract
Poor spectral resolution contributes to the difficulties experienced by cochlear implant (CI) users when listening to speech in noise. However, correlations between measures of spectral resolution and speech perception in noise have not always been found to be robust. It may be that the relationship between spectral resolution and speech perception in noise becomes clearer in conditions where the speech and noise are not spectrally matched, so that improved spectral resolution can assist in separating the speech from the masker. To test this prediction, speech intelligibility was measured with noise or tone maskers that were presented either in the same spectral channels as the speech or in interleaved spectral channels. Spectral resolution was estimated via a spectral ripple discrimination task. Results from vocoder simulations in normal-hearing listeners showed increasing differences in speech intelligibility between spectrally overlapped and interleaved maskers as well as improved spectral ripple discrimination with increasing spectral resolution. However, no clear differences were observed in CI users between performance with spectrally interleaved and overlapped maskers, or between tone and noise maskers. The results suggest that spectral resolution in current CIs is too poor to take advantage of the spectral separation produced by spectrally interleaved speech and maskers. Overall, the spectrally interleaved and tonal maskers produce a much larger difference in performance between normal-hearing listeners and CI users than do traditional speech-in-noise measures, and thus provide a more sensitive test of speech perception abilities for current and future implantable devices.
38
The effect of presentation level on spectrotemporal modulation detection. Hear Res 2018;371:11-18. [PMID: 30439570] [DOI: 10.1016/j.heares.2018.10.017]
Abstract
The understanding of speech in noise relies (at least partially) on spectrotemporal modulation sensitivity. This sensitivity can be measured by spectral ripple tests, which can be administered at different presentation levels. However, it is not known how presentation level affects spectrotemporal modulation thresholds. In this work, we present behavioral data for normal-hearing adults which show that at higher ripple densities (2 and 4 ripples/oct), increasing presentation level led to worse discrimination thresholds. Results of a computational model suggested that the higher thresholds could be explained by a worsening of the spectrotemporal representation in the auditory nerve due to broadening of cochlear filters and neural activity saturation. Our results demonstrate the importance of taking presentation level into account when administering spectrotemporal modulation detection tests.
39
Archer-Boyd AW, Southwell RV, Deeks JM, Turner RE, Carlyon RP. Development and validation of a spectro-temporal processing test for cochlear-implant listeners. J Acoust Soc Am 2018;144:2983. [PMID: 30522311] [PMCID: PMC6805218] [DOI: 10.1121/1.5079636]
Abstract
Psychophysical tests of spectro-temporal resolution may aid the evaluation of methods for improving hearing by cochlear implant (CI) listeners. Here the STRIPES (Spectro-Temporal Ripple for Investigating Processor EffectivenesS) test is described and validated. Like speech, the test requires both spectral and temporal processing to perform well. Listeners discriminate between complexes of sine sweeps which increase or decrease in frequency; difficulty is controlled by changing the stimulus spectro-temporal density. Care was taken to minimize extraneous cues, forcing listeners to perform the task only on the direction of the sweeps. Vocoder simulations with normal hearing listeners showed that the STRIPES test was sensitive to the number of channels and temporal information fidelity. An evaluation with CI listeners compared a standard processing strategy with one having very wide filters, thereby spectrally blurring the stimulus. Psychometric functions were monotonic for both strategies and five of six participants performed better with the standard strategy. An adaptive procedure revealed significant differences, all in favour of the standard strategy, at the individual listener level for six of eight CI listeners. Subsequent measures validated a faster version of the test, and showed that STRIPES could be performed by recently implanted listeners having no experience of psychophysical testing.
Affiliation(s)
- Alan W. Archer-Boyd
- MRC Cognition & Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Rosy V. Southwell
- MRC Cognition & Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- John M. Deeks
- MRC Cognition & Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Richard E. Turner
- MRC Cognition & Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Robert P. Carlyon
- MRC Cognition & Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom

40
McKay CM, Rickard N, Henshall K. Intensity Discrimination and Speech Recognition of Cochlear Implant Users. J Assoc Res Otolaryngol 2018;19:589-600. [PMID: 29777327] [DOI: 10.1007/s10162-018-0675-7]
Abstract
The relation between speech recognition and within-channel or across-channel (i.e., spectral tilt) intensity discrimination was measured in nine CI users (11 ears). Within-channel intensity difference limens (IDLs) were measured at four electrode locations across the electrode array. Spectral tilt difference limens were measured with (XIDL-J) and without (XIDL) level jitter. Only three subjects could perform the XIDL-J task with the amount of jitter required to limit use of within-channel cues. XIDLs (normalized to %DR) were correlated with speech recognition (r = 0.67, P = 0.019) and were highly correlated with IDLs. XIDLs were on average nearly 3 times larger than IDLs and did not vary consistently with the spatial separation of the two component electrodes. The overall pattern of results was consistent with a common underlying subject-dependent limitation in the two difference limen tasks, hypothesized to be perceptual variance (how the perception of a sound differs on different presentations), which may also underlie the correlation of XIDLs with speech recognition. Evidence that spectral tilt discrimination is more important for speech recognition than within-channel intensity discrimination was not unequivocally shown in this study. However, the results tended to support this proposition, with XIDLs more correlated with speech performance than IDLs, and the ratio XIDL/IDL also being correlated with speech recognition. If supported by further research, the importance of perceptual variance as a limiting factor in speech understanding for CI users has important implications for efforts to improve outcomes for those with poor speech recognition.
Affiliation(s)
- Colette M McKay
- Bionics Institute, 384-388 Albert St, East Melbourne, 3002, Australia; Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
- Natalie Rickard
- Bionics Institute, 384-388 Albert St, East Melbourne, 3002, Australia

41

42
Jürgens T, Hohmann V, Büchner A, Nogueira W. The effects of electrical field spatial spread and some cognitive factors on speech-in-noise performance of individual cochlear implant users-A computer model study. PLoS One 2018;13:e0193842. [PMID: 29652892] [PMCID: PMC5898708] [DOI: 10.1371/journal.pone.0193842]
Abstract
The relation of individual speech-in-noise performance differences in cochlear implant (CI) users to underlying physiological factors is currently poorly understood. This study approached the question through step-wise individualization of a computer model of speech intelligibility that mimics the details of CI signal processing and some of the physiology present in CI users. Two factors were incorporated: the electrical field spatial spread and internal noise (a coarse model of individual cognitive performance). Internal representations of speech-in-noise mixtures calculated by the model were classified using an automatic speech recognizer backend employing Hidden Markov Models with a Gaussian probability distribution. One-dimensional electric field spatial spread functions were inferred from electrical field imaging data of 14 CI users. The model assumed homogeneously distributed auditory nerve fibers along the cochlear array and equal distance between the electrode array and nerve tissue. Internal noise, whose standard deviation was adjusted based on anamnesis data, text-reception-threshold data, or a combination thereof, was applied to the internal representations before classification. A systematic model evaluation showed that predicted speech-reception-thresholds (SRTs) in stationary noise improved (decreased) with decreasing internal noise standard deviation and with narrower electric field spatial spreads. The model version individualized to actual listeners using internal noise alone (with average spatial spread) showed significant correlations with measured SRTs, reflecting the high correlation of the text-reception-threshold data with SRTs. However, neither individualization to spatial spread functions alone nor combined individualization based on spatial spread functions and internal noise standard deviation produced significant correlations with measured SRTs.
Affiliation(s)
- Tim Jürgens
- Medizinische Physik, Cluster of Excellence "Hearing4all" and Forschungszentrum Neurosensorik, Carl-von-Ossietzky Universität Oldenburg, Germany
- Volker Hohmann
- Medizinische Physik, Cluster of Excellence "Hearing4all" and Forschungszentrum Neurosensorik, Carl-von-Ossietzky Universität Oldenburg, Germany
- Andreas Büchner
- Medical University Hannover, Cluster of Excellence "Hearing4all", Hannover, Germany
- Waldo Nogueira
- Medical University Hannover, Cluster of Excellence "Hearing4all", Hannover, Germany

43
Buss E, Grose J. Auditory sensitivity to spectral modulation phase reversal as a function of modulation depth. PLoS One 2018;13:e0195686. [PMID: 29621338] [PMCID: PMC5886689] [DOI: 10.1371/journal.pone.0195686]
Abstract
The present study evaluated auditory sensitivity to spectral modulation by determining the modulation depth required to detect modulation phase reversal. This approach may be preferable to spectral modulation detection with a spectrally flat standard, since listeners appear unable to perform the task based on the detection of temporal modulation. While phase reversal thresholds are often evaluated by holding modulation depth constant and adjusting modulation rate, holding rate constant and adjusting modulation depth supports rate-specific assessment of modulation processing. Stimuli were pink noise samples, filtered into seven octave-wide bands (0.125–8 kHz) and spectrally modulated in dB. Experiment 1 measured performance as a function of modulation depth to determine appropriate units for adaptive threshold estimation. Experiment 2 compared thresholds in dB for modulation detection with a flat standard and modulation phase reversal; results supported the idea that temporal cues were available at high rates for the former but not the latter. Experiment 3 evaluated spectral modulation phase reversal thresholds for modulation that was restricted to either one or two neighboring bands. Flanking bands of unmodulated noise had a larger detrimental effect on one-band than two-band targets. Thresholds for high-rate modulation improved with increasing carrier frequency up to 2 kHz, whereas low-rate modulation appeared more consistent across frequency, particularly in the two-band condition. Experiment 4 measured spectral weights for spectral modulation phase reversal detection and found higher weights for bands in the spectral center of the stimulus than for the lowest (0.125 kHz) or highest (8 kHz) band. Experiment 5 compared performance for highly practiced and relatively naïve listeners, and found weak evidence of a larger practice effect at high than low spectral modulation rates. 
These results provide preliminary data for a task that may provide a better estimate of sensitivity to spectral modulation than spectral modulation detection with a flat standard.
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America
- John Grose
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, United States of America

44
Nechaev DI, Milekhina ON, Supin AY. Hearing sensitivity to gliding rippled spectrum patterns. J Acoust Soc Am 2018;143:2387. [PMID: 29716251] [DOI: 10.1121/1.5033898]
Abstract
The sensitivity of human hearing to gliding rippled spectrum patterns of sound was investigated. The test signal was 2-oct wide rippled noise with the ripples gliding along the frequency scale. Both ripple density and gliding velocity were frequency-proportional across the signal band; i.e., the density was specified in ripples/oct and the velocity was specified in oct/s and ripple/s. The listener was required to discriminate between a test signal with gliding ripples and a non-rippled reference signal. Limits of gliding velocity were measured as a function of ripple density. The ripple gliding velocity limit decreased with an increasing ripple density: from 388.9 oct/s (388.9 ripple/s) at a ripple density of 1 ripple/oct to 11.3 oct/s (79.1 ripple/s) at a density of 7 ripple/oct. These tendencies could be approximated by log/log regression functions with slopes of 1.71 for the velocity expressed in oct/s and 0.71 for the velocity expressed in ripple/s. A qualitative model based on combined action of the excitation-pattern and the temporal-processing mechanism is suggested to explain the results.
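The relation between the two velocity units used in this abstract is a straight multiplication: a pattern gliding at v oct/s with a density of d ripples/oct passes d·v ripples per second at any fixed frequency point. A quick sanity check of the reported limits (illustrative code, not from the paper):

```python
def ripple_rate(density_ripples_per_oct: float, velocity_oct_per_s: float) -> float:
    """Ripples passing a fixed frequency point per second:
    rate [ripple/s] = density [ripples/oct] * velocity [oct/s]."""
    return density_ripples_per_oct * velocity_oct_per_s

# Reported limits: 1 ripple/oct at 388.9 oct/s, and 7 ripple/oct at 11.3 oct/s
print(ripple_rate(1, 388.9))           # 388.9 ripple/s
print(round(ripple_rate(7, 11.3), 1))  # 79.1 ripple/s
```

This reproduces the paired values quoted in the abstract (388.9 ripple/s and 79.1 ripple/s).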
Affiliation(s)
- Dmitry I Nechaev
- Institute of Ecology and Evolution of the Russian Academy of Sciences, 33 Leninsky prospect, 119071 Moscow, Russia
- Olga N Milekhina
- Institute of Ecology and Evolution of the Russian Academy of Sciences, 33 Leninsky prospect, 119071 Moscow, Russia
- Alexander Ya Supin
- Institute of Ecology and Evolution of the Russian Academy of Sciences, 33 Leninsky prospect, 119071 Moscow, Russia

45
Abstract
OBJECTIVES Spectral resolution is a correlate of open-set speech understanding in postlingually deaf adults and prelingually deaf children who use cochlear implants (CIs). To apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing children. In this study, spectral ripple discrimination (SRD) was used to measure listeners' sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak to peak location (frequency resolution) and peak to trough intensity (across-channel intensity resolution) are required for SRD. DESIGN SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90° shift in phase of the sinusoidally-modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough "depth" (10, 13, and 20 dB) on SRD in normal-hearing listeners (experiment 1). In experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB in repeated measures design). RESULTS In experiment 1, there was a significant interaction between age and ripple depth. The infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults suggesting that frequency resolution was better in infants than adults. 
However, in experiment 2, infant performance was significantly poorer than that of adults at 20 dB depth, suggesting that variability in infants' use of within-channel intensity cues, rather than better frequency resolution, explained the results of experiment 1. In experiment 3, age effects were seen, with both groups of infants showing poorer SRD than adults, but, unlike experiment 1, no significant interaction between age and depth was seen. CONCLUSIONS Measurement of SRD thresholds in individual 3- to 7-month-old infants is feasible. Performance of normal-hearing infants on SRD may be limited by across-channel intensity resolution despite mature frequency resolution. These findings have significant implications for design and stimulus choice when applying SRD to test infants with CIs. The high degree of variability in infant SRD can be somewhat reduced by obscuring within-channel cues.
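The SRD stimulus described above is broadband noise whose amplitude spectrum is sinusoidally modulated on a log-frequency axis, with the target differing from the standard only in a phase shift of that spectral ripple. A minimal sketch of such a stimulus pair follows; the function name, passband, sampling rate, and duration are illustrative choices, not the authors' actual stimulus parameters:

```python
import numpy as np

def rippled_noise(ripples_per_oct, depth_db, phase=0.0,
                  f_lo=125.0, f_hi=8000.0, fs=32000, dur=0.5, seed=0):
    """Noise with a sinusoidal dB-scale ripple on a log-frequency axis.
    depth_db is the peak-to-trough depth of the spectral envelope."""
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spec = np.zeros(freqs.size, dtype=complex)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)  # position in octaves above f_lo
    env_db = (depth_db / 2.0) * np.sin(2 * np.pi * ripples_per_oct * octaves + phase)
    amp = 10.0 ** (env_db / 20.0)          # dB envelope -> linear amplitude
    spec[band] = amp * np.exp(1j * rng.uniform(0, 2 * np.pi, band.sum()))
    return np.fft.irfft(spec, n)

# Standard vs. target differing only in ripple phase (90-degree shift)
standard = rippled_noise(2.0, 13.0, phase=0.0)
target = rippled_noise(2.0, 13.0, phase=np.pi / 2)
```

In the randomized-starting-phase variant of experiment 2, the base phase of both intervals would additionally be drawn at random on each trial, so that within-channel intensity cues become uninformative and only the phase shift between intervals remains.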
Affiliation(s)
- David L Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington, USA; Division of Otolaryngology, Seattle Children's Hospital, Seattle, Washington, USA; and Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington

46
Relationship between spectrotemporal modulation detection and music perception in normal-hearing, hearing-impaired, and cochlear implant listeners. Sci Rep 2018;8:800. [PMID: 29335454] [PMCID: PMC5768867] [DOI: 10.1038/s41598-017-17350-w]
Abstract
The objective of this study was to examine the relationship between spectrotemporal modulation (STM) sensitivity and the ability to perceive music. Ten normal-hearing (NH) listeners, ten hearing aid (HA) users with moderate hearing loss, and ten cochlear implant (CI) users participated in this study. Three psychoacoustic tests were administered: spectral modulation detection (SMD), temporal modulation detection (TMD), and STM detection. Performance on these psychoacoustic tests was compared with music perception abilities. In addition, the psychoacoustic mechanisms involved in the improvement of music perception through HAs were evaluated: music perception abilities in unaided and aided conditions were measured for HA users, and the HA benefit for music perception was then correlated with aided psychoacoustic performance. The STM detection results showed that a combination of spectral and temporal modulation cues was more strongly correlated with music perception abilities than spectral or temporal modulation cues measured separately. No correlation was found between music perception performance and SMD threshold or TMD threshold in any group. HA benefits for melody and timbre identification were also significantly correlated with a combination of spectral and temporal envelope cues through the HA.
47
Zheng Y, Escabí M, Litovsky RY. Spectro-temporal cues enhance modulation sensitivity in cochlear implant users. Hear Res 2017;351:45-54. [PMID: 28601530] [DOI: 10.1016/j.heares.2017.05.009]
Abstract
Although speech understanding is highly variable among cochlear implant (CI) users, the remarkably high speech recognition performance of many CI users is unexpected and not well understood. Numerous factors, including neural health and degradation of the spectral information in the speech signal delivered by CIs, likely contribute to speech understanding. We studied the ability to use spectro-temporal modulations, which may be critical for speech understanding and discrimination, and hypothesized that CI users adopt a different perceptual strategy than normal-hearing (NH) individuals, relying more heavily on joint spectro-temporal cues to enhance detection of auditory cues. Modulation detection sensitivity was studied in CI users and NH subjects using broadband "ripple" stimuli that were modulated spectrally, temporally, or jointly, i.e., spectro-temporally. The spectro-temporal modulation transfer functions of CI users and NH subjects were decomposed into spectral and temporal dimensions and compared to those subjects' spectral-only and temporal-only modulation transfer functions. In CI users, joint spectro-temporal sensitivity was better than that predicted from spectral-only and temporal-only sensitivity, indicating heightened spectro-temporal sensitivity. No such enhancement through combined integration of spectral and temporal cues was observed in NH subjects. This distinctive use of spectro-temporal cues by CI users may aid their use of cues that are important for speech understanding. The finding has implications for developing sound processing strategies that rely on joint spectro-temporal modulations to improve speech comprehension in CI users, and may be valuable for developing clinical assessment tools to optimize CI processor performance.
Affiliation(s)
- Yi Zheng
- Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI, 53705, USA
- Monty Escabí
- Biomedical Engineering, Electrical and Computer Engineering, University of Connecticut, 371 Fairfield Rd., U1157, Storrs, CT, 06269, USA
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, WI, 53705, USA

48
van de Velde DJ, Schiller NO, van Heuven VJ, Levelt CC, van Ginkel J, Beers M, Briaire JJ, Frijns JHM. The perception of emotion and focus prosody with varying acoustic cues in cochlear implant simulations with varying filter slopes. J Acoust Soc Am 2017; 141:3349. [PMID: 28599540 PMCID: PMC5436976 DOI: 10.1121/1.4982198] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/27/2016] [Revised: 03/15/2017] [Accepted: 04/11/2017] [Indexed: 06/07/2023]
Abstract
This study aimed to find the optimal filter slope for cochlear implant simulations (vocoding) by testing the effect of a wide range of slopes on the discrimination of emotional and linguistic (focus) prosody, with varying availability of F0 and duration cues. Forty normally hearing participants judged whether (non-)vocoded sentences were pronounced with happy or sad emotion, or with adjectival or nominal focus. Sentences were recorded as natural stimuli, manipulated to contain only emotion- or focus-relevant segmental duration information, F0 information, or both, and then noise-vocoded with 5, 20, 80, 120, and 160 dB/octave filter slopes. Performance increased with steeper slopes, but only up to 120 dB/octave, with larger effects for emotion than for focus perception. For emotion, results with both cues most closely resembled results with F0 alone, while for focus, results with both cues most closely resembled those with duration alone, showing that emotion perception relies primarily on F0 and focus perception primarily on duration. This suggests that filter slopes affect focus perception less than emotion perception because, for emotion, F0 is both more informative and more affected by vocoding. The continued performance increase up to extreme filter slopes suggests that considerable improvement in prosody perception is still to be gained for CI users.
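The filter-slope manipulation described above can be illustrated with a minimal noise vocoder: each channel's envelope is extracted through a bandpass filter and used to modulate bandpass-filtered noise, with the Butterworth order standing in for the slope parameter. This is a generic sketch under assumed parameters, not the vocoder implementation used in the study (zero-phase `sosfiltfilt` roughly doubles the realized slope, so the mapping is approximate).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_ch=8, f_lo=150.0, f_hi=7000.0,
                 slope_db_oct=24.0, env_cut=30.0, seed=0):
    """Minimal noise-vocoder sketch. A Butterworth bandpass of order N
    rolls off at roughly 6*N dB/octave per side, so the requested slope
    is mapped to order = slope / 6 (at least 1); zero-phase filtering
    then doubles the effective attenuation, so this is approximate.
    """
    order = max(1, int(round(slope_db_oct / 6.0)))
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_ch + 1)
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(x))
    out = np.zeros_like(x, dtype=float)
    # Lowpass for envelope smoothing (cutoff normalized to Nyquist).
    env_sos = butter(2, env_cut / (fs / 2), btype="low", output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(order, [lo / (fs / 2), hi / (fs / 2)],
                     btype="band", output="sos")
        band = sosfiltfilt(sos, x)                       # analysis band
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))  # envelope
        out += np.clip(env, 0, None) * sosfiltfilt(sos, noise)
    return out / (np.max(np.abs(out)) + 1e-12)           # unit peak
```

Sweeping `slope_db_oct` over values such as 6, 24, and 96 reproduces the study's independent variable in spirit: shallower slopes let channel envelopes smear across bands, degrading spectral (and hence F0-related) detail.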
Affiliation(s)
- Daan J van de Velde
- Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden, the Netherlands
- Niels O Schiller
- Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden, the Netherlands
- Vincent J van Heuven
- Department of Applied Linguistics, Pannon Egyetem, 10 Egyetem Utca, 8200 Veszprém, Hungary
- Claartje C Levelt
- Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden, the Netherlands
- Joost van Ginkel
- Leiden University Centre for Child and Family Studies, Wassenaarseweg 52, 2333 AK, Leiden, the Netherlands
- Mieke Beers
- Leiden University Medical Center, Ears, Nose, and Throat Department, Postbus 9600, 2300 RC, Leiden, the Netherlands
- Jeroen J Briaire
- Leiden University Medical Center, Ears, Nose, and Throat Department, Postbus 9600, 2300 RC, Leiden, the Netherlands
- Johan H M Frijns
- Leiden University Medical Center, Ears, Nose, and Throat Department, Postbus 9600, 2300 RC, Leiden, the Netherlands

49
Kreft HA, Oxenham AJ. Auditory Enhancement in Cochlear-Implant Users Under Simultaneous and Forward Masking. J Assoc Res Otolaryngol 2017; 18:483-493. [PMID: 28303412 DOI: 10.1007/s10162-017-0618-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2016] [Accepted: 02/28/2017] [Indexed: 11/28/2022] Open
Abstract
Auditory enhancement is the phenomenon whereby the salience or detectability of a target sound within a masker is enhanced by prior presentation of the masker alone. Enhancement has been demonstrated under both simultaneous and forward masking in normal-hearing listeners and may play an important role in auditory and speech perception within complex, time-varying acoustic environments. The few studies of enhancement in hearing-impaired listeners have reported reduced or absent enhancement effects under forward masking, suggesting a potentially peripheral locus of the effect. Here, auditory enhancement was measured in eight cochlear-implant (CI) users with direct stimulation. Masked thresholds were measured under simultaneous and forward masking as a function of the number of masking electrodes and the electrode spacing between the maskers and the target. Evidence for auditory enhancement was obtained under simultaneous masking, qualitatively consistent with results from normal-hearing listeners. However, no significant enhancement was observed under forward masking, in contrast to earlier results from normal-hearing listeners. These results suggest that the normal effects of auditory enhancement are partially, but not fully, experienced by CI users. To the extent that CI users' results differ from normal, it may be possible to apply signal processing to restore the missing aspects of enhancement.
Affiliation(s)
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, MN, 55455, USA

50
Langner F, Saoji AA, Büchner A, Nogueira W. Adding simultaneous stimulating channels to reduce power consumption in cochlear implants. Hear Res 2017; 345:96-107. [PMID: 28104408 DOI: 10.1016/j.heares.2017.01.010] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/14/2016] [Revised: 01/10/2017] [Accepted: 01/12/2017] [Indexed: 11/30/2022]