1. Everhardt MK, Sarampalis A, Coler M, Başkent D, Lowie W. Lexical Stress Identification in Cochlear Implant-Simulated Speech by Non-Native Listeners. Language and Speech 2024:238309231222207. PMID: 38282517; DOI: 10.1177/00238309231222207.
Abstract
This study investigates whether a presumed difference in the perceptibility of cues to lexical stress in spectro-temporally degraded simulated cochlear implant (CI) speech affects how listeners weight these cues during a lexical stress identification task, specifically in their non-native language. Previous research suggests that in English, listeners predominantly rely on a reduction in vowel quality as a cue to lexical stress. In Dutch, changes in the fundamental frequency (F0) contour seem to have a greater functional weight than the vowel quality contrast. Generally, non-native listeners use the cue-weighting strategies from their native language in the non-native language. Moreover, a few studies have suggested that these cues to lexical stress are differently perceptible in spectro-temporally degraded electric hearing, as CI users appear to make more effective use of changes in vowel quality than of changes in the F0 contour as cues to linguistic phenomena. In this study, native Dutch learners of English identified stressed syllables in CI-simulated and non-CI-simulated Dutch and English words that contained changes in the F0 contour and vowel quality as cues to lexical stress. The results indicate that neither the cue-weighting strategies in the native language nor those in the non-native language are influenced by the perceptibility of cues in the spectro-temporally degraded speech signal. These results contrast with our expectations based on previous research and support the idea that cue weighting is a flexible and transferable process.
Affiliation(s)
- Marita K Everhardt
- Center for Language and Cognition Groningen, University of Groningen, The Netherlands; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands
- Anastasios Sarampalis
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands; Department of Psychology, University of Groningen, The Netherlands
- Matt Coler
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands; Campus Fryslân, University of Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands; W. J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, The Netherlands
- Wander Lowie
- Center for Language and Cognition Groningen, University of Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands
2. Everhardt MK, Sarampalis A, Coler M, Başkent D, Lowie W. Prosodic Focus Interpretation in Spectrotemporally Degraded Speech by Non-Native Listeners. Journal of Speech, Language, and Hearing Research 2023; 66:3649-3664. PMID: 37616276; DOI: 10.1044/2023_jslhr-22-00568.
Abstract
PURPOSE This study assesses how spectrotemporal degradations that can occur in the sound transmission of a cochlear implant (CI) may influence the ability of non-native listeners to recognize the intended meaning of utterances based on the position of the prosodically focused word. Previous research suggests that perceptual accuracy and listening effort are negatively affected by CI processing (or CI simulations) or when the speech is presented in a non-native language, in a number of tasks and circumstances. How these two factors interact to affect prosodic focus interpretation, however, remains unclear. METHOD In an online experiment, normal-hearing (NH) adolescent and adult native Dutch learners of English and a small control group of NH native English adolescents listened to CI-simulated (eight-channel noise-band vocoded) and non-CI-simulated English sentences differing in prosodically marked focus. For assessing perceptual accuracy, listeners had to indicate which of four possible context questions the speaker answered. For assessing listening effort, a dual-task paradigm was used with a secondary free recall task. RESULTS The results indicated that prosodic focus interpretation was significantly less accurate in the CI-simulated condition compared with the non-CI-simulated condition but that listening effort was not increased. Moreover, there was no interaction between the influence of the degraded CI-simulated speech signal and listening groups in either their perceptual accuracy or listening effort. CONCLUSION Non-native listeners are not more strongly affected by spectrotemporal degradations than native listeners, and less proficient non-native listeners are not more strongly affected by these degradations than more proficient non-native listeners.
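The CI simulation described here, eight-channel noise-band vocoding, can be sketched in a few lines. The logarithmic channel spacing, third-order Butterworth filters, and Hilbert-envelope extraction below are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Noise-band vocoder sketch: band-pass filter the input, extract each
    band's temporal envelope, and use it to modulate band-limited noise.
    The temporal fine structure is discarded; only envelopes are kept."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced channel edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for f_lo, f_hi in zip(edges[:-1], edges[1:]):
        b, a = butter(3, [f_lo / (fs / 2), f_hi / (fs / 2)], btype="band")
        band = filtfilt(b, a, signal)
        envelope = np.abs(hilbert(band))                 # temporal envelope
        noise = rng.standard_normal(len(signal))
        carrier = filtfilt(b, a, noise)                  # band-limited noise carrier
        out += envelope * carrier                        # envelope-modulated band
    return out / (np.max(np.abs(out)) + 1e-12)           # normalize to avoid clipping
```

Summing the envelope-modulated noise bands yields speech that retains intelligibility from envelope cues while degrading spectral detail and pitch, which is the degradation the listening groups faced.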
Affiliation(s)
- Marita K Everhardt
- Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, the Netherlands
- Anastasios Sarampalis
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, the Netherlands
- Department of Psychology, University of Groningen, the Netherlands
- Matt Coler
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, the Netherlands
- Campus Fryslân, University of Groningen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, the Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, the Netherlands
- Wander Lowie
- Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, the Netherlands
3. Bissmeyer SRS, Goldsworthy RL. Combining Place and Rate of Stimulation Improves Frequency Discrimination in Cochlear Implant Users. Hear Res 2022; 424:108583. PMID: 35930901; PMCID: PMC10849775; DOI: 10.1016/j.heares.2022.108583.
Abstract
In the auditory system, frequency is represented as tonotopic and temporal response properties of the auditory nerve. While these response properties are inextricably linked in normal hearing, cochlear implants can separately excite tonotopic location and temporal synchrony using different electrodes and stimulation rates, respectively. This separation allows for the investigation of the contributions of tonotopic and temporal cues for frequency discrimination. The present study examines frequency discrimination in adult cochlear implant users as conveyed by electrode position and stimulation rate, separately and combined. The working hypothesis is that frequency discrimination is better when place and rate cues are combined than with either cue alone. This hypothesis was tested in two experiments. In the first experiment, frequency discrimination needed for melodic contour identification was measured for frequencies near 100, 200, and 400 Hz using frequency allocation modeled after clinical processors. In the second experiment, frequency discrimination for pitch ranking was measured for frequencies between 100 and 1600 Hz using an experimental frequency allocation designed to provide better access to place cues. The results of both experiments indicate that frequency discrimination is better with place and rate cues combined than with either cue alone. These results clarify how signal processing for cochlear implants could better encode frequency into place and rate of electrical stimulation. Further, the results provide insight into the contributions of place and rate cues for pitch.
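The place cues discussed above depend on where along the cochlea an electrode stimulates. A standard way to relate cochlear position to characteristic frequency is Greenwood's place-frequency function; the sketch below uses the commonly cited human constants, while the 16-electrode span is a hypothetical illustration, not the study's frequency allocation:

```python
import numpy as np

def greenwood_hz(x):
    """Greenwood place-frequency map for the human cochlea: x is the
    fractional distance from apex (0) to base (1); returns the
    characteristic frequency in Hz (constants A=165.4, a=2.1, k=0.88)."""
    return 165.4 * (10 ** (2.1 * x) - 0.88)

# Hypothetical 16-electrode array spanning the basal 60% of the cochlea
# (illustrative positions, not a clinical electrode map).
positions = np.linspace(0.4, 1.0, 16)   # apex-to-base fraction per electrode
place_freqs = greenwood_hz(positions)   # nominal place pitch per electrode
```

Under this map, stimulating a more basal electrode excites a higher-characteristic-frequency neural population (a place cue), independently of the pulse rate delivered there (a rate cue).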
Affiliation(s)
- Susan R S Bissmeyer
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States; Auditory Research Center, Health Research Association, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, 1640 Marengo Street Suite 326, Los Angeles, CA 90033, United States.
- Raymond L Goldsworthy
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States; Auditory Research Center, Health Research Association, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, 1640 Marengo Street Suite 326, Los Angeles, CA 90033, United States
4. Bissmeyer SRS, Ortiz JR, Gan H, Goldsworthy RL. Computer-based musical interval training program for cochlear implant users and listeners with no known hearing loss. Front Neurosci 2022; 16:903924. PMID: 35968373; PMCID: PMC9363605; DOI: 10.3389/fnins.2022.903924.
Abstract
A musical interval is the difference in pitch between two sounds. The way that musical intervals are used in melodies relative to the tonal center of a key can strongly affect the emotion conveyed by the melody. The present study examines musical interval identification in people with no known hearing loss and in cochlear implant users. Pitch resolution varies widely among cochlear implant users, with average resolution an order of magnitude worse than in normal hearing. The present study considers the effect of training on musical interval identification and tests for correlations between low-level psychophysics and higher-level musical abilities. The overarching hypothesis is that cochlear implant users are limited in their ability to identify musical intervals both by low-level access to frequency cues for pitch and by higher-level mapping of the novel encoding of pitch that implants provide. Participants completed 2 weeks of online interval identification training. The benchmark tests considered before and after interval identification training were pure tone detection thresholds, pure tone frequency discrimination, fundamental frequency discrimination, tonal and rhythm comparisons, and interval identification. The results indicate strong correlations between measures of pitch resolution and interval identification; however, only a small effect of training on interval identification was observed for the cochlear implant users. Discussion focuses on improving access to pitch cues for cochlear implant users and on improving auditory training for musical intervals.
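Since the study frames intervals as pitch differences, the standard equal-tempered arithmetic may help: an interval in semitones is 12 times the base-2 logarithm of the frequency ratio. A minimal helper:

```python
import math

def semitones(f1, f2):
    """Interval size in semitones between two frequencies
    (12-tone equal temperament convention)."""
    return 12 * math.log2(f2 / f1)
```

For example, semitones(440, 880) is exactly 12.0 (an octave), and a 3:2 frequency ratio (a just perfect fifth) comes out at about 7.02 semitones, which is why identifying intervals demands pitch resolution well under a semitone.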
Affiliation(s)
- Susan Rebekah Subrahmanyam Bissmeyer
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Jacqueline Rose Ortiz
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Helena Gan
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Raymond Lee Goldsworthy
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
5. Cabrera L, Lau BK. The development of auditory temporal processing during the first year of life. Hearing, Balance and Communication 2022; 20:155-165. PMID: 36111124; PMCID: PMC9473293; DOI: 10.1080/21695717.2022.2029092.
Abstract
OBJECTIVES The processing of auditory temporal information is important for the extraction of voice pitch, linguistic information, as well as the overall temporal structure of speech. However, many aspects of its early development remain poorly understood. This paper reviews the development of auditory temporal processing during the first year of life, when infants are acquiring their native language. METHODS First, potential mechanisms of neural immaturity are discussed in the context of neurophysiological studies. Next, what is known about infant auditory capabilities is considered with a focus on psychophysical studies involving non-speech stimuli to investigate the perception of temporal fine structure and envelope cues. This is followed by a review of studies involving speech stimuli, including those that present vocoded signals as a method of degrading the spectro-temporal information available to infant listeners. RESULTS/CONCLUSION This review suggests that temporal resolution may be well developed in the first postnatal months, but that the ability to use and process temporal information efficiently along the entire auditory pathway takes longer to develop. These findings have crucial implications for the development of language abilities, especially for infants with hearing impairment who are using cochlear implants.
Affiliation(s)
- Laurianne Cabrera
- Université de Paris, INCC UMR 8002, CNRS, 45 rue des saints-pères, F-75006 Paris, France
- Bonnie K Lau
- Department of Otolaryngology - Head & Neck Surgery, University of Washington, 1701 NE Columbia Rd, Box 257923, Seattle, WA 98195
6. Agarwal A, Tan X, Xu Y, Richter CP. Channel Interaction During Infrared Light Stimulation in the Cochlea. Lasers Surg Med 2021; 53:986-997. PMID: 33476051; DOI: 10.1002/lsm.23360.
Abstract
BACKGROUND AND OBJECTIVES The number of perceptually independent channels to encode acoustic information is limited in contemporary cochlear implants (CIs) because of the current spread in the tissue. It has been suggested that neighboring electrodes have to be separated in humans by a distance of more than 2 mm to eliminate significant overlap of the electric current fields and subsequent interaction between the channels. It has also been argued that an increase in the number of independent channels could improve CI user performance in challenging listening environments, such as speech in noise, tonal languages, or music perception. Optical stimulation has been suggested as an alternative modality for neural stimulation because it is spatially selective. This study reports the results of experiments designed to quantify the interaction between neighboring optical sources in the cochlea during stimulation with infrared radiation. STUDY DESIGN/MATERIALS AND METHODS In seven adult albino guinea pigs, a forward masking method was used to quantify the interaction between two neighboring optical sources during stimulation. Two optical fibers were placed through cochleostomies into the scala tympani of the basal cochlear turn. The radiation beams were directed towards different neuron populations along the spiral ganglion. Optically evoked compound action potentials were recorded for different radiant energies and distances between the optical fibers. The outcome measure was the radiant energy of a masker pulse delivered 3 milliseconds before a probe pulse to reduce the response evoked by the probe pulse by 3 dB. Results were compared for different distances between the fibers placed along the cochlea. RESULTS The energy required to reduce the probe's response by 3 dB increased by 20.4 dB/mm and by 26.0 dB/octave. The inhibition was symmetrical for the masker placed basal to the probe (base-to-apex) and the masker placed apical to the probe (apex-to-base). 
CONCLUSION The interaction between neighboring optical sources during infrared laser stimulation is less than the interaction between neighboring electrical contacts during electrical stimulation. Previously published data for electrical stimulation reported an average current spread in human and cat cochleae of 2.8 dB/mm. With the increased number of independent channels for optical stimulation, it is anticipated that speech and music performance will improve.
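As a back-of-the-envelope comparison of the two spread figures above (20.4 dB/mm for optical masking versus the 2.8 dB/mm reported for electrical stimulation), the inter-source distance needed for a given amount of channel isolation under a simple linear dB-per-mm model works out as follows:

```python
def separation_mm(isolation_db, spread_db_per_mm):
    """Distance between two sources needed for a given dB of channel
    isolation, assuming the linear dB-per-mm spread model implied by
    the slopes reported in the abstract."""
    return isolation_db / spread_db_per_mm

# For 20 dB of isolation between neighboring channels:
optical = separation_mm(20.0, 20.4)    # ≈ 0.98 mm with infrared stimulation
electric = separation_mm(20.0, 2.8)    # ≈ 7.14 mm with electrical stimulation
```

The roughly sevenfold difference in required spacing is the arithmetic behind the anticipated increase in the number of independent channels.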
Affiliation(s)
- Aditi Agarwal
- Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois, 60611
- Xiaodong Tan
- Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois, 60611
- Yingyue Xu
- Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois, 60611
- Claus-Peter Richter
- Department of Otolaryngology, Feinberg School of Medicine, Northwestern University, 320 E. Superior Street, Searle 12-561, Chicago, Illinois, 60611; Department of Biomedical Engineering, Northwestern University, 2145 Sheridan Road, Tech E310, Evanston, Illinois, 60208; Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois, 60208; Department of Communication Sciences and Disorders, The Hugh Knowles Center, Northwestern University, Evanston, Illinois, 60208
7. Meng Q, Hegner YL, Giblin I, McMahon C, Johnson BW. Lateralized Cerebral Processing of Abstract Linguistic Structure in Clear and Degraded Speech. Cereb Cortex 2021; 31:591-602. PMID: 32901245; DOI: 10.1093/cercor/bhaa245.
Abstract
Human cortical activity measured with magnetoencephalography (MEG) has been shown to track the temporal regularity of linguistic information in connected speech. In the current study, we investigate the underlying neural sources of these responses and test the hypothesis that they can be directly modulated by changes in speech intelligibility. MEG responses were measured to natural and spectrally degraded (noise-vocoded) speech in 19 normal hearing participants. Results showed that cortical coherence to "abstract" linguistic units with no accompanying acoustic cues (phrases and sentences) were lateralized to the left hemisphere and changed parametrically with intelligibility of speech. In contrast, responses coherent to words/syllables accompanied by acoustic onsets were bilateral and insensitive to intelligibility changes. This dissociation suggests that cerebral responses to linguistic information are directly affected by intelligibility but also powerfully shaped by physical cues in speech. This explains why previous studies have reported widely inconsistent effects of speech intelligibility on cortical entrainment and, within a single experiment, provided clear support for conclusions about language lateralization derived from a large number of separately conducted neuroimaging studies. Since noise-vocoded speech resembles the signals provided by a cochlear implant device, the current methodology has potential clinical utility for assessment of cochlear implant performance.
Affiliation(s)
- Qingqing Meng
- The HEARing CRC, Audiology, Hearing and Speech Sciences, University of Melbourne, Melbourne, Victoria 3053, Australia; Department of Cognitive Science, Macquarie University, Sydney, New South Wales 2109, Australia
- Yiwen Li Hegner
- The HEARing CRC, Audiology, Hearing and Speech Sciences, University of Melbourne, Melbourne, Victoria 3053, Australia; Department of Linguistics, Macquarie University, Sydney, New South Wales 2109, Australia; MEG-Center, University of Tübingen, Tübingen 72074, Germany
- Iain Giblin
- Department of Linguistics, Macquarie University, Sydney, New South Wales 2109, Australia
- Catherine McMahon
- The HEARing CRC, Audiology, Hearing and Speech Sciences, University of Melbourne, Melbourne, Victoria 3053, Australia; Department of Linguistics, Macquarie University, Sydney, New South Wales 2109, Australia; H:EAR Centre, Macquarie University, New South Wales 2109, Australia
- Blake W Johnson
- The HEARing CRC, Audiology, Hearing and Speech Sciences, University of Melbourne, Melbourne, Victoria 3053, Australia; Department of Cognitive Science, Macquarie University, Sydney, New South Wales 2109, Australia
8. Bissmeyer SRS, Hossain S, Goldsworthy RL. Perceptual learning of pitch provided by cochlear implant stimulation rate. PLoS One 2020; 15:e0242842. PMID: 33270735; PMCID: PMC7714175; DOI: 10.1371/journal.pone.0242842.
Abstract
Cochlear implant users hear pitch evoked by stimulation rate, but discrimination diminishes for rates above 300 Hz. This upper limit on rate pitch is surprising given the remarkable and specialized ability of the auditory nerve to respond synchronously to stimulation rates at least as high as 3 kHz and arguably as high as 10 kHz. Sensitivity to stimulation rate as a pitch cue varies widely across cochlear implant users and can be improved with training. The present study examines individual differences and perceptual learning of stimulation rate as a cue for pitch ranking. Adult cochlear implant users participated in electrode psychophysics that involved testing once per week for three weeks. Stimulation pulse rate discrimination was measured in bipolar and monopolar configurations for apical and basal electrodes. Base stimulation rates between 100 and 800 Hz were examined. Individual differences were quantified using psychophysically derived metrics of spatial tuning and temporal integration. This study examined the distribution of these measures across subjects, their power to predict rate discrimination, and the effect of training on rate discrimination thresholds. The psychophysical metrics of spatial tuning and temporal integration were not predictive of stimulation rate discrimination, but discrimination thresholds improved at lower frequencies with training. Since most clinical devices do not use variable stimulation rates, it is unknown to what extent recipients may learn to use stimulation rate cues if provided in a clear and consistent manner.
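Discrimination thresholds of the kind measured here are typically obtained with adaptive tracking. The abstract does not specify the procedure, so the 2-down/1-up staircase below (which converges near the 70.7%-correct point) is purely an illustrative sketch, with a simulated listener function standing in for a participant:

```python
import numpy as np

def two_down_one_up(base_hz, start_delta, listener, n_reversals=8):
    """Illustrative 2-down/1-up adaptive track for rate discrimination.
    `listener(base_hz, delta)` returns True for a correct trial; the
    rate difference `delta` halves after two correct trials and doubles
    after one incorrect trial. Threshold = mean of the reversal deltas."""
    delta, step, correct_run = start_delta, 2.0, 0
    reversals, going_down = [], None
    while len(reversals) < n_reversals:
        if listener(base_hz, delta):
            correct_run += 1
            if correct_run == 2:            # two correct: make the task harder
                correct_run = 0
                if going_down is False:
                    reversals.append(delta) # direction flipped: record reversal
                going_down = True
                delta /= step
        else:
            correct_run = 0                 # one wrong: make the task easier
            if going_down is True:
                reversals.append(delta)
            going_down = False
            delta *= step
    return float(np.mean(reversals))
```

With a deterministic listener that is correct whenever the rate difference exceeds 10 Hz, the track oscillates between 10 and 20 Hz, so the estimated threshold lands at 15 Hz, bracketing the true value.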
Affiliation(s)
- Susan R. S. Bissmeyer
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, United States of America
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, California, United States of America
- Shaikat Hossain
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, California, United States of America
- Raymond L. Goldsworthy
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, California, United States of America
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, California, United States of America
9. Casaponsa A, Sohoglu E, Moore DR, Füllgrabe C, Molloy K, Amitay S. Does training with amplitude modulated tones affect tone-vocoded speech perception? PLoS One 2019; 14:e0226288. PMID: 31881550; PMCID: PMC6934405; DOI: 10.1371/journal.pone.0226288.
Abstract
Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues without speech content can improve the perception of spectrally-degraded (vocoded) speech in which the temporal envelope (but not the temporal fine structure) is mainly preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials; frequencies: 4, 8, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not significantly differ from that observed for controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
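The amplitude-modulated training stimuli can be written down directly. This sketch generates a sinusoidally AM tone at each of the three modulation rates used in training; the 1 kHz carrier and 50% modulation depth are illustrative assumptions, not the study's values:

```python
import numpy as np

def am_tone(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated tone: carrier fc (Hz), modulation
    rate fm (Hz), modulation depth in [0, 1], duration in seconds,
    sample rate fs (Hz)."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + depth * np.sin(2 * np.pi * fm * t)   # temporal envelope
    return envelope * np.sin(2 * np.pi * fc * t)          # modulated carrier

# One 1-s stimulus per modulation rate used in the AM tasks (4, 8, 16 Hz).
stimuli = {fm: am_tone(1000.0, fm, 0.5, 1.0, 16000) for fm in (4, 8, 16)}
```

Because the carrier conveys no speech content, any training benefit would have to come from improved sensitivity to the envelope itself, which is the transfer the study tested.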
Affiliation(s)
- Aina Casaponsa
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Department of Linguistics and English Language, Lancaster University, Lancaster, England, United Kingdom
- Ediz Sohoglu
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- David R. Moore
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Christian Füllgrabe
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Katharine Molloy
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Sygal Amitay
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
10. Kral A, Dorman MF, Wilson BS. Neuronal Development of Hearing and Language: Cochlear Implants and Critical Periods. Annu Rev Neurosci 2019; 42:47-65. DOI: 10.1146/annurev-neuro-080317-061513.
Abstract
The modern cochlear implant (CI) is the most successful neural prosthesis developed to date. CIs provide hearing to the profoundly hearing impaired and allow the acquisition of spoken language in children born deaf. Results from studies enabled by the CI have provided new insights into (a) minimal representations at the periphery for speech reception, (b) brain mechanisms for decoding speech presented in quiet and in acoustically adverse conditions, (c) the developmental neuroscience of language and hearing, and (d) the mechanisms and time courses of intramodal and cross-modal plasticity. Additionally, the results have underscored the interconnectedness of brain functions and the importance of top-down processes in perception and learning. The findings are described in this review with emphasis on the developing brain and the acquisition of hearing and spoken language.
Affiliation(s)
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, ENT Clinics, Hannover Medical University, 30625 Hannover, Germany
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas 75080, USA
- School of Medicine and Health Sciences, Macquarie University, Sydney, New South Wales 2109, Australia
- Michael F. Dorman
- Department of Speech and Hearing Science, Arizona State University, Tempe, Arizona 85287, USA
- Blake S. Wilson
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, Dallas, Texas 75080, USA
- School of Medicine and Pratt School of Engineering, Duke University, Durham, North Carolina 27708, USA
11. Blankenship KG, Ohde RN, Won JH, Hedrick M. Speech perception in children with cochlear implants for continua varying in formant transition duration. International Journal of Speech-Language Pathology 2018; 20:238-246. PMID: 28000516; DOI: 10.1080/17549507.2016.1265589.
Abstract
PURPOSE To examine the developmental course of labial and alveolar manner of articulation contrasts, and to determine how that course may differ between typically developing (TD) children and children with cochlear implants (CIs). METHOD Eight young adults, eight TD 5-8 year-old children, and seven 5-8 year-old children with CIs participated. Labial /ba/-/wa/ and alveolar /da/-/ja/ continua stimuli were presented, with each continuum consisting of nine synthetic stimuli varying in F2 and F3 transition duration. Participants were asked to label the stimuli as either a stop or glide, and responses were analysed for phonetic boundaries and slopes. RESULTS For the /ba/-/wa/ contrast, children with CIs required longer transition durations compared to TD children or adults to cross from one phoneme category to another. The children with CIs demonstrated less confidence in labelling the stimuli (i.e. less steep slopes) than the TD children or the adults. For the /da/-/ja/ contrast, the children with CIs showed less steep slope values than adults. CONCLUSION These results suggest that there are differences in the way TD children and children with CIs develop and maintain phonetic categories, perhaps differences in phonetic representation or in linking acoustic and phonetic representations.
Affiliation(s)
- Ralph N Ohde
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Jong Ho Won
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
- Mark Hedrick
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, USA
12
Zheng Y, Koehnke J, Besing J. Combined Effects of Noise and Reverberation on Sound Localization for Listeners With Normal Hearing and Bilateral Cochlear Implants. Am J Audiol 2017; 26:519-530. [PMID: 29071340 DOI: 10.1044/2017_aja-16-0101] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2016] [Accepted: 07/12/2017] [Indexed: 11/09/2022] Open
Abstract
PURPOSE This study examined the individual and combined effects of noise and reverberation on the ability of listeners with normal hearing (NH) and with bilateral cochlear implants (BCIs) to localize speech. METHOD Six adults with BCIs and 10 with NH participated. All subjects completed a virtual localization test in quiet and at 0-, -4-, and -8-dB signal-to-noise ratios (SNRs) in simulated anechoic and reverberant (0.2-, 0.6-, and 0.9-s RT60) environments. BCI users were also tested at +8- and +4-dB SNR. A 3-word phrase was presented at 70 dB SPL from 9 simulated locations in the frontal horizontal plane (±90°), with the noise source at 0°. RESULTS BCI users had significantly poorer localization than listeners with NH in all conditions. BCI users' performance started to decrease at a higher SNR (+4 dB) and shorter RT60 (0.2 s) than that of listeners with NH (-4 dB and 0.6 s). The combination of noise and reverberation began to degrade localization for BCI users at a higher SNR and a shorter RT60 than for listeners with NH. CONCLUSION The clear effect of noise and reverberation on the performance of BCI users provides information that should be useful for refining cochlear implant processing strategies and developing cochlear implant rehabilitation plans to optimize binaural benefit for BCI users in everyday listening situations.
Affiliation(s)
- Yunfang Zheng
- Department of Communication Sciences and Disorders, Central Michigan University, Mount Pleasant, MI
- Janet Koehnke
- Department of Communication Sciences and Disorders, Montclair State University, Bloomfield, NJ
- Joan Besing
- Department of Communication Sciences and Disorders, Montclair State University, Bloomfield, NJ
13
Li F, Bunta F, Tomblin JB. Alveolar and Postalveolar Voiceless Fricative and Affricate Productions of Spanish-English Bilingual Children With Cochlear Implants. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:2427-2441. [PMID: 28800372 PMCID: PMC5831615 DOI: 10.1044/2017_jslhr-s-16-0125] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Revised: 12/19/2016] [Accepted: 04/04/2017] [Indexed: 05/18/2023]
Abstract
PURPOSE This study investigates the production of voiceless alveolar and postalveolar fricatives and affricates by bilingual and monolingual children with hearing loss who use cochlear implants (CIs) and their peers with normal hearing (NH). METHOD Fifty-four children participated in our study, including 12 Spanish-English bilingual CI users (M = 6;0 [years;months]), 12 monolingual English-speaking children with CIs (M = 6;1), 20 bilingual children with NH (M = 6;5), and 10 monolingual English-speaking children with NH (M = 5;10). Picture elicitation targeting /s/, /tʃ/, and /ʃ/ was administered. Repeated-measures analyses of variance comparing group means for frication duration, rise time, and centroid frequency were conducted for the effects of CI use and bilingualism. RESULTS All groups distinguished the target sounds in the 3 acoustic parameters examined. Regarding frication duration and rise time, the Spanish productions of bilingual children with CIs differed from their bilingual peers with NH. English frication duration patterns for bilingual versus monolingual CI users also differed. Centroid frequency was a stronger place cue for children with NH than for children with CIs. CONCLUSION Patterns of fricative and affricate production display effects of bilingualism and diminished signal, yielding unique patterns for bilingual and monolingual CI users.
Affiliation(s)
- Fangfang Li
- Department of Psychology, University of Lethbridge, Alberta, Canada
- Ferenc Bunta
- Department of Communication Sciences and Disorders, University of Houston, TX
- J. Bruce Tomblin
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
14
Han MK, Storkel HL, Lee J, Cox C. The Effects of Phonotactic Probability and Neighborhood Density on Adults' Word Learning in Noisy Conditions. AMERICAN JOURNAL OF SPEECH-LANGUAGE PATHOLOGY 2016; 25:547-560. [PMID: 27788276 PMCID: PMC5373694 DOI: 10.1044/2016_ajslp-14-0165] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2014] [Revised: 04/09/2015] [Accepted: 03/24/2016] [Indexed: 06/06/2023]
Abstract
PURPOSE Noisy conditions make auditory processing difficult. This study explores whether noisy conditions influence the effects of phonotactic probability (the likelihood of occurrence of a sound sequence) and neighborhood density (phonological similarity among words) on adults' word learning. METHOD Fifty-eight adults learned nonwords varying in phonotactic probability and neighborhood density in either an unfavorable (0-dB signal-to-noise ratio [SNR]) or a favorable (+8-dB SNR) listening condition. Word learning was assessed using a picture naming task by scoring the proportion of phonemes named correctly. RESULTS The unfavorable 0-dB SNR condition showed a significant interaction between phonotactic probability and neighborhood density in the absence of main effects. In particular, adults learned more words when phonotactic probability and neighborhood density were both low or both high. The +8-dB SNR condition did not show this interaction. These results are inconsistent with those from a prior adult word learning study conducted under quiet listening conditions that showed main effects of word characteristics. CONCLUSIONS As the listening condition worsens, adult word learning benefits from a convergence of phonotactic probability and neighborhood density. Clinical implications are discussed for potential populations who experience difficulty with auditory perception or processing, making them more vulnerable to noise.
15
Patro C, Mendel LL. Role of contextual cues on the perception of spectrally reduced interrupted speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:1336. [PMID: 27586760 DOI: 10.1121/1.4961450] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Understanding speech within an auditory scene is constantly challenged by interfering noise in suboptimal listening environments when noise hinders the continuity of the speech stream. In such instances, a typical auditory-cognitive system perceptually integrates available speech information and "fills in" missing information in the light of semantic context. However, individuals with cochlear implants (CIs) find it difficult and effortful to understand interrupted speech compared to their normal hearing counterparts. This inefficiency in perceptual integration of speech could be attributed to further degradations in the spectral-temporal domain imposed by CIs, making it difficult to utilize the contextual evidence effectively. To address these issues, 20 normal hearing adults listened to speech that was either spectrally reduced, or both spectrally reduced and interrupted, in a manner similar to CI processing. The Revised Speech Perception in Noise test, which includes contextually rich and contextually poor sentences, was used to evaluate the influence of semantic context on speech perception. Results indicated that listeners benefited more from semantic context when they listened to spectrally reduced speech alone. For the spectrally reduced interrupted speech, contextual information was not as helpful under significant spectral reductions, but became beneficial as the spectral resolution improved. These results suggest that top-down processing facilitates speech perception up to a point, but fails to facilitate speech understanding when the speech signals are significantly degraded.
Affiliation(s)
- Chhayakanta Patro
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee, 38152, USA
- Lisa Lucks Mendel
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee, 38152, USA
16
Han MK, Storkel HL, Lee J, Yoshinaga-Itano C. The influence of word characteristics on the vocabulary of children with cochlear implants. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2015; 20:242-51. [PMID: 25802320 PMCID: PMC6390414 DOI: 10.1093/deafed/env006] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/03/2014] [Accepted: 02/15/2015] [Indexed: 05/14/2023]
Abstract
The goal of this study was to explore the effects of phonotactic probability, word length, word frequency, and neighborhood density on the words known by children with cochlear implants (CIs) varying in vocabulary outcomes in a retrospective analysis of a subset of data from a longitudinal study of hearing loss. Generalized linear mixed modeling was used to examine the effects of these word characteristics at 3 time points: preimplant, postimplant, and longitudinal follow-up. Results showed a robust effect of neighborhood density across group and time, whereas the effect of frequency varied by time. Significant effects of phonotactic probability or word length were not detected. Taken together, these findings suggest that children with CIs may be able to use spoken language structure in a manner similar to their normal hearing counterparts, despite the differences in the quality of the input. The differences in the effects of phonotactic probability and word length imply a difficulty in initiating word learning and limited working memory ability in children with CIs.
17
Looi V, She J. Music perception of cochlear implant users: A questionnaire, and its implications for a music training program. Int J Audiol 2010; 49:116-28. [DOI: 10.3109/14992020903405987] [Citation(s) in RCA: 69] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
18
Garadat SN, Litovsky RY, Yu G, Zeng FG. Effects of simulated spectral holes on speech intelligibility and spatial release from masking under binaural and monaural listening. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 127:977-89. [PMID: 20136220 PMCID: PMC2830263 DOI: 10.1121/1.3273897] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/08/2008] [Revised: 09/20/2009] [Accepted: 11/22/2009] [Indexed: 05/25/2023]
Abstract
The possibility that "dead regions" or "spectral holes" can account for some differences in performance between bilateral cochlear implant (CI) users and normal-hearing listeners was explored. Using a 20-band noise-excited vocoder to simulate CI processing, this study examined effects of spectral holes on speech reception thresholds (SRTs) and spatial release from masking (SRM) in difficult listening conditions. Prior to processing, stimuli were convolved through head-related transfer functions to provide listeners with free-field directional cues. Processed stimuli were presented over headphones under binaural or monaural (right ear) conditions. Using Greenwood's [(1990). J. Acoust. Soc. Am. 87, 2592-2605] frequency-position function and assuming a cochlear length of 35 mm, spectral holes of variable size (6 and 10 mm) and location (base, middle, and apex) were created. Results show that middle-frequency spectral holes were the most disruptive to SRTs, whereas high-frequency spectral holes were the most disruptive to SRM. Spectral holes generally reduced binaural advantages in difficult listening conditions. These results suggest the importance of measuring dead regions in CI users. It is possible that customized programming for bilateral CI processors based on knowledge about dead regions can enhance performance in adverse listening situations.
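Greenwood's function maps position along the basilar membrane to characteristic frequency, so a hole of fixed physical extent (6 or 10 mm) spans very different frequency ranges at the base than at the apex. A sketch of the computation, using the human-cochlea constants from Greenwood (1990) and the 35-mm length assumed in the abstract; the mid-cochlea hole placement is an illustrative example, not the study's exact positioning:

```python
def greenwood_freq(x_mm, length_mm=35.0):
    """Greenwood (1990) frequency-position function for the human cochlea.

    x_mm: distance from the apex in mm; returns characteristic frequency in Hz.
    Constants A=165.4, a=2.1, k=0.88 are Greenwood's published human values.
    """
    A, a, k = 165.4, 2.1, 0.88
    x = x_mm / length_mm            # proportional distance from apex (0..1)
    return A * (10 ** (a * x) - k)

# Frequency span of a hypothetical 6-mm spectral hole centred mid-cochlea:
lo = greenwood_freq(35.0 / 2 - 3)   # apical edge of the hole
hi = greenwood_freq(35.0 / 2 + 3)   # basal edge of the hole
```

The same 6-mm hole placed at the base would remove a far wider band in Hz, which is consistent with holes at different locations disrupting SRTs and SRM differently.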
Affiliation(s)
- Soha N Garadat
- Waisman Center, University of Wisconsin, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
19
Macherey O, van Wieringen A, Carlyon RP, Dhooge I, Wouters J. Forward-masking patterns produced by symmetric and asymmetric pulse shapes in electric hearing. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2010; 127:326-38. [PMID: 20058980 PMCID: PMC3000474 DOI: 10.1121/1.3257231] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
Two forward-masking experiments were conducted with six cochlear implant listeners to test whether asymmetric pulse shapes would improve the place-specificity of stimulation compared to symmetric ones. The maskers were either cathodic-first symmetric biphasic, pseudomonophasic (i.e., with a second anodic phase longer and lower in amplitude than the first phase), or "delayed pseudomonophasic" (identical to pseudomonophasic but with an inter-phase gap) stimuli. In experiment 1, forward-masking patterns for monopolar maskers were obtained by keeping each masker fixed on a middle electrode of the array and measuring the masked thresholds of a monopolar signal presented on several other electrodes. The results were very variable, and no difference between pulse shapes was found. In experiment 2, six maskers were used in a wide bipolar (bipolar+9) configuration: the same three pulse shapes as in experiment 1, either cathodic-first relative to the most apical or relative to the most basal electrode of the bipolar channel. The pseudomonophasic masker showed a stronger excitation proximal to the electrode of the bipolar pair for which the short, high-amplitude phase was anodic. However, no difference was obtained with the symmetric and, more surprisingly, with the delayed pseudomonophasic maskers. Implications for cochlear implant design are discussed.
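A pseudomonophasic pulse keeps total charge balanced while making the second (recovery) phase longer and lower in amplitude than the first; inserting an inter-phase gap gives the "delayed pseudomonophasic" variant described above. A minimal sketch of such a waveform; the amplitude, phase duration, and 4:1 duration ratio are illustrative assumptions, not the study's stimulus parameters:

```python
import numpy as np

def pseudomonophasic_pulse(amp, phase_us, ratio=4, gap_us=0.0, dt_us=1.0):
    """Charge-balanced pseudomonophasic pulse, cathodic-first.

    A short high-amplitude cathodic phase is followed by an anodic phase
    that is `ratio` times longer and `ratio` times lower in amplitude, so
    the net charge is zero. A nonzero gap_us gives the "delayed" variant.
    """
    n1 = int(phase_us / dt_us)              # samples in the first phase
    n_gap = int(gap_us / dt_us)             # inter-phase gap samples
    n2 = int(ratio * phase_us / dt_us)      # samples in the recovery phase
    return np.concatenate([
        -amp * np.ones(n1),                 # cathodic, short and high
        np.zeros(n_gap),
        (amp / ratio) * np.ones(n2),        # anodic, long and low
    ])

p = pseudomonophasic_pulse(amp=1.0, phase_us=50.0)               # pseudomonophasic
d = pseudomonophasic_pulse(amp=1.0, phase_us=50.0, gap_us=25.0)  # delayed variant
```

Equal-but-opposite charge in the two phases is the safety constraint that both pulse shapes satisfy; the asymmetry concentrates the effective (short, high) phase at one polarity, which is what was hypothesized to sharpen the place-specificity of excitation.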
Affiliation(s)
- Olivier Macherey
- ExpORL, Department of Neurosciences, KU Leuven, Herestraat 49, Bus 721, 3000 Leuven, Belgium.
20
Trehub SE, Vongpaisal T, Nakata T. Music in the lives of deaf children with cochlear implants. Ann N Y Acad Sci 2009; 1169:534-42. [PMID: 19673836 DOI: 10.1111/j.1749-6632.2009.04554.x] [Citation(s) in RCA: 33] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
Abstract
Present-day cochlear implants provide good temporal cues and coarse spectral cues. In general, these cues are adequate for perceiving speech in quiet backgrounds and for young children's acquisition of spoken language. They are inadequate, however, for conveying the rich pitch-patterning of music. As a result, many adults who become implant users after losing their hearing find music disappointing or unacceptable. By contrast, child implant users who were born deaf or became deaf as infants or toddlers typically find music interesting and enjoyable. They recognize popular songs that they hear regularly when the test materials match critical features of the original versions. For example, they can identify familiar songs from the original recordings with words and from versions that omit the words but preserve all other cues. They also recognize theme songs from their favorite television programs when presented in original or somewhat altered form. The motivation of children with implants for listening to music or melodious speech is evident well before they understand language. Within months after receiving their implant, they prefer singing to silence. They also prefer speech in the maternal style to typical adult speech and the sounds of their native language-to-be to those of a foreign language. An important task of future research is to ascertain the relative contributions of perceptual and motivational factors to the apparent differences between child and adult implant users.
21
Noble W, Tyler R, Dunn C, Bhullar N. Unilateral and bilateral cochlear implants and the implant-plus-hearing-aid profile: comparing self-assessed and measured abilities. Int J Audiol 2008; 47:505-14. [PMID: 18608531 DOI: 10.1080/14992020802070770] [Citation(s) in RCA: 68] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Patients fitted with one (CI) versus two (CI+CI) cochlear implants, and those fitted with one implant who retain a hearing aid in the non-implanted ear (CI+HA), were compared using the speech, spatial, and qualities of hearing scale (SSQ) (Gatehouse & Noble, 2004). The CI+CI profile yielded significantly higher ability ratings than the CI profile in the spatial hearing domain, and on most aspects of other qualities of hearing (segregation, naturalness, and listening effort). A subset of patients completed the SSQ prior to implantation, and the CI+CI profile showed consistently greater improvement than the CI profile across all domains. Patients in the CI+HA group self-rated no differently from the CI group, post-implant. Measured speech perception and localization performance showed some parallels with the self-rating outcomes. Overall, a unilateral CI provided significant benefit across most hearing functions reflected in the SSQ. Bilateral implantation offered further benefit across a substantial range of those functions.
22
Tyler RS, Noble W, Dunn C, Witt S. Some benefits and limitations of binaural cochlear implants and our ability to measure them. Int J Audiol 2007; 45 Suppl 1:S113-9. [PMID: 16938783 DOI: 10.1080/14992020600783095] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/24/2022]
Abstract
We review new recognition and localization skills in patients using one or two cochlear implant(s). We observed one unilateral patient who showed localization performance above chance. We also provide evidence for binaural processing in bilateral cochlear implant patients, even when tested with speech from the front without noise. We unsuccessfully attempted to find correlations between localization and squelch, between these variables and pre-implant threshold differences, or these variables and post-implant recognition differences. We strongly believe that new tests are needed to examine the potential benefit of two implants. We describe three tests that we use to show a binaural advantage: cued recognition, movement direction, and recognition with multiple jammers.
Affiliation(s)
- Richard S Tyler
- Dept of Otolaryngology-Head and Neck Surgery and Speech Pathology and Audiology, University of Iowa, Iowa City, Iowa, USA.