1
Meehan S, Adank ML, van der Schroeff MP, Vroegop JL. A systematic review of acoustic change complex (ACC) measurements and applicability in children for the assessment of the neural capacity for sound and speech discrimination. Hear Res 2024; 451:109090. PMID: 39047579; DOI: 10.1016/j.heares.2024.109090.
Abstract
OBJECTIVE The acoustic change complex (ACC) is a cortical auditory evoked potential (CAEP) and can be elicited by a change in an otherwise continuous sound. The ACC has been highlighted as a promising tool in the assessment of sound and speech discrimination capacity, particularly for difficult-to-test populations such as infants with hearing loss, due to the objective nature of ACC measurements. Indeed, there is a pressing need to develop further means to accurately and thoroughly establish the hearing status of children with hearing loss, to help guide hearing interventions in a timely manner. Despite the potential of the ACC method, ACC measurements remain relatively rare in standard clinical settings. The objective of this study was to perform an up-to-date systematic review of ACC measurements in children, to provide greater clarity and consensus on the possible methodologies, applications, and performance of this technique, and to facilitate its uptake in relevant clinical settings. DESIGN Original peer-reviewed articles reporting ACC measurements in children (< 18 years) were included. Data were extracted and summarised for: (1) participant characteristics; (2) ACC methods and auditory stimuli; (3) information related to the performance of the ACC technique; (4) ACC measurement outcomes, advantages, and challenges. The systematic review was conducted using PRISMA guidelines for reporting, and the methodological quality of included articles was assessed. RESULTS A total of 28 studies were identified (9 infant studies). Review results show that ACC responses can be measured in infants (from < 3 months), and there is evidence of age-dependency, including increased robustness of the ACC response with increasing childhood age. Clinical applications include the measurement of the neural capacity for speech and non-speech sound discrimination in children with hearing loss, auditory neuropathy spectrum disorder (ANSD), and central auditory processing disorder (CAPD).
Additionally, ACCs can be recorded in children with hearing aids, auditory brainstem implants, and cochlear implants, and ACC results may guide hearing intervention/rehabilitation strategies. The review identified that the time taken to perform ACC measurements was often lengthy; the development of more efficient ACC test procedures for children would be beneficial. Comparisons between objective ACC measurements and behavioural measures of sound discrimination showed significant correlations for some, but not all, included studies. CONCLUSIONS ACC measurements of the neural capacity to discriminate between speech and non-speech sounds are feasible in infants and children, and a wide range of possible clinical applications exist, although more time-efficient procedures would be advantageous for clinical uptake. A consideration of age and maturational effects is recommended, and further research is required to investigate the relationship between objective ACC measures and behavioural measures of sound and speech perception for effective clinical implementation.
Affiliation(s)
- Sarah Meehan
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands.
- Marloes L Adank
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
- Marc P van der Schroeff
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
- Jantien L Vroegop
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
2
Nora A, Rinkinen O, Renvall H, Service E, Arkkila E, Smolander S, Laasonen M, Salmelin R. Impaired Cortical Tracking of Speech in Children with Developmental Language Disorder. J Neurosci 2024; 44:e2048232024. PMID: 38589232; PMCID: PMC11140678; DOI: 10.1523/jneurosci.2048-23.2024.
Abstract
In developmental language disorder (DLD), learning to comprehend and express oneself with spoken language is impaired, but the reason for this remains unknown. Using millisecond-scale magnetoencephalography recordings combined with machine learning models, we investigated whether the possible neural basis of this disruption lies in poor cortical tracking of speech. The stimuli were common spoken Finnish words (e.g., dog, car, hammer) and sounds with corresponding meanings (e.g., dog bark, car engine, hammering). In both children with DLD (10 boys and 7 girls) and typically developing (TD) control children (14 boys and 3 girls), aged 10-15 years, the cortical activation to spoken words was best modeled as time-locked to the unfolding speech input at ∼100 ms latency between sound and cortical activation. The amplitude envelope (amplitude changes) and spectrogram (detailed time-varying spectral content) of the spoken words, but not of the other sounds, were decoded successfully from time-locked brain responses in bilateral temporal areas: based on the cortical responses, the models could identify at ∼75-85% accuracy which of two sounds had been presented to the participant. However, the cortical representation of the amplitude envelope information was poorer in children with DLD than in TD children at longer latencies (∼200-300 ms lag). We interpret this effect as reflecting poorer retention of acoustic-phonetic information in short-term memory. This impaired tracking could potentially affect the processing and learning of words as well as continuous speech. The present results offer an explanation for the problems in language comprehension and acquisition in DLD.
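The two-class decoding analysis described above, identifying which of two sounds was presented from time-locked cortical responses, can be illustrated with a toy sketch. This is not the authors' pipeline: the data shapes, effect size, and the nearest-class-mean classifier here are simplifying assumptions.

```python
import numpy as np

def loo_decoding_accuracy(X, y):
    """Leave-one-out two-class decoding with a nearest-class-mean rule.

    X: (n_trials, n_features) array of brain responses (e.g. sensor
    amplitudes in a time window); y: 0/1 labels for the two sounds.
    """
    X, y = np.asarray(X, float), np.asarray(y)
    n = len(y)
    correct = 0
    for i in range(n):
        keep = np.arange(n) != i            # hold out trial i
        m0 = X[keep & (y == 0)].mean(axis=0)
        m1 = X[keep & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - m1) < np.linalg.norm(X[i] - m0))
        correct += int(pred == y[i])
    return correct / n

# Synthetic demo: responses to sound B are shifted relative to sound A.
rng = np.random.default_rng(42)
n_per_class, n_feat = 40, 20
a = rng.normal(0.0, 1.0, (n_per_class, n_feat))
b = rng.normal(0.8, 1.0, (n_per_class, n_feat))
X = np.vstack([a, b])
y = np.repeat([0, 1], n_per_class)
acc = loo_decoding_accuracy(X, y)           # well above the 50% chance level
```

Accuracies in the ∼75-85% range reported above would correspond to a smaller class separation than the one simulated here.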
Affiliation(s)
- Anni Nora
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Oona Rinkinen
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- Hanna Renvall
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
- BioMag Laboratory, HUS Diagnostic Center, Helsinki University Hospital, Helsinki FI-00029, Finland
- Elisabet Service
- Department of Linguistics and Languages, Centre for Advanced Research in Experimental and Applied Linguistics (ARiEAL), McMaster University, Hamilton, Ontario L8S 4L8, Canada
- Department of Psychology and Logopedics, University of Helsinki, Helsinki FI-00014, Finland
- Eva Arkkila
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Sini Smolander
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Research Unit of Logopedics, University of Oulu, Oulu FI-90014, Finland
- Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Marja Laasonen
- Department of Otorhinolaryngology and Phoniatrics, Head and Neck Center, Helsinki University Hospital and University of Helsinki, Helsinki FI-00014, Finland
- Department of Logopedics, University of Eastern Finland, Joensuu FI-80101, Finland
- Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo FI-00076, Finland
- Aalto NeuroImaging (ANI), Aalto University, Espoo FI-00076, Finland
3
Maslin MRD, Wise KJ, Purdy SC. The mismatch response in normal hearing adults: a performance comparison with stimuli relevant for objective validation of hearing aid fittings. Int J Audiol 2023; 62:1084-1094. PMID: 36628549; DOI: 10.1080/14992027.2022.2142682.
Abstract
OBJECTIVE A long-standing observation is that the Mismatch Response (MMR) has the potential to offer a clinically feasible index of sound discrimination. However, findings that positively identify MMRs at the individual level have been mixed, even for those who are normally hearing and who can discriminate sounds behaviourally. This complicates interpretation when an MMR is not observed. The objective of this study was to determine the reliability of the MMR using an optimised paradigm and a range of stimuli relevant to audiological applications in relation to objective verification of hearing aid fittings. DESIGN MMRs were measured using an optimised 3-deviant paradigm in response to a range of sounds designed for aided and unaided sound field assessments, including complex tones (CTs) and speech-like signals. STUDY SAMPLE Seventeen normally hearing adults (18-56 years). RESULTS The most robust MMRs were recorded in response to CTs; responses were positively identified in 50 out of 51 instances (98%), assessed via objective Hotelling's T2 bias-free statistical analyses. CONCLUSIONS The results indicate that CTs in conjunction with optimised recording and analysis parameters offer the potential to elicit robust MMRs, supporting future utilisation of MMRs for clinical audiological applications.
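The objective response detection used here, Hotelling's T², tests whether the multivariate mean of single-trial response features differs from zero. A minimal sketch follows, assuming two features per trial (e.g. the real and imaginary parts of one Fourier component); the trial counts and effect sizes are illustrative, not the study's values.

```python
import numpy as np

def hotelling_t2(X):
    """One-sample Hotelling's T^2 statistic against a zero mean.

    X: (n_trials, p) feature matrix. Under H0, T^2 relates to an
    F(p, n - p) distribution via F = (n - p) / (p * (n - 1)) * T^2,
    which is how a p-value would be obtained in practice.
    """
    X = np.asarray(X, float)
    n, p = X.shape
    m = X.mean(axis=0)
    S = np.cov(X, rowvar=False)        # p x p sample covariance
    return float(n * m @ np.linalg.solve(S, m))

# Illustration: trials with a genuine evoked component vs. pure noise.
rng = np.random.default_rng(0)
noise = rng.normal(size=(30, 2))       # no response present
signal = noise + np.array([2.0, 1.0])  # consistent response added
t2_signal, t2_noise = hotelling_t2(signal), hotelling_t2(noise)
```

A response is declared present when the resulting p-value falls below the chosen criterion, which is what makes the detection "bias-free" relative to visual waveform inspection.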
Affiliation(s)
- Michael R D Maslin
- School of Psychology, Speech and Hearing, The University of Canterbury, New Zealand
- Eisdell Moore Centre for Hearing and Balance Research, New Zealand
- Kim J Wise
- Eisdell Moore Centre for Hearing and Balance Research, New Zealand
- School of Psychology, Speech Science, The University of Auckland, New Zealand
- Suzanne C Purdy
- Eisdell Moore Centre for Hearing and Balance Research, New Zealand
- School of Psychology, Speech Science, The University of Auckland, New Zealand
4
Sanju HK, Jain T, Kumar P. Acoustic Change Complex as a Neurophysiological Tool to Assess Auditory Discrimination Skill: A Review. Int Arch Otorhinolaryngol 2023; 27:e362-e369. PMID: 37125361; PMCID: PMC10147461; DOI: 10.1055/s-0042-1743202.
Abstract
Introduction The acoustic change complex (ACC) is a type of event-related potential evoked in response to subtle change(s) in an ongoing stimulus. Given the growing number of investigations of the ACC, there is a need to review the various methodologies, findings, clinical utilities, and conclusions of the different studies of the ACC.
Objective The present review article is focused on the literature related to the utility of the ACC as a tool to assess auditory discrimination skill in different populations.
Data Synthesis Various databases, such as Medline, PubMed, Google, and Google Scholar, were searched for ACC-related references. A total of 102 research papers were initially obtained using descriptors such as acoustic change complex, clinical utility of ACC, ACC in children, ACC in cochlear implant users, and ACC in hearing loss. The titles, authors, and years of publication were examined, and duplicates were eliminated. A total of 31 research papers on the ACC were found and incorporated in the present review; their findings are reported in the present article.
Conclusion The present review showed the utility of ACC as an objective tool to support various subjective tests in audiology.
Affiliation(s)
- Himanshu Kumar Sanju
- Sri Jagdamba Charitable Eye Hospital and Cochlear Implant Center, Sri Ganganagar, Rajasthan, India
- Tushar Jain
- Sri Jagdamba Charitable Eye Hospital and Cochlear Implant Center, Sri Ganganagar, Rajasthan, India
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka, India
5
Ching TYC, Zhang VW, Ibrahim R, Bardy F, Rance G, Van Dun B, Sharma M, Chisari D, Dillon H. Acoustic change complex for assessing speech discrimination in normal-hearing and hearing-impaired infants. Clin Neurophysiol 2023; 149:121-132. PMID: 36963143; DOI: 10.1016/j.clinph.2023.02.172.
Abstract
OBJECTIVE This study examined (1) the utility of a clinical system to record acoustic change complex (ACC, an event-related potential recorded by electroencephalography) for assessing speech discrimination in infants, and (2) the relationship between ACC and functional performance in real life. METHODS Participants included 115 infants (43 normal-hearing, 72 hearing-impaired), aged 3-12 months. ACCs were recorded using [szs], [uiu], and a spectral rippled noise high-pass filtered at 2 kHz as stimuli. Assessments were conducted at age 3-6 months and at 7-12 months. Functional performance was evaluated using a parent-report questionnaire, and correlations with ACC were examined. RESULTS The rates of onset and ACC responses of normal-hearing infants were not significantly different from those of aided infants with mild or moderate hearing loss but were significantly higher than those with severe loss. On average, response rates measured at 3-6 months were not significantly different from those at 7-12 months. Higher rates of ACC responses were significantly associated with better functional performance. CONCLUSIONS ACCs demonstrated auditory capacity for discrimination in infants by 3-6 months. This capacity was positively related to real-life functional performance. SIGNIFICANCE ACCs can be used to evaluate the effectiveness of amplification and monitor development in aided hearing-impaired infants.
Affiliation(s)
- Teresa Y C Ching
- National Acoustic Laboratories, Australia; Macquarie School of Education, Macquarie University, Australia; NextSense Institute, Australia; School of Health and Rehabilitation Sciences, University of Queensland, Australia.
- Vicky W Zhang
- National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia
- Ronny Ibrahim
- National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia
- Fabrice Bardy
- National Acoustic Laboratories, Australia; School of Psychology, University of Auckland, New Zealand
- Gary Rance
- Department of Audiology and Speech Pathology, The University of Melbourne, Australia
- Mridula Sharma
- Department of Linguistics, Macquarie University, Australia
- Donella Chisari
- Department of Audiology and Speech Pathology, The University of Melbourne, Australia
- Harvey Dillon
- National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia; Department of Hearing, University of Manchester, United Kingdom
6
Cocquyt EM, Van Laeken H, van Mierlo P, De Letter M. Test-retest reliability of electroencephalographic and magnetoencephalographic measures elicited during language tasks: A literature review. Eur J Neurosci 2023; 57:1353-1367. PMID: 36864752; DOI: 10.1111/ejn.15948.
Abstract
Electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings during language processing can provide relevant insights into neuroplasticity in clinical populations (including patients with aphasia). To use EEG and MEG longitudinally, the outcome measures should be consistent across time in healthy individuals. Therefore, the current study provides a review of the test-retest reliability of EEG and MEG measures elicited during language paradigms in healthy adults. PubMed, Web of Science and Embase were searched for relevant articles based on specific eligibility criteria. In total, 11 articles were included in this literature review. The test-retest reliability of the P1, N1 and P2 is systematically considered to be satisfactory, whereas findings are more variable for event-related potentials/fields occurring later in time. The within-subject consistency of EEG and MEG measures during language processing can be influenced by multiple variables, such as the stimulus presentation mode, the offline reference choice and the amount of cognitive resources required during the task. To conclude, most of the available results favour the longitudinal use of EEG and MEG measures elicited during language paradigms in healthy young individuals. With a view to using these techniques in patients with aphasia, future research should focus on whether the same findings apply to different age groups.
Affiliation(s)
- Heleen Van Laeken
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Pieter van Mierlo
- Department of Electronics and Information Systems, Medical Image and Signal Processing Group, Ghent University, Ghent, Belgium
- Miet De Letter
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
7
Slugocki C, Kuk F, Korhonen P. Left Lateralization of the Cortical Auditory-Evoked Potential Reflects Aided Processing and Speech-in-Noise Performance of Older Listeners With a Hearing Loss. Ear Hear 2023; 44:399-410. PMID: 36331191; DOI: 10.1097/aud.0000000000001293.
Abstract
OBJECTIVES We analyzed the lateralization of the cortical auditory-evoked potential recorded previously from aided hearing-impaired listeners as part of a study on noise-mitigating hearing aid technologies. Specifically, we asked whether the degree of leftward lateralization in the magnitudes and latencies of these components was reduced by noise and, conversely, enhanced/restored by hearing aid technology. We further explored if individual differences in lateralization could predict speech-in-noise abilities in listeners when tested in the aided mode. DESIGN The study followed a double-blind within-subjects design. Nineteen older adults (8 females; mean age = 73.6 years, range = 56 to 86 years) with moderate to severe hearing loss participated. The cortical auditory-evoked potential was measured over 400 presentations of a synthetic /da/ stimulus which was delivered binaurally in a simulated aided mode using shielded ear-insert transducers. Sequences of the /da/ syllable were presented from the front at 75 dB SPL-C with continuous speech-shaped noise presented from the back at signal-to-noise ratios of 0, 5, and 10 dB. Four hearing aid conditions were tested: (1) omnidirectional microphone (OM) with noise reduction (NR) disabled, (2) OM with NR enabled, (3) directional microphone (DM) with NR disabled, and (4) DM with NR enabled. Lateralization of the P1 component and N1P2 complex was quantified across electrodes spanning the mid-coronal plane. Subsequently, listener speech-in-noise performance was assessed using the Repeat-Recall Test at the same signal-to-noise ratios and hearing aid conditions used to measure cortical activity. RESULTS As expected, both the P1 component and the N1P2 complex were of greater magnitude in electrodes over the left compared to the right hemisphere. In addition, N1 and P2 peaks tended to occur earlier over the left hemisphere, although the effect was mediated by an interaction of signal-to-noise ratio and hearing aid technology. 
At a group level, degrees of lateralization for the P1 component and the N1P2 complex were enhanced in the DM relative to the OM mode. Moreover, linear mixed-effects models suggested that the degree of leftward lateralization in the N1P2 complex, but not the P1 component, accounted for a significant portion of variability in speech-in-noise performance that was not related to age, hearing loss, hearing aid processing, or signal-to-noise ratio. CONCLUSIONS A robust leftward lateralization of cortical potentials was observed in older listeners when tested in the aided mode. Moreover, the degree of lateralization was enhanced by hearing aid technologies that improve the signal-to-noise ratio for speech. Accounting for the effects of signal-to-noise ratio, hearing aid technology, semantic context, and audiometric thresholds, individual differences in left-lateralized speech-evoked cortical activity were found to predict listeners' speech-in-noise abilities. Quantifying cortical auditory-evoked potential component lateralization may then be useful for profiling listeners' likelihood of communication success following clinical amplification.
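The mixed-effects question posed here, whether lateralization predicts speech-in-noise scores beyond known covariates, can be approximated without random effects by residualizing both variables on the covariates and correlating the residuals. This is a simplified fixed-effects sketch on simulated data, not the authors' model; the variable names and effect sizes are assumptions.

```python
import numpy as np

def partial_correlation(y, x, covariates):
    """Correlation between y and x after removing covariate effects.

    Both y and x are regressed on the covariates (plus an intercept)
    via least squares; the residuals are then correlated.
    """
    y, x = np.asarray(y, float), np.asarray(x, float)
    Z = np.column_stack([np.ones(len(y)), covariates])
    def resid(v):
        beta, *_ = np.linalg.lstsq(Z, v, rcond=None)
        return v - Z @ beta
    ry, rx = resid(y), resid(x)
    return float(ry @ rx / np.sqrt((ry @ ry) * (rx @ rx)))

# Simulated listeners: performance depends on age, SNR, and lateralization.
rng = np.random.default_rng(1)
n = 200
age = rng.uniform(56, 86, n)
snr = rng.choice([0.0, 5.0, 10.0], n)
lateralization = rng.normal(0.0, 1.0, n)
score = -0.1 * age + 0.8 * snr + 2.0 * lateralization + rng.normal(0, 1, n)
r = partial_correlation(score, lateralization, np.column_stack([age, snr]))
```

A full linear mixed-effects model would additionally include a per-listener random intercept to absorb repeated-measures structure, which this sketch omits.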
Affiliation(s)
- Christopher Slugocki
- Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, Illinois, USA
8
The Acoustic Change Complex Compared to Hearing Performance in Unilaterally and Bilaterally Deaf Cochlear Implant Users. Ear Hear 2022; 43:1783-1799. PMID: 35696186; PMCID: PMC9592183; DOI: 10.1097/aud.0000000000001248.
Abstract
OBJECTIVES Clinical measures evaluating hearing performance in cochlear implant (CI) users depend on attention and linguistic skills, which limits the evaluation of auditory perception in some patients. The acoustic change complex (ACC), a cortical auditory evoked potential to a sound change, might yield useful objective measures to assess hearing performance and could provide insight in cortical auditory processing. The aim of this study is to examine the ACC in response to frequency changes as an objective measure for hearing performance in CI users. DESIGN Thirteen bilaterally deaf and six single-sided deaf subjects were included, all having used a unilateral CI for at least 1 year. Speech perception was tested with a consonant-vowel-consonant test (+10 dB signal-to-noise ratio) and a digits-in-noise test. Frequency discrimination thresholds were measured at two reference frequencies, using a 3-interval, 2-alternative forced-choice, adaptive staircase procedure. The two reference frequencies were selected using each participant's frequency allocation table and were centered in the frequency band of an electrode that included 500 or 2000 Hz, corresponding to the apical electrode or the middle electrode, respectively. The ACC was evoked with pure tones of the same two reference frequencies with varying frequency increases: within the frequency band of the middle or the apical electrode (+0.25 electrode step), and steps to the center frequency of the first (+1), second (+2), and third (+3) adjacent electrodes. RESULTS Reproducible ACCs were recorded in 17 out of 19 subjects. Most successful recordings were obtained with the largest frequency change (+3 electrode step). Larger frequency changes resulted in shorter N1 latencies and larger N1-P2 amplitudes. 
In both unilaterally and bilaterally deaf subjects, the N1 latency and N1-P2 amplitude of the CI ears correlated to speech perception as well as frequency discrimination, that is, short latencies and large amplitudes were indicative of better speech perception and better frequency discrimination. No significant differences in ACC latencies or amplitudes were found between the CI ears of the unilaterally and bilaterally deaf subjects, but the CI ears of the unilaterally deaf subjects showed substantially longer latencies and smaller amplitudes than their contralateral normal-hearing ears. CONCLUSIONS The ACC latency and amplitude evoked by tone frequency changes correlate well to frequency discrimination and speech perception capabilities of CI users. For patients unable to reliably perform behavioral tasks, the ACC could be of added value in assessing hearing performance.
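The frequency discrimination thresholds above were obtained with an adaptive forced-choice staircase. Below is a minimal sketch of a 2-down/1-up staircase, which converges near the 70.7% correct point; the exact rule, step size, starting value, and the deterministic simulated listener are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def staircase_threshold(detects, start=100.0, factor=2.0, n_reversals=8):
    """2-down/1-up adaptive staircase with multiplicative steps.

    detects(delta) -> True if the listener hears a frequency change of
    `delta` Hz. Two correct responses shrink delta; one error grows it.
    The threshold estimate is the mean of the final reversal points.
    """
    delta, streak, last_dir = start, 0, 0   # last_dir: +1 up, -1 down
    reversals = []
    while len(reversals) < n_reversals:
        if detects(delta):
            streak += 1
            if streak == 2:                 # 2 correct -> make it harder
                streak = 0
                if last_dir == +1:
                    reversals.append(delta)
                last_dir = -1
                delta /= factor
        else:                               # 1 error -> make it easier
            streak = 0
            if last_dir == -1:
                reversals.append(delta)
            last_dir = +1
            delta *= factor
    return float(np.mean(reversals[-6:]))

# Deterministic simulated listener with a true threshold of 10 Hz.
estimate = staircase_threshold(lambda d: d >= 10.0)
```

With a real listener the response function is probabilistic rather than a hard threshold, and the track oscillates around the 70.7% point of the psychometric function.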
9
Cortical Auditory Evoked Potentials Recorded Directly Through the Cochlear Implant in Cochlear Implant Recipients: a Feasibility Study. Ear Hear 2022; 43:1426-1436. DOI: 10.1097/aud.0000000000001212.
10
Nassar AAM, Bassiouny S, Abdel Rahman TT, Hanafy KM. Assessment of outcome measures after audiological computer-based auditory training in cochlear implant children. Int J Pediatr Otorhinolaryngol 2022; 160:111217. PMID: 35816970; DOI: 10.1016/j.ijporl.2022.111217.
Abstract
OBJECTIVE To validate the clinical use of the acoustic change complex (ACC) as an objective outcome measure of auditory training in Egyptian cochlear implant (CI) children, and to explore how far electrophysiological measures correlate with behavioral measures in assessing training outcome. A further aim was to explore the efficacy of computer-based auditory training programs (CBATP) in the rehabilitation of CI children. METHODS Sixty Arabic-speaking children participated in the present study. Forty children using a monaural CI device served as the study group (20 children in subgroup A and 20 children in subgroup B). Both subgroups received traditional speech therapy sessions; in addition, subgroup A received a computer-based auditory training program (CBATP) at home for three months. Their ages ranged from 8 to 17 years. Twenty age- and sex-matched normal-hearing children served as a control group for standardization of the stimuli used to elicit the ACC. The study group children underwent detailed history taking, a parent-reported questionnaire (MAIS, Arabic version), aided sound field evaluation, psychophysical evaluation using the auditory fusion test (AFT), speech perception testing according to language age, ACC in response to gaps in 1000 Hz tones, and language evaluation. This work-up was repeated after 3 and 6 months for both study subgroups. RESULTS Children of subgroup A showed improvement in auditory fusion test (AFT) thresholds at the 3- and 6-month post-training follow-up. The ACC was detected in 85% of subgroup A children, 85% of subgroup B children, and 100% of control group children. Lower ACC gap detection thresholds were obtained after 3 months in subgroup A, but only after 6 months in subgroup B.
There were statistically significant differences between the initial assessment and the 3- and 6-month follow-up in ACC P1 and N2 latencies and amplitudes in both study subgroups; moreover, in subgroup A, the ACC P1 amplitude at 6 months post-training was significantly larger than at the 3-month follow-up. There was a highly significant correlation between AFT thresholds and the ACC gap detection threshold. CONCLUSIONS The ACC can be used as a reliable tool for evaluating auditory training outcome in CI children. The ACC gap detection threshold can predict psychophysical temporal resolution after auditory training in difficult-to-test populations. CBATP is an easy and accessible method that may be effective in improving CI outcomes.
Affiliation(s)
- Samia Bassiouny
- ORL Dept, Faculty of Medicine, Ain Shams University, Abassia Street, Cairo, Egypt
- Karim Mohamed Hanafy
- ORL Dept, Faculty of Medicine, Ain Shams University, Abassia Street, Cairo, Egypt
11
Bae EB, Jang H, Shim HJ. Enhanced Dichotic Listening and Temporal Sequencing Ability in Early-Blind Individuals. Front Psychol 2022; 13:840541. PMID: 35619788; PMCID: PMC9127502; DOI: 10.3389/fpsyg.2022.840541.
Abstract
Several studies have reported better auditory performance in early-blind subjects than in sighted subjects. However, few studies have compared the auditory functions of both hemispheres or evaluated interhemispheric transfer and binaural integration in blind individuals. Therefore, we evaluated whether there are differences in dichotic listening, auditory temporal sequencing ability, or speech perception in noise (all of which have been used to diagnose central auditory processing disorder) between early-blind subjects and sighted subjects. The study included 23 early-blind subjects and 22 age-matched sighted subjects. In the dichotic listening test (three-digit pair), the early-blind subjects achieved higher scores than the sighted subjects in the left ear (p = 0.003, Bonferroni-corrected α = 0.05/6 = 0.008), but not in the right ear, indicating a right-ear advantage in sighted subjects (p < 0.001) but not in early-blind subjects. In the frequency patterning test (five tones), the early-blind subjects performed better (both ears in the humming response, but only the left ear in the labeling response) than the sighted subjects (p < 0.008, Bonferroni-corrected α = 0.05/6 = 0.008). Monosyllable perception in noise tended to be better in early-blind subjects than in sighted subjects at a signal-to-noise ratio of –8 (p = 0.054); the results at signal-to-noise ratios of –4, 0, +4, and +8 did not differ. Acoustic change complex responses to /ba/ in babble noise, recorded with electroencephalography, showed a greater N1 peak amplitude only at the FC5 electrode at signal-to-noise ratios of –8 and –4 dB in the early-blind subjects compared with the sighted subjects (p = 0.004 and p = 0.003, respectively; Bonferroni-corrected α = 0.05/5 = 0.01). The results of this study revealed that early-blind subjects exhibited some advantages in dichotic listening and temporal sequencing ability compared with sighted subjects.
These advantages may be attributable to the enhanced activity of the central auditory nervous system, especially the right hemisphere function, and the transfer of auditory information between the two hemispheres.
Affiliation(s)
- Eun Bit Bae
- Department of Otorhinolaryngology-Head and Neck Surgery, Nowon Eulji Medical Center, Eulji University, Seoul, South Korea
| | - Hyunsook Jang
- Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, Hallym University, Chuncheon, South Korea
| | - Hyun Joon Shim
- Department of Otorhinolaryngology-Head and Neck Surgery, Nowon Eulji Medical Center, Eulji University, Seoul, South Korea
| |
Collapse
|
12
|
Strahm S, Small SA, Chan S, Tian DY, Sharma M. The Maturation of the Acoustic Change Complex in Response to Iterated Ripple Noise in 'Normal'-Hearing Infants, Toddlers, and Adults. J Am Acad Audiol 2022; 33:301-310. [PMID: 35613945 DOI: 10.1055/a-1862-0198] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/05/2023]
Abstract
BACKGROUND Infants and toddlers are routinely evaluated for their hearing sensitivity but not for their auditory-processing skills. Iterated rippled noise (IRN) stimuli require the auditory system to use temporal periodicity and autocorrelate the iterations to perceive pitch. PURPOSE This study investigated the acoustic change complex (ACC) elicited by IRN in "normal"-hearing infants, toddlers, and adults to determine the maturation of cortical processing of IRN stimuli. DESIGN Cortical responses to filtered white noise (onset) concatenated with IRN stimuli (d = 10 milliseconds, gain = 0.7 dB: 4-32 iterations) were recorded in quiet, alert participants. STUDY SAMPLE Participants included 25 infants (2.5-15 months), 27 toddlers (22-59 months), and 8 adults (19-25 years) with "normal" hearing sensitivity. DATA COLLECTION AND ANALYSIS Cortical auditory-evoked responses were recorded for each participant, including an onset response to the noise and an ACC to the transition from noise to IRN. Group differences were assessed using repeated-measures analyses of variance. RESULTS Most infants had a replicable onset (P) response, while only about half had a measurable ACC (P_ACC) response to the high-saliency IRN condition. Most toddlers had responses present to the onset and showed a P-N_ACC response to all IRN conditions, including IRN16 and IRN32. Toddlers and adults showed similar P-N_ACC amplitudes; however, adults showed an increase in N1_ACC amplitude with increasing IRN iterations (i.e., increased salience). CONCLUSION While cortical responses to the percept of sound, as indexed by the onset response (P) to a stimulus, are present in most infants, ACC responses to IRN stimuli are not mature in infancy. Most toddlers as young as 22 months, however, exhibited ACC responses to the IRN stimuli even when the pitch saliency was low (e.g., IRN4).
The findings of the current study have implications for future research investigating maturational effects on the ACC and for the optimal choice of stimuli.
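The delay-and-add construction of IRN described in the abstract above can be sketched in a few lines. The delay (d = 10 ms), gain (0.7), and iteration count mirror the stimulus description; the sampling rate, signal duration, and use of plain (unfiltered) white noise are assumptions for illustration:

```python
import numpy as np

# Iterated rippled noise (IRN): repeatedly delay a noise by d, scale it,
# and add it back. The resulting periodicity at lag d yields a pitch near
# 1/d (100 Hz for d = 10 ms), recoverable by autocorrelation.
fs = 8000                       # assumed sampling rate (Hz)
d = int(0.010 * fs)             # 10-ms delay in samples
gain = 0.7
rng = np.random.default_rng(0)
x = rng.standard_normal(fs)     # 1 s of white noise

for _ in range(16):             # e.g., the IRN16 condition
    delayed = np.concatenate([np.zeros(d), x[:-d]])
    x = x + gain * delayed

# The sample autocorrelation peaks at the delay lag.
lags = list(range(d // 2, 2 * d))
ac = [float(np.dot(x[:-k], x[k:])) for k in lags]
peak_lag = lags[int(np.argmax(ac))]
print(peak_lag, d)
```

Increasing the iteration count (IRN4 → IRN32) deepens the spectral ripple and sharpens this autocorrelation peak, which is the "pitch saliency" manipulation the study exploits.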
Affiliation(s)
- S Strahm
- School of Audiology and Speech Sciences, The University of British Columbia, Vancouver, Canada
- S A Small
- School of Audiology and Speech Sciences, The University of British Columbia, Vancouver, Canada
- S Chan
- School of Audiology and Speech Sciences, The University of British Columbia, Vancouver, Canada
- D Y Tian
- Department of Medicine, The University of Alberta, Edmonton, Canada
- M Sharma
- Department of Linguistics and The HEARing Cooperative Research Centre, Macquarie University, Sydney, Australia

13
Vonck BM, van Heteren JA, Lammers MJ, de Jel DV, Schaake WA, van Zanten GA, Stokroos RJ, Versnel H. Cortical potentials evoked by tone frequency changes can predict speech perception in noise. Hear Res 2022; 420:108508. [DOI: 10.1016/j.heares.2022.108508] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/21/2021] [Revised: 04/01/2022] [Accepted: 04/10/2022] [Indexed: 11/04/2022]
14
Song H, Jeon S, Shin Y, Han W, Kim S, Kwak C, Lee E, Kim J. Effects of Natural Versus Synthetic Consonant and Vowel Stimuli on Cortical Auditory-Evoked Potential. J Audiol Otol 2021; 26:68-75. [PMID: 34963276 PMCID: PMC8996083 DOI: 10.7874/jao.2021.00479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2021] [Accepted: 10/09/2021] [Indexed: 11/22/2022] Open
Abstract
Background and Objectives Natural and synthetic speech signals effectively stimulate the cortical auditory evoked potential (CAEP). This study aimed to select speech materials for CAEP measurement and to identify CAEP waveforms according to the gender of the speaker (GS) and the gender of the listener (GL). Subjects and Methods Two experiments, a comparison of natural and synthetic stimuli and a CAEP measurement, were performed with 21 young announcers and 40 young adults. The plosives /g/ and /b/ and the aspirated plosives /k/ and /p/ were combined with /a/. Six bisyllables, /ga/-/ka/, /ga/-/ba/, /ga/-/pa/, /ka/-/ba/, /ka/-/pa/, and /ba/-/pa/, were formed in tentative forward and backward orders. In the natural and synthetic stimulation modes (SM) according to GS, /ka/ and /pa/ were selected through the first experiment and used for CAEP measurement. Results The correction rate differences were largest (74%) for /ka/-/pa/ and /pa/-/ka/; thus, these were selected as stimulation materials for CAEP measurement. The SM showed shorter latency for P2 and N1-P2 with natural stimulation and for N2 with synthetic stimulation. The P2 amplitude was larger with natural stimulation. The SD showed significantly larger amplitude for P2 and N1-P2 with /pa/. The GS showed shorter latency for P2, N2, and N1-P2 and larger amplitude for N2 with female speakers. The GL showed shorter latency for N2 and N1-P2 and larger amplitude for N2 with female listeners. Conclusions Although several variables showed significance for N2, P2, and N1-P2, P1 and N1 did not show significance for any variable. N2 and P2 of the CAEP appear to be affected by endogenous factors.
15
McGuire K, Firestone GM, Zhang N, Zhang F. The Acoustic Change Complex in Response to Frequency Changes and Its Correlation to Cochlear Implant Speech Outcomes. Front Hum Neurosci 2021; 15:757254. [PMID: 34744668 PMCID: PMC8566680 DOI: 10.3389/fnhum.2021.757254] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 10/01/2021] [Indexed: 12/12/2022] Open
Abstract
One of the biggest challenges facing cochlear implant (CI) users is the highly variable hearing outcome of implantation across patients. Since speech perception requires the detection of various dynamic changes in the acoustic features of speech sounds (e.g., frequency, intensity, timing), it is critical to examine the ability of CI users to detect within-stimulus acoustic changes. The primary objective of this study was to examine the auditory event-related potential (ERP) evoked by within-stimulus frequency changes (F-changes), one type of acoustic change complex (ACC), in adult CI users, and its correlation with speech outcomes. Twenty-one adult CI users (29 individual CI ears) were tested with psychoacoustic frequency change detection tasks; speech tests including Consonant-Nucleus-Consonant (CNC) word recognition, Arizona Biomedical Sentence Recognition in quiet and noise (AzBio-Q and AzBio-N), and the Digit-in-Noise (DIN) test; and electroencephalographic (EEG) recordings. The stimuli for the psychoacoustic tests and EEG recordings were pure tones at three different base frequencies (0.25, 1, and 4 kHz) that contained an F-change at the midpoint of the tone. Results showed that the frequency change detection threshold (FCDT), ACC N1' latency, and P2' latency did not differ across frequencies (p > 0.05). ACC N1'-P2' amplitude was significantly larger for 0.25 kHz than for the other base frequencies (p < 0.05). The mean N1' latency across the three base frequencies was negatively correlated with CNC word recognition (r = -0.40, p < 0.05) and CNC phoneme recognition (r = -0.40, p < 0.05), and positively correlated with mean FCDT (r = 0.46, p < 0.05). The P2' latency was positively correlated with DIN (r = 0.47, p < 0.05) and mean FCDT (r = 0.47, p < 0.05). There was no statistically significant correlation between N1'-P2' amplitude and speech outcomes (all ps > 0.05).
Results of this study indicated that variability in CI speech outcomes assessed with the CNC, AzBio-Q, and DIN tests can be partially explained (approximately 16-21%) by the variability of cortical sensory encoding of F-changes reflected by the ACC.
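The "approximately 16-21%" of outcome variance in the conclusion above is simply the squared Pearson correlation (r²) of the significant latency-speech correlations reported in the results; a one-line check using the reported r values:

```python
# Variance explained is the squared Pearson correlation: r^2.
for r in (0.40, 0.46):
    print(f"r = {r:.2f} -> r^2 = {r * r:.2f}")
# r = 0.40 -> r^2 = 0.16
# r = 0.46 -> r^2 = 0.21
```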
Affiliation(s)
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Gabrielle M. Firestone
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang
- Division of Biostatistics and Epidemiology, Cincinnati Children’s Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States

16
Vander Werff KR, Niemczak CE, Morse K. Informational Masking Effects of Speech Versus Nonspeech Noise on Cortical Auditory Evoked Potentials. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4014-4029. [PMID: 34464537 DOI: 10.1044/2021_jslhr-21-00048] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose Background noise has been categorized as energetic masking, due to spectrotemporal overlap of the target and masker in the auditory periphery, or informational masking, due to cognitive-level interference from relevant content such as speech. The effects of masking on cortical and sensory auditory processing can be objectively studied with the cortical auditory evoked potential (CAEP). However, whether effects on neural response morphology are due to energetic spectrotemporal differences or to informational content is not fully understood. The current multi-experiment series was designed to assess the effects of speech versus nonspeech maskers on the neural encoding of speech information in the central auditory system, specifically the effects of speech babble maskers varying in talker number. Method CAEPs were recorded from normal-hearing young adults in response to speech syllables in the presence of energetic maskers (white or speech-shaped noise) and varying amounts of informational masking (speech babble maskers). The primary manipulation of informational masking was the number of talkers in the speech babble, and results were compared to those for nonspeech maskers with different temporal and spectral characteristics. Results Even when nonspeech noise maskers were spectrally shaped and temporally modulated to match the speech babble maskers, notable changes in the typical morphology of the CAEP in response to speech stimuli were identified in the presence of both primarily energetic maskers and speech babble maskers with varying numbers of talkers. Conclusions While differences in CAEP outcomes did not reach significance by number of talkers, neural components were significantly affected by speech babble maskers compared to nonspeech maskers.
These results suggest an informational masking influence on neural encoding of speech information at the sensory cortical level of auditory processing, even without active participation on the part of the listener.
Affiliation(s)
- Christopher E Niemczak
- Department of Communication Sciences and Disorders, Syracuse University, NY
- Geisel School of Medicine at Dartmouth, Hanover, NH
- Kenneth Morse
- Department of Communication Sciences and Disorders, Syracuse University, NY
- Division of Communication Sciences and Disorders, West Virginia University, Morgantown

17
Lim SJ, Carter YD, Njoroge JM, Shinn-Cunningham BG, Perrachione TK. Talker discontinuity disrupts attention to speech: Evidence from EEG and pupillometry. BRAIN AND LANGUAGE 2021; 221:104996. [PMID: 34358924 PMCID: PMC8515637 DOI: 10.1016/j.bandl.2021.104996] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2021] [Revised: 07/11/2021] [Accepted: 07/13/2021] [Indexed: 05/13/2023]
Abstract
Speech is processed less efficiently from discontinuous, mixed talkers than from one consistent talker, but little is known about the neural mechanisms for processing talker variability. Here, we measured psychophysiological responses to talker variability using electroencephalography (EEG) and pupillometry while listeners performed a delayed-recall digit span task. Listeners heard and recalled seven-digit sequences with both talker (single- vs. mixed-talker digits) and temporal (0- vs. 500-ms inter-digit intervals) discontinuities. Talker discontinuity reduced serial recall accuracy. Both talker and temporal discontinuities elicited P3a-like neural evoked responses, while rapid processing of mixed-talker speech led to increased phasic pupil dilation. Furthermore, mixed-talker speech produced less alpha oscillatory power during working memory maintenance, but not during speech encoding. Overall, these results are consistent with an auditory attention and streaming framework in which talker discontinuity leads to involuntary, stimulus-driven attentional reorientation to novel speech sources, resulting in the processing interference classically associated with talker variability.
Affiliation(s)
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
- Yaminah D Carter
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
- J Michelle Njoroge
- Department of Speech, Language, and Hearing Sciences, Boston University, United States
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, United States

18
Zhang Y, Pattamadilok C, Lau DKY, Bakhtiar M, Yim LY, Leung KY, Zhang C. Early Auditory Event-Related Potentials Are Modulated by Alphabetic Literacy Skills in Logographic Chinese Readers. Front Psychol 2021; 12:663166. [PMID: 34393900 PMCID: PMC8358453 DOI: 10.3389/fpsyg.2021.663166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Accepted: 07/09/2021] [Indexed: 11/17/2022] Open
Abstract
The acquisition of an alphabetic orthography transforms speech processing in the human brain. Behavioral evidence shows that phonological awareness, as assessed by meta-phonological tasks like phoneme judgment, is enhanced by alphabetic literacy acquisition. The current study investigates the time course of the neurocognitive operations underlying this enhancement as revealed by event-related potentials (ERPs). Chinese readers with and without proficiency in Jyutping, a Romanization system for Cantonese, were recruited for an auditory onset phoneme judgment task; their behavioral responses and the elicited ERPs were examined. Proficient readers of Jyutping achieved higher response accuracy and exhibited more negative-going ERPs in three early time windows corresponding to the P1, N1, and P2 components. The phonological mismatch negativity component exhibited sensitivity to both onset and rhyme mismatches in the speech stimuli, but it was not modulated by alphabetic literacy skills. The sustained negativity in the P1-N1-P2 time windows is interpreted as reflecting enhanced phonetic/phonological processing or attentional/awareness modulation associated with alphabetic literacy and phonological awareness skills.
Affiliation(s)
- Yubin Zhang
- Department of Linguistics, University of Southern California, Los Angeles, CA, United States
- Chotiga Pattamadilok
- Laboratoire Parole et Langage (LPL), CNRS, Aix Marseille University, Aix-en-Provence, France
- Dustin Kai-Yan Lau
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Mehdi Bakhtiar
- Unit of Human Communication, Development, and Information Sciences, The University of Hong Kong, Hong Kong, China
- Long-Ying Yim
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Ka-Yui Leung
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China

19
Xie Z, Stakhovskaya O, Goupell MJ, Anderson S. Aging Effects on Cortical Responses to Tones and Speech in Adult Cochlear-Implant Users. J Assoc Res Otolaryngol 2021; 22:719-740. [PMID: 34231111 DOI: 10.1007/s10162-021-00804-4] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2020] [Accepted: 05/19/2021] [Indexed: 11/29/2022] Open
Abstract
Age-related declines in auditory temporal processing contribute to the speech understanding difficulties of older adults. These temporal processing deficits have been established primarily among acoustic-hearing listeners, in whom the peripheral and central contributions are difficult to separate. This study recorded cortical auditory evoked potentials from younger to middle-aged (< 65 years) and older (≥ 65 years) cochlear-implant (CI) listeners to assess age-related changes in temporal processing in a population in which cochlear processing is bypassed. Aging effects were compared with those in age-matched normal-hearing (NH) listeners. Advancing age was associated with prolonged P2 latencies in both CI and NH listeners in response to a 1000-Hz tone or the syllable /da/, and with prolonged N1 latencies in CI listeners in response to the syllable. Advancing age was associated with larger N1 amplitudes in NH listeners. These age-related changes in latency and amplitude were independent of stimulus presentation rate. Further, CI listeners exhibited prolonged N1 and P2 latencies and smaller P2 amplitudes than NH listeners. Thus, aging appears to degrade some aspects of auditory temporal processing even when peripheral-cochlear contributions are largely removed, suggesting that changes beyond the cochlea may contribute to age-related temporal processing deficits.
Affiliation(s)
- Zilong Xie
- Department of Hearing and Speech, University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Olga Stakhovskaya
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA

20
Kösemihal E, Akdas F. The Effect of Nonlinear Frequency Compression on Acoustic Change Complex Responses in High-Frequency Dead-Regioned Hearing Loss. J Am Acad Audiol 2021; 32:164-170. [PMID: 34030193 DOI: 10.1055/s-0041-1722948] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
Abstract
PURPOSE This study investigated whether stimuli containing high-frequency information can be distinguished at the cortical level when processed with the nonlinear frequency compression feature, using the acoustic change complex (ACC), and compared these responses with the ACC responses of individuals with normal hearing. RESEARCH DESIGN This is a case-control study. STUDY SAMPLE Thirty adults with normal hearing (21 males and nine females), aged 16 to 63 years (mean: 36.7 ± 12.9 years), and 20 adults with hearing loss (16 males and four females), aged 16 to 70 years (mean: 49.0 ± 19.8 years), were included in this study. DATA COLLECTION AND ANALYSIS A 1,000-ms stimulus containing 500- and 4,000-Hz tonal segments was used for ACC recording. The start frequency (SF) and compression ratio (CR) parameters of the hearing aids were programmed according to the default settings in the device software (SFd, CRd), an optimal setting (SFo, CRo), and an extra-compression setting (SFe, CRe), and the ACC was recorded for each condition. Evaluation was performed on the P1-N1-P2 wave complex and ACC wave latencies. The independent-samples t-test was used to test the significance of differences between the groups. RESULTS An ACC was observed in all individuals. There was a significant difference in wave latencies between the normal-hearing and hearing-impaired groups: all mean wave latencies were longer in the individuals with hearing impairment than in those with normal hearing. There were statistically significant differences between the SFd-SFo, SFd-SFe, and SFo-SFe parameters, but no differences among CRd, CRo, and CRe in terms of CR. CONCLUSION To discriminate high-frequency information at the cortical level, we should not rely on the default SF and CR settings of hearing aids. The optimal bandwidth must be adjusted without applying insufficient compression or over-compression.
The ACC can be used alongside real-ear measurement for hearing aid fitting.
Affiliation(s)
- Ebru Kösemihal
- Department of Audiology, Near East University, Nicosia, Cyprus
- Ferda Akdas
- Department of Audiology, Marmara University School of Medicine, Istanbul, Turkey

21
Soleimani M, Rouhbakhsh N, Rahbar N. Towards early intervention of hearing instruments using cortical auditory evoked potentials (CAEPs): A systematic review. Int J Pediatr Otorhinolaryngol 2021; 144:110698. [PMID: 33839460 DOI: 10.1016/j.ijporl.2021.110698] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/29/2020] [Revised: 02/14/2021] [Accepted: 03/24/2021] [Indexed: 10/21/2022]
Abstract
As a result of newborn hearing screening, hearing aids are usually prescribed and fitted by 2-3 months of age. However, the assessment data used for prescribing hearing aids in infants and toddlers are limited in quality and quantity. There is great interest in finding appropriate physiological measures that can help to facilitate and improve the management of hearing-impaired children. Cortical auditory evoked potentials (CAEPs) appear to provide information before reliable information can be obtained from behavioral assessment procedures. This article reviews the studies conducted in this area during the past 15 years to determine the advantages, disadvantages, and future research areas of CAEPs as an objective method in the management of hearing-impaired children.
Affiliation(s)
- Marjan Soleimani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Nematollah Rouhbakhsh
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences (TUMS), Tehran, Iran
- Nariman Rahbar
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences (IUMS), Tehran, Iran

22
Reliability of Serological Prestin Levels in Humans and its Relation to Otoacoustic Emissions, a Functional Measure of Outer Hair Cells. Ear Hear 2021; 42:1151-1162. [PMID: 33859120 DOI: 10.1097/aud.0000000000001026] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Serological biomarkers, common to many areas of medicine, have the potential to inform on the health of the human body and to give early warning of compromised function or illness before symptoms are experienced. Serological measurement of prestin, a motor protein uniquely produced and expressed in outer hair cells (OHCs), has recently been identified as a potential biomarker of cochlear health. Before any test can be introduced into the clinical toolkit, the reproducibility of the measurement when repeated in the same subject must be established. The primary objective of this study is to outline test-retest reliability estimates and normative ranges for serological prestin in healthy young adults with normal hearing. In addition, we examine the relation between serum prestin levels and otoacoustic emissions (OAEs) to compare this OHC-specific protein with the most common measure of OHC function currently used in hearing assessments. DESIGN We measured prestin levels serologically from circulating blood in 34 young adults (18 to 24 years old) with clinically normal pure-tone audiometric averages at five different timepoints up to 6 months apart (average intervals between measurements ranged from <1 week to 7 weeks). To guide future studies of clinical populations, we present the standard error of measurement, reference normative values, and multiple measures of reliability. Additionally, we measured transient evoked OAEs at the same five timepoints and used correlation coefficients to examine the relation between OAEs and prestin levels (pg/mL). RESULTS Serum prestin levels demonstrated good to excellent reliability between and across the five timepoints, with correlation coefficients and intraclass correlations >0.8. Across sessions, the average serum prestin level was 250.20 pg/mL, with a standard error of measurement of 7.28 pg/mL.
Moreover, positive correlations (generally weak to moderate) were found between prestin levels and OAE magnitudes and signal-to-noise ratios. CONCLUSIONS Findings characterize serum prestin in healthy young adults with normal hearing and provide initial normative data that may be critical to interpreting results from individuals with sensorineural hearing loss. Our results demonstrate the reliability of serum prestin levels in a sample of normal-hearing young adults across five test sessions up to 6 months apart, paving the way for testing larger samples to more accurately estimate test-retest standards for clinical protocols, including those involving serial monitoring. The positive correlations between serum prestin and OAE levels, although weak to moderate, reinforce that the source of serum prestin is likely the outer hair cells of the inner ear, but also that serum prestin and OAEs may each index aspects of biologic function not common to the other.
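The standard error of measurement (SEM) reported above follows from the usual test-retest relation SEM = SD × √(1 − reliability). A minimal sketch; the between-subject SD and ICC values below are hypothetical, since the abstract reports only the resulting SEM (7.28 pg/mL) and that the ICC exceeded 0.8:

```python
import math

# SEM = SD * sqrt(1 - reliability): the measurement noise left over once
# stable between-subject differences (captured by the ICC) are removed.
sd_between = 20.0   # hypothetical between-subject SD (pg/mL)
icc = 0.87          # hypothetical intraclass correlation (>0.8, per abstract)
sem = sd_between * math.sqrt(1 - icc)
print(round(sem, 2))   # 7.21
```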
23
Cortical potentials evoked by tone frequency changes compared to frequency discrimination and speech perception: Thresholds in normal-hearing and hearing-impaired subjects. Hear Res 2020; 401:108154. [PMID: 33387905 DOI: 10.1016/j.heares.2020.108154] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/15/2020] [Revised: 11/29/2020] [Accepted: 12/08/2020] [Indexed: 11/21/2022]
Abstract
Frequency discrimination ability varies within the normal-hearing population, partially explained by factors such as musical training and age, and it deteriorates with hearing loss. Frequency discrimination, while essential for several auditory tasks, is not routinely measured in clinical settings. This study investigates cortical auditory evoked potentials in response to frequency changes, known as acoustic change complexes (ACCs), and explores their value as a clinically applicable objective measurement of frequency discrimination. In 12 normal-hearing and 13 age-matched hearing-impaired subjects, ACC thresholds were recorded at four base frequencies (0.5, 1, 2, and 4 kHz) and compared to psychophysically assessed frequency discrimination thresholds. ACC thresholds showed a moderate to strong correlation with psychophysical frequency discrimination thresholds. In addition, ACC thresholds increased with hearing loss, and higher ACC thresholds were associated with poorer speech perception in noise. The ACC threshold in response to a frequency change therefore holds promise as an objective clinical measurement in hearing impairment, indicative of frequency discrimination ability and related to speech perception. However, recordings as conducted in the current study are relatively time-consuming, so the current clinical application would be most relevant in cases where behavioral testing is unreliable.
24
Bell KL, Lister JJ, Conter R, Harrison Bush AL, O'Brien J. Cognitive Event-Related Potential Responses Differentiate Older Adults with and without Probable Mild Cognitive Impairment. Exp Aging Res 2020; 47:145-164. [PMID: 33342371 DOI: 10.1080/0361073x.2020.1861838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Background: Older adults rarely seek cognitive assessment but often visit other healthcare professionals (e.g., audiologists). Noninvasive clinical measures sensitive to cognitive impairment, within the scopes of practice of those professions, are needed. Purpose: This study examined the effects of probable mild cognitive impairment (MCI) on the latency and mean amplitude of the P3b auditory event-related potential. Method: Fifty-four participants comprised two groups according to cognitive status (cognitively normal older adults [CNOA], n = 25; probable MCI, n = 29). The P3b was recorded using an oddball paradigm for speech (/ba/, /da/) and non-speech (1000, 2000 Hz) stimuli. Amplitudes and latencies were compared between groups across stimulus probability and type at six electrodes (FPz, Fz, FCz, Cz, CPz, Pz). Results: CNOA participants had larger P3b mean amplitudes for deviant stimuli than those with probable MCI. Group effects on latency were isolated to deviant stimuli at FCz, and only when those with unclear P3bs were included. Findings did not covary with age or education. Overall, CNOAs showed a large P3b oddball effect while those with probable MCI did not. Conclusions: The P3b can be used to show electrophysiological differences between older adults with and without probable MCI. These results support the development of educational materials targeting professionals using auditory evoked potentials.
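An oddball paradigm such as the one described above presents a frequent "standard" and a rare "deviant" stimulus in pseudo-random order. A minimal sketch of a sequence generator; the 80/20 probability split and trial count are illustrative assumptions, not values reported in the abstract:

```python
import random

def oddball_sequence(standard, deviant, n_trials=200, p_deviant=0.2, seed=1):
    """Return a pseudo-random stimulus sequence for an oddball paradigm."""
    rng = random.Random(seed)
    return [deviant if rng.random() < p_deviant else standard
            for _ in range(n_trials)]

# Speech condition: frequent /ba/, rare /da/ (stimuli as in the abstract).
seq = oddball_sequence("ba", "da")
print(round(seq.count("da") / len(seq), 2))  # close to the nominal 0.2
```

The P3b is then obtained by averaging EEG epochs time-locked to the rare deviants and comparing them against the standards.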
Affiliation(s)
- Karen L Bell, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA
- Jennifer Jones Lister, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA
- Rachel Conter, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA
- Aryn L Harrison Bush, Department of Communication Sciences and Disorders, University of South Florida, Tampa, Florida, USA; Department of Brain Health and Cognition, Reliance Medical Centers, Lakeland, Florida, USA
- Jennifer O'Brien, Department of Psychology, University of South Florida, Tampa, Florida, USA
25. Parker A, Slack C, Skoe E. Comparisons of Auditory Brainstem Responses Between a Laboratory and Simulated Home Environment. J Speech Lang Hear Res 2020;63:3877-3892. PMID: 33108246. DOI: 10.1044/2020_jslhr-20-00383.
Abstract
Purpose Miniaturization of digital technologies has created new opportunities for remote health care and neuroscientific fieldwork. The current study compares in-home auditory brainstem response (ABR) recordings with recordings obtained in a traditional laboratory setting. Method Click-evoked and speech-evoked ABRs were recorded in 12 normal-hearing, young adult participants over three test sessions: (a) in a shielded sound booth within a research lab, (b) in a simulated home environment, and (c) in the research lab once more. The same single-family house was used for all home testing. Results Analyses of ABR latencies, a common clinical metric, showed high repeatability between the home and lab environments for both the click-evoked and speech-evoked ABRs. Like ABR latencies, response consistency and signal-to-noise ratio (SNR) were robust in both the lab and the home and did not differ significantly between locations, although between-location variability was higher for these measures than for latencies, with two participants driving the lower repeatability. Response consistency and SNR also patterned together, with a trend for higher SNRs to pair with more consistent responses in both environments. Conclusions Our findings demonstrate the feasibility of obtaining high-quality ABR recordings within a simulated home environment that closely approximate those recorded in a more traditional recording environment. This line of work may open doors to greater accessibility for underserved clinical and research populations.
Affiliation(s)
- Ashley Parker, Department of Speech, Language, and Hearing Sciences, and Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs
- Candace Slack, Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs
- Erika Skoe, Department of Speech, Language, and Hearing Sciences, and Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs
26. Dynamic Time-Locking Mechanism in the Cortical Representation of Spoken Words. eNeuro 2020;7:ENEURO.0475-19.2020. PMID: 32513662. PMCID: PMC7470935. DOI: 10.1523/eneuro.0475-19.2020.
Abstract
Human speech has a unique capacity to carry and communicate rich meanings. However, it is not known how the highly dynamic and variable perceptual signal is mapped to existing linguistic and semantic representations. In this novel approach, we used the natural acoustic variability of sounds and mapped them to magnetoencephalography (MEG) data using physiologically-inspired machine-learning models. We aimed at determining how well the models, differing in their representation of temporal information, serve to decode and reconstruct spoken words from MEG recordings in 16 healthy volunteers. We discovered that dynamic time-locking of the cortical activation to the unfolding speech input is crucial for the encoding of the acoustic-phonetic features of speech. In contrast, time-locking was not highlighted in cortical processing of non-speech environmental sounds that conveyed the same meanings as the spoken words, including human-made sounds with temporal modulation content similar to speech. The amplitude envelope of the spoken words was particularly well reconstructed based on cortical evoked responses. Our results indicate that speech is encoded cortically with especially high temporal fidelity. This speech tracking by evoked responses may partly reflect the same underlying neural mechanism as the frequently reported entrainment of the cortical oscillations to the amplitude envelope of speech. Furthermore, the phoneme content was reflected in cortical evoked responses simultaneously with the spectrotemporal features, pointing to an instantaneous transformation of the unfolding acoustic features into linguistic representations during speech processing.
27. Uhrig S, Perkis A, Behne DM. Effects of speech transmission quality on sensory processing indicated by the cortical auditory evoked potential. J Neural Eng 2020;17:046021. PMID: 32422617. DOI: 10.1088/1741-2552/ab93e1.
Abstract
OBJECTIVE Degradations of transmitted speech have been shown to affect perceptual and cognitive processing in human listeners, as indicated by the P3 component of the event-related brain potential (ERP). However, research suggests that previously observed P3 modulations might actually be traced back to earlier neural modulations in the time range of the P1-N1-P2 complex of the cortical auditory evoked potential (CAEP). This study investigates whether auditory sensory processing, as reflected by the P1-N1-P2 complex, is already systematically altered by speech quality degradations. APPROACH Electrophysiological data from two studies were analyzed to examine effects of speech transmission quality (high-quality, noisy, bandpass-filtered) for spoken words on amplitude and latency parameters of individual P1, N1 and P2 components. MAIN RESULTS In the resultant ERP waveforms, an initial P1-N1-P2 manifested at stimulus onset, while a second N1-P2 occurred within the ongoing stimulus. Bandpass-filtered versus high-quality word stimuli evoked a faster and larger initial N1 as well as a reduced initial P2, hence exhibiting effects as early as the sensory stage of auditory information processing. SIGNIFICANCE The results corroborate the existence of systematic quality-related modulations in the initial N1-P2, which may potentially have carried over into P3 modulations demonstrated by previous studies. In future psychophysiological speech quality assessments, rigorous control procedures are needed to ensure the validity of P3-based indication of speech transmission quality. An alternative CAEP-based assessment approach is discussed, which promises to be more efficient and less constrained than the established approach based on P3.
Affiliation(s)
- Stefan Uhrig, Quality and Usability Lab, Technische Universität Berlin, D-10587 Berlin, Germany; Department of Electronic Systems, Norwegian University of Science and Technology, 7491 Trondheim, Norway (corresponding author)
28. Whitten A, Key AP, Mefferd AS, Bodfish JW. Auditory event-related potentials index faster processing of natural speech but not synthetic speech over nonspeech analogs in children. Brain Lang 2020;207:104825. PMID: 32563764. DOI: 10.1016/j.bandl.2020.104825.
Abstract
Given the crucial role of speech sounds in human language, it may be beneficial for speech to be supported by more efficient auditory and attentional neural processing mechanisms compared to nonspeech sounds. However, previous event-related potential (ERP) studies have found either no differences or slower auditory processing of speech than nonspeech, as well as inconsistent attentional processing. We hypothesized that this may be due to the use of synthetic stimuli in past experiments. The present study measured ERP responses during passive listening to both synthetic and natural speech and complexity-matched nonspeech analog sounds in 22 8-11-year-old children. We found that although children were more likely to show immature auditory ERP responses to the more complex natural stimuli, ERP latencies were significantly faster to natural speech compared to cow vocalizations, but were significantly slower to synthetic speech compared to tones. The attentional results indicated a P3a orienting response only to the cow sound, and we discuss potential methodological reasons for this. We conclude that our results support more efficient auditory processing of natural speech sounds in children, though more research with a wider array of stimuli will be necessary to confirm these results. Our results also highlight the importance of using natural stimuli in research investigating the neurobiology of language.
Affiliation(s)
- Allison Whitten, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Alexandra P Key, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, Nashville, TN, USA; Vanderbilt Kennedy Center, Nashville, TN, USA
- Antje S Mefferd, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Kennedy Center, Nashville, TN, USA
- James W Bodfish, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt Psychiatric Hospital, Nashville, TN, USA; Vanderbilt Kennedy Center, Nashville, TN, USA; Vanderbilt Brain Institute, Nashville, TN, USA
29. McFayden TC, Baskin P, Stephens JDW, He S. Cortical Auditory Event-Related Potentials and Categorical Perception of Voice Onset Time in Children With an Auditory Neuropathy Spectrum Disorder. Front Hum Neurosci 2020;14:184. PMID: 32523521. PMCID: PMC7261872. DOI: 10.3389/fnhum.2020.00184.
Abstract
Objective: This study evaluated cortical encoding of voice onset time (VOT) in quiet and in noise, and their potential associations with the behavioral categorical perception of VOT in children with auditory neuropathy spectrum disorder (ANSD). Design: Subjects were 11 children with ANSD ranging in age between 6.4 and 16.2 years. The stimulus was an /aba/-/apa/ vowel-consonant-vowel (VCV) continuum comprising eight tokens with VOTs ranging from 0 ms (voiced endpoint) to 88 ms (voiceless endpoint). For speech in noise, speech tokens were mixed with the speech-shaped noise from the Hearing In Noise Test at a signal-to-noise ratio (SNR) of +5 dB. Speech-evoked auditory event-related potentials (ERPs) and behavioral categorical perception of VOT were measured in quiet in all subjects, and at an SNR of +5 dB in seven subjects. The stimuli were presented at 35 dB SL (re: pure tone average), or at 115 dB SPL if this limit was less than 35 dB SL. In addition to the onset response, the acoustic change complex (ACC) elicited by VOT was recorded in eight subjects. Results: Speech-evoked ERPs recorded in all subjects consisted of a vertex-positive peak (i.e., P1), followed by a trough occurring approximately 100 ms later (i.e., N2). For results measured in quiet, there was no significant difference between categorical boundaries estimated using ERP measures and those estimated using behavioral procedures. Categorical boundaries estimated in quiet using both ERP and behavioral measures closely correlated with the most recently measured Phonetically Balanced Kindergarten (PBK) scores. Adding a competing background noise did not affect categorical boundaries estimated using either behavioral or ERP procedures in three subjects. For the other four subjects, categorical boundaries estimated in noise using behavioral measures were prolonged. However, adding background noise increased categorical boundaries measured using ERPs in only three of these four subjects.
Conclusions: The VCV continuum can be used to evaluate behavioral identification and the neural encoding of VOT in children with ANSD. In quiet, categorical boundaries of VOT estimated using behavioral measures and ERP recordings are closely associated with speech recognition performance in children with ANSD. The mechanisms underlying excessive speech perception deficits in noise may vary across individual patients with ANSD.
Affiliation(s)
- Tyler C McFayden, Department of Psychology, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
- Paola Baskin, Department of Anesthesiology, School of Medicine, University of California, San Diego, San Diego, CA, United States
- Joseph D W Stephens, Department of Psychology, North Carolina Agricultural and Technical State University, Greensboro, NC, United States
- Shuman He, Department of Otolaryngology-Head and Neck Surgery, Wexner Medical Center, The Ohio State University, Columbus, OH, United States; Department of Audiology, Nationwide Children's Hospital, Columbus, OH, United States
30. Randazzo M, Priefer R, Smith PJ, Nagler A, Avery T, Froud K. Neural Correlates of Modality-Sensitive Deviance Detection in the Audiovisual Oddball Paradigm. Brain Sci 2020;10(6):328. PMID: 32481538. PMCID: PMC7348766. DOI: 10.3390/brainsci10060328.
Abstract
The McGurk effect, an incongruent pairing of visual /ga/ with acoustic /ba/, creates the fusion illusion /da/ and is the cornerstone of research in audiovisual speech perception. Combination illusions occur when the input modalities are reversed (auditory /ga/, visual /ba/), yielding the percept /bga/. A robust literature shows that fusion illusions in an oddball paradigm evoke a mismatch negativity (MMN) in the auditory cortex, in the absence of changes to the acoustic stimuli. We compared fusion and combination illusions in a passive oddball paradigm to further examine the influence of visual and auditory aspects of incongruent speech stimuli on the audiovisual MMN. Participants viewed videos under two audiovisual illusion conditions (fusion, with the visual aspect of the stimulus changing, and combination, with the auditory aspect of the stimulus changing) as well as two unimodal auditory-only and visual-only conditions. Fusion and combination deviants exerted similar influence in generating congruency predictions, with significant differences between standards and deviants in the N100 time window. The presence of the MMN in early and late time windows differentiated fusion from combination deviants. When the visual signal changes, a new percept is created; but when the visual signal is held constant and the auditory signal changes, the response is suppressed, evoking a later MMN. In alignment with models of predictive processing in audiovisual speech perception, we interpreted our results to indicate that visual information can both predict and suppress auditory speech perception.
Affiliation(s)
- Melissa Randazzo, Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA (correspondence; Tel.: +1-516-877-4769)
- Ryan Priefer, Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA
- Paul J. Smith, Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA
- Amanda Nagler, Department of Communication Sciences and Disorders, Adelphi University, Garden City, NY 11530, USA
- Trey Avery, Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA
- Karen Froud, Neuroscience and Education, Department of Biobehavioral Sciences, Teachers College, Columbia University, New York, NY 10027, USA
31. Pike M, Biagio-de Jager L, le Roux T, Hofmeyr LM. Short-Term Test-Retest Reliability of Electrically Evoked Cortical Auditory Potentials in Adult Cochlear Implant Recipients. Front Neurol 2020;11:305. PMID: 32411080. PMCID: PMC7198904. DOI: 10.3389/fneur.2020.00305.
Abstract
Background: Late latency auditory evoked potentials (LLAEPs) provide objective evidence of an individual's central auditory processing abilities. Electrically evoked cortical auditory evoked potentials (eCAEPs) are a type of LLAEP that provides an objective measure of aided speech perception and auditory processing abilities in cochlear implant (CI) recipients. Aim: To determine the short-term test-retest reliability of eCAEPs in adult CI recipients. Design: An explorative, within-subject repeated measures research design was employed. Study Sample: The study sample included 12 post-lingually deafened, unilaterally implanted adult CI recipients with at least 9 months of CI experience. Method: eCAEPs representing basal, medial and apical cochlear regions were recorded in the implanted ears of each participant. Measurements were repeated 7 days after the initial assessment. Results: No significant differences between either median latencies or amplitudes at test and retest sessions (p > 0.05) were found when results for apical, medial and basal electrodes were averaged together. Mean intraclass correlation coefficient (ICC) scores averaged across basal, medial and apical cochlear stimulus regions indicated that both consistency and agreement were statistically significant and ranged from moderate to good (ICC = 0.58-0.86, p < 0.05). ICC confidence intervals did demonstrate considerable individual variability in both latency and amplitudes. Conclusion: eCAEP latencies and amplitudes demonstrated moderate to good short-term test-retest reliability. However, confidence intervals indicated individual variability in measurement consistency which is likely linked to attention and listening effort required from the CI recipients.
Affiliation(s)
- Meghan Pike, Department of Speech Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Leigh Biagio-de Jager, Department of Speech Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Talita le Roux, Department of Speech Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Louis M Hofmeyr, Department of Speech Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
32. Vanheusden FJ, Kegler M, Ireland K, Georga C, Simpson DM, Reichenbach T, Bell SL. Hearing Aids Do Not Alter Cortical Entrainment to Speech at Audible Levels in Mild-to-Moderately Hearing-Impaired Subjects. Front Hum Neurosci 2020;14:109. PMID: 32317951. PMCID: PMC7147120. DOI: 10.3389/fnhum.2020.00109.
Abstract
BACKGROUND Cortical entrainment to speech correlates with speech intelligibility and attention to a speech stream in noisy environments. However, there is a lack of data on whether cortical entrainment can help in evaluating hearing aid fittings for subjects with mild to moderate hearing loss. One particular problem that may arise is that hearing aids may alter the speech stimulus during (pre-)processing steps, which might alter cortical entrainment to the speech. Here, the effect of hearing aid processing on cortical entrainment to running speech in hearing-impaired subjects was investigated. METHODOLOGY Seventeen native English-speaking subjects with mild-to-moderate hearing loss participated in the study. Hearing function and hearing aid fitting were evaluated using standard clinical procedures. Participants then listened to a 25-min audiobook under aided and unaided conditions at 70 dBA sound pressure level (SPL) in quiet. EEG data were collected using a 32-channel system. Cortical entrainment to speech was evaluated using decoders reconstructing the speech envelope from the EEG data. Null decoders, obtained from the EEG and the time-reversed speech envelope, were used to estimate chance-level reconstruction. Entrainment in the delta (1-4 Hz) and theta (4-8 Hz) bands, as well as in wideband (1-20 Hz) EEG data, was investigated. RESULTS Significant cortical responses could be detected for all but one subject in all three frequency bands under both aided and unaided conditions. However, no significant differences were found between the two conditions in the number of responses detected, nor in the strength of cortical entrainment. The results show that the relatively small change in speech input provided by the hearing aid was not sufficient to elicit a detectable change in cortical entrainment. CONCLUSION For subjects with mild to moderate hearing loss, cortical entrainment to speech in quiet at an audible level is not affected by hearing aids. These results clear the path for exploring the potential of cortical entrainment to running speech for evaluating hearing aid fittings at lower speech intensities (which could be inaudible when unaided), or in speech-in-noise conditions.
Affiliation(s)
- Frederique J. Vanheusden, Department of Engineering, School of Science and Technology, Nottingham Trent University, Nottingham, United Kingdom; Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Mikolaj Kegler, Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, London, United Kingdom
- Katie Ireland, Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Constantina Georga, Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- David M. Simpson, Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Tobias Reichenbach, Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, London, United Kingdom
- Steven L. Bell, Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
33. Balkenhol T, Wallhäusser-Franke E, Rotter N, Servais JJ. Changes in Speech-Related Brain Activity During Adaptation to Electro-Acoustic Hearing. Front Neurol 2020;11:161. PMID: 32300327. PMCID: PMC7145411. DOI: 10.3389/fneur.2020.00161.
Abstract
Objectives: Hearing improves significantly with bimodal provision, i.e., a cochlear implant (CI) at one ear and a hearing aid (HA) at the other, but performance shows a high degree of variability resulting in substantial uncertainty about the performance that can be expected by the individual CI user. The objective of this study was to explore how auditory event-related potentials (AERPs) of bimodal listeners in response to spoken words approximate the electrophysiological response of normal hearing (NH) listeners. Study Design: Explorative prospective analysis during the first 6 months of bimodal listening using a within-subject repeated measures design. Setting: Academic tertiary care center. Participants: Twenty-seven adult participants with bilateral sensorineural hearing loss who received a HiRes 90K CI and continued use of a HA at the non-implanted ear. Age-matched NH listeners served as controls. Intervention: Cochlear implantation. Main Outcome Measures: Obligatory auditory evoked potentials N1 and P2, and the event-related N2 potential in response to monosyllabic words and their reversed sound traces before, as well as 3 and 6 months post-implantation. The task required word/non-word classification. Stimuli were presented within speech-modulated noise. Loudness of word/non-word signals was adjusted individually to achieve the same intelligibility across groups and assessments. Results: Intelligibility improved significantly with bimodal hearing, and the N1-P2 response approximated the morphology seen in NH with enhanced and earlier responses to the words compared to their reversals. For bimodal listeners, a prominent negative deflection was present between 370 and 570 ms post stimulus onset (N2), irrespective of stimulus type. This was absent for NH controls; hence, this response did not approximate the NH response during the study interval. 
N2 source localization indicated extended activation of general cognitive regions in frontal and prefrontal brain areas in the CI group. Conclusions: Prolonged and spatially extended processing in bimodal CI users suggests the employment of additional auditory-cognitive mechanisms during speech processing. This did not diminish within 6 months of bimodal experience and may be a correlate of the enhanced listening effort described by CI listeners.
34. Tepe V, Papesh M, Russell S, Lewis MS, Pryor N, Guillory L. Acquired Central Auditory Processing Disorder in Service Members and Veterans. J Speech Lang Hear Res 2020;63:834-857. PMID: 32163310. DOI: 10.1044/2019_jslhr-19-00293.
Abstract
Purpose A growing body of evidence suggests that military service members and military veterans are at risk for deficits in central auditory processing. Risk factors include exposure to blast, neurotrauma, hazardous noise, and ototoxicants. We overview these risk factors and comorbidities, address implications for clinical assessment and care of central auditory processing deficits in service members and veterans, and specify knowledge gaps that warrant research. Method We reviewed the literature to identify studies of risk factors, assessment, and care of central auditory processing deficits in service members and veterans. We also assessed the current state of the science for knowledge gaps that warrant additional study. This literature review describes key findings relating to military risk factors and clinical considerations for the assessment and care of those exposed. Conclusions Central auditory processing deficits are associated with exposure to known military risk factors. Research is needed to characterize mechanisms, sources of variance, and differential diagnosis in this population. Existing best practices do not explicitly consider confounds faced by military personnel. Assessment and rehabilitation strategies that account for these challenges are needed. Finally, investment is critical to ensure that Veterans Affairs and Department of Defense clinical staff are informed, trained, and equipped to implement effective patient care.
Affiliation(s)
- Victoria Tepe, Department of Defense Hearing Center of Excellence, JBSA Lackland, TX; The Geneva Foundation, Tacoma, WA
- Melissa Papesh, VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR; Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland
- Shoshannah Russell, Walter Reed National Military Medical Center, Bethesda, MD; Henry Jackson Foundation, Bethesda, MD
- M Samantha Lewis, VA RR&D National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR; Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland; School of Audiology, Pacific University, Hillsboro, OR
- Nina Pryor, Department of Defense Hearing Center of Excellence, JBSA Lackland, TX; Air Force Research Laboratory, Wright-Patterson Air Force Base, OH
- Lisa Guillory, Harry S. Truman Memorial Veterans' Hospital, Columbia, MO; Department of Otolaryngology-Head and Neck Surgery, School of Medicine, University of Missouri, Columbia
35. Han JH, Dimitrijevic A. Acoustic Change Responses to Amplitude Modulation in Cochlear Implant Users: Relationships to Speech Perception. Front Neurosci 2020;14:124. PMID: 32132897. PMCID: PMC7040081. DOI: 10.3389/fnins.2020.00124.
Abstract
Objectives The ability to understand speech is highly variable in people with cochlear implants (CIs), and to date there are no objective measures that identify the root of this discrepancy. However, behavioral measures of temporal processing such as the temporal modulation transfer function (TMTF) have previously been found to be related to vowel and consonant identification in CI users. The acoustic change complex (ACC) is a cortical auditory evoked potential response that can be elicited by a “change” in an ongoing stimulus. In this study, the ACC elicited by an amplitude modulation (AM) change was related to measures of speech perception as well as to the AM detection threshold in CI users. Methods Ten CI users (mean age: 50 years) participated in this study. All subjects completed behavioral tests that included both speech and amplitude modulation detection to obtain a TMTF. CI users were categorized as “good” (n = 6) or “poor” (n = 4) based on their speech-in-noise score (<50%). 64-channel electroencephalographic recordings were conducted while CI users passively listened to AM change sounds presented in a free-field setting. The AM change stimulus was white noise with four different AM rates (4, 40, 100, and 300 Hz). Results Behavioral results show that AM detection thresholds in CI users were higher than those of the normal-hearing (NH) group for all AM rates. The electrophysiological data suggest that N1 responses were significantly decreased in amplitude, and their latencies increased, in CI users compared to NH controls. In addition, the N1 latencies for the poor CI performers were delayed compared to the good CI performers. The N1 latency for 40 Hz AM was correlated with various speech perception measures. Conclusion Our data suggest that the ACC to AM change provides an objective index of speech perception abilities that can be used to explain some of the variation in speech perception observed among CI users.
Affiliation(s)
- Ji-Hye Han: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Laboratory of Brain & Cognitive Sciences for Convergence Medicine, College of Medicine, Hallym University, Chuncheon, South Korea
- Andrew Dimitrijevic: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Faculty of Medicine, University of Toronto, Toronto, ON, Canada

36
Silva DMR, Rothe-Neves R, Melges DB. Long-latency event-related responses to vowels: N1-P2 decomposition by two-step principal component analysis. Int J Psychophysiol 2019; 148:93-102. [PMID: 31863852] [DOI: 10.1016/j.ijpsycho.2019.11.010]
Abstract
The N1-P2 complex of the auditory event-related potential (ERP) has been used to examine neural activity associated with speech sound perception. Since it is thought to reflect multiple generator processes, its functional significance is difficult to infer. In the present study, a temporospatial principal component analysis (PCA) was used to decompose the N1-P2 response into latent factors underlying covariance patterns in ERP data recorded during passive listening to pairs of successive vowels. In each trial, one of six sounds drawn from an /i/-/e/ vowel continuum was followed either by an identical sound, a different token of the same vowel category, or a token from the other category. Responses were examined as to how they were modulated by within- and across-category vowel differences and by adaptation (repetition suppression) effects. Five PCA factors were identified as corresponding to three well-known N1 subcomponents and two P2 subcomponents. Results added evidence that the N1 peak reflects both generators that are sensitive to spectral information and generators that are not. For later latency ranges, different patterns of sensitivity to vowel quality were found, including category-related effects. Particularly, a subcomponent identified as the Tb wave showed release from adaptation in response to an /i/ followed by an /e/ sound. A P2 subcomponent varied linearly with spectral shape along the vowel continuum, while the other was stronger the closer the vowel was to the category boundary, suggesting separate processing of continuous and category-related information. Thus, the PCA-based decomposition of the N1-P2 complex was functionally meaningful, revealing distinct underlying processes at work during speech sound perception.
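The temporal step of a two-step (temporospatial) PCA like the one used here extracts factor time courses from a waveform-by-time matrix; the factor scores then feed a spatial PCA. A minimal SVD-based sketch on simulated data follows; the factor count and data shapes are illustrative, not the study's.

```python
import numpy as np

def temporal_pca(erp, n_factors=3):
    """Temporal step of a temporospatial PCA.
    erp: rows are waveforms (subject x condition x electrode, flattened),
    columns are time points.  Returns factor time courses (loadings),
    per-waveform scores, and the variance explained by each factor."""
    X = erp - erp.mean(axis=0, keepdims=True)        # center each time point
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    loadings = Vt[:n_factors]                        # factor time courses
    scores = U[:, :n_factors] * S[:n_factors]        # weights per waveform
    var_explained = S[:n_factors] ** 2 / np.sum(S ** 2)
    return loadings, scores, var_explained
```

In the full two-step procedure the scores would be reshaped by electrode and submitted to a spatial PCA; a varimax or promax rotation of the loadings is commonly applied before interpreting factors as N1/P2 subcomponents.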
Affiliation(s)
- Daniel M R Silva: Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Rui Rothe-Neves: Phonetics Lab, Faculty of Letters, Federal University of Minas Gerais, Belo Horizonte, Brazil
- Danilo B Melges: Graduate Program in Electrical Engineering, Department of Electrical Engineering, Federal University of Minas Gerais, Belo Horizonte, Brazil

37
Shim HJ, Go G, Lee H, Choi SW, Won JH. Influence of Visual Deprivation on Auditory Spectral Resolution, Temporal Resolution, and Speech Perception. Front Neurosci 2019; 13:1200. [PMID: 31780886] [PMCID: PMC6851016] [DOI: 10.3389/fnins.2019.01200]
Abstract
We evaluated whether blind subjects have advantages in auditory spectral resolution, temporal resolution, and speech perception in noise compared with sighted subjects. We also compared psychoacoustic performance between early blind (EB) subjects and late blind (LB) subjects. Nineteen EB subjects, 16 LB subjects, and 20 sighted individuals were enrolled. All subjects were right-handed with normal and symmetric hearing thresholds and without cognitive impairments. Three psychoacoustic measurements of the subjects’ right ears were performed via an inserted earphone to determine spectral-ripple discrimination (SRD), temporal modulation detection (TMD), and speech recognition threshold (SRT) in noisy conditions. Acoustic change complex (ACC) responses were recorded during passive listening to standard ripple-inverted ripple stimuli. EB subjects exhibited better SRD than did LB (p = 0.020) and sighted (p = 0.003) subjects. TMD was better in EB (p < 0.001) and LB (p = 0.007) subjects compared with sighted subjects. SRD was positively correlated with the duration of blindness (r = 0.386, p = 0.024). Acoustic change complex data for ripple noise change at the Cz and Fz electrodes showed trends toward significant correlations with the behavioral results. In conclusion, compared with sighted subjects, EB subjects showed advantages in terms of auditory spectral and temporal resolution, while LB subjects showed an advantage in temporal resolution exclusively. These findings suggest that it might take longer for auditory spectral resolution to functionally enhance following visual deprivation compared to temporal resolution. Alternatively, a critical period of very young age may be required for auditory spectral resolution to improve following visual deprivation.
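Spectral-ripple discrimination (SRD) contrasts a stimulus whose spectral envelope is sinusoidal on a log-frequency axis with its phase-inverted counterpart (envelope peaks become valleys). The generator below is a hedged sketch; ripple depth, component count, and frequency range are assumed values, as the abstract does not give the study's stimulus parameters.

```python
import numpy as np

def spectral_ripple(fs=22050, dur=0.5, ripples_per_octave=1.0, phase=0.0,
                    f_lo=100.0, f_hi=5000.0, n_comp=200, seed=0):
    """Sum of random-phase tones whose amplitudes follow a sinusoidal
    spectral envelope on a log-frequency axis.  The 'inverted' ripple is
    the same stimulus with `phase` shifted by pi."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    freqs = np.logspace(np.log10(f_lo), np.log10(f_hi), n_comp)
    octaves = np.log2(freqs / f_lo)
    # sinusoidal spectral envelope in dB, converted to linear amplitude
    env_db = 15.0 * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    amps = 10 ** (env_db / 20)
    phases = rng.uniform(0, 2 * np.pi, n_comp)
    sig = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                  + phases[:, None])).sum(axis=0)
    return sig / np.max(np.abs(sig))

standard = spectral_ripple(phase=0.0)
inverted = spectral_ripple(phase=np.pi)
```

Alternating `standard` and `inverted` segments provides the ripple-change event referred to in the ACC recordings above.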
Affiliation(s)
- Hyun Joon Shim: Department of Otorhinolaryngology-Head and Neck Surgery, Eulji Medical Center, Eulji University School of Medicine, Seoul, South Korea
- Geurim Go: Department of Psychology, Duksung Women's University, Seoul, South Korea
- Heirim Lee: Department of Psychology, Duksung Women's University, Seoul, South Korea
- Sung Won Choi: Department of Psychology, Duksung Women's University, Seoul, South Korea
- Jong Ho Won: Division of ENT, Sleep Disordered Breathing, Respiratory, and Anesthesia, Office of Product Evaluation and Quality, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, United States

38
Abstract
OBJECTIVES The objectives of this study were to measure the effects of level and vowel contrast on the latencies and amplitudes of the acoustic change complex (ACC) in the mature auditory system. This was done to establish how the ACC in healthy young adults is affected by these stimulus parameters, which could then inform translation of the ACC into a clinical measure for the pediatric population. Another aim was to demonstrate that a normalized amplitude metric, calculated by dividing the ACC amplitude in the vowel-contrast condition by the ACC amplitude obtained in a control condition (no vowel change), would demonstrate good sensitivity with respect to perceptual measures of vowel-contrast detection. The premises underlying this research were that: (1) ACC latencies and amplitudes would vary with level, in keeping with the increase in neural synchrony and activity that takes place as a function of increasing stimulus level; and (2) ACC latencies and amplitudes would vary with vowel contrast, because cortical auditory evoked potentials are known to be sensitive to the spectro-temporal characteristics of speech. DESIGN Nineteen adults, 14 of them female, with a mean age of 24.2 years (range 20 to 38 years), participated in this study. All had normal hearing thresholds. Cortical auditory evoked potentials were obtained from all participants in response to synthesized vowel tokens (/a/, /i/, /o/, /u/), presented in a quasi-steady-state fashion at a rate of 2/sec in an oddball stimulus paradigm, with a 25% probability of the deviant stimulus. The ACC was obtained in response to the deviant stimulus. All combinations of vowel tokens were tested at two stimulus levels: 40 and 70 dBA. In addition, listeners were tested for their ability to detect the vowel contrasts using behavioral methods. RESULTS ACC amplitude varied systematically with level, test condition (control versus contrast), and vowel token, but ACC latency did not.
ACC amplitudes were significantly larger when tested at 70 dBA compared with 40 dBA and for contrast trials compared with control trials at both levels. Amplitude ratios (normalized amplitudes) were largest for contrast pairs in which /a/ was the standard token. The amplitude ratio metric at the individual level demonstrated up to 97% sensitivity with respect to perceptual measures of discrimination. CONCLUSIONS The present study establishes the effects of stimulus level and vowel type on the latency and amplitude of the ACC in the young adult auditory system and supports the amplitude ratio as a sensitive metric for cortical acoustic salience of vowel spectral features. Next steps are to evaluate these methods in infants and children with hearing loss with the long-term goal of its translation into a clinical method for estimating speech feature discrimination.
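The normalized amplitude metric above is simply the ACC amplitude in the contrast condition divided by that in the no-change control. A sketch follows, assuming a peak-to-peak amplitude taken in a fixed post-change window; the window bounds and change time are illustrative values, not the study's.

```python
import numpy as np

def acc_peak_to_peak(erp, fs, change_s, search_s=(0.05, 0.40)):
    """Peak-to-peak amplitude of a single-channel average waveform in a
    window after the stimulus change (all times in seconds)."""
    i0 = int((change_s + search_s[0]) * fs)
    i1 = int((change_s + search_s[1]) * fs)
    seg = erp[i0:i1]
    return seg.max() - seg.min()

def normalized_acc(contrast_erp, control_erp, fs, change_s=0.5):
    """Amplitude ratio: ACC in the vowel-contrast condition divided by
    the response in the no-change control condition."""
    return (acc_peak_to_peak(contrast_erp, fs, change_s) /
            acc_peak_to_peak(control_erp, fs, change_s))
```

A ratio near 1 indicates no response beyond the ongoing-stimulus baseline; ratios well above 1 index a cortically salient vowel change.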
39
Cortical auditory responses index the contributions of different RMS-level-dependent segments to speech intelligibility. Hear Res 2019; 383:107808. [DOI: 10.1016/j.heares.2019.107808]
40
Vonck BMD, Lammers MJW, van der Waals M, van Zanten GA, Versnel H. Cortical Auditory Evoked Potentials in Response to Frequency Changes with Varied Magnitude, Rate, and Direction. J Assoc Res Otolaryngol 2019; 20:489-498. [PMID: 31168759] [PMCID: PMC6797694] [DOI: 10.1007/s10162-019-00726-2]
Abstract
Recent literature on cortical auditory evoked potentials has focused on correlations with hearing performance with the aim of developing an objective clinical tool. However, cortical responses depend on the type of stimulus and the choice of stimulus parameters. This study investigates cortical auditory evoked potentials to sound changes, so-called acoustic change complexes (ACC), and the effects of varying three stimulus parameters. In twelve normal-hearing subjects, ACC waveforms were evoked by presenting frequency changes with varying magnitude, rate, and direction. The N1 amplitude and latency were strongly affected by magnitude, which is known from the literature. Importantly, both of these N1 variables were also significantly affected by both the rate and the direction of the frequency change. Larger and earlier N1 peaks were evoked by increasing the magnitude and rate of the frequency change, and by downward rather than upward changes. The P2 amplitude increased with magnitude and depended, to a lesser extent, on the rate of the frequency change, while direction had no effect on this peak. The N1-P2 interval was not affected by any of the stimulus parameters. In conclusion, the ACC is most strongly affected by the magnitude, and also substantially by the rate and direction, of the change. These stimulus dependencies should be considered in choosing stimuli for the ACC as an objective clinical measure of hearing performance.
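The three parameters varied in this study (magnitude, rate, and direction of the frequency change) map directly onto a phase-continuous tone generator, where a shorter transition ramp means a faster rate of change. The sketch below uses assumed values (1 kHz base frequency, 2 s duration, 20 ms ramp), not the study's actual stimulus settings.

```python
import numpy as np

def freq_change_tone(fs=44100, dur=2.0, change_at=1.0, f0=1000.0,
                     magnitude_oct=0.5, ramp_ms=20.0, direction=+1):
    """Phase-continuous pure tone whose frequency changes by
    `magnitude_oct` octaves (up if direction=+1, down if -1) over a
    linear ramp of `ramp_ms`; shortening the ramp increases the rate
    of the frequency change."""
    n = int(fs * dur)
    t = np.arange(n) / fs
    f1 = f0 * 2 ** (direction * magnitude_oct)
    ramp = np.clip((t - change_at) / (ramp_ms / 1000.0), 0.0, 1.0)
    inst_f = f0 + (f1 - f0) * ramp
    phase = 2 * np.pi * np.cumsum(inst_f) / fs   # integrate for phase continuity
    return np.sin(phase)

y = freq_change_tone()
```

Integrating the instantaneous frequency, rather than switching frequencies abruptly, avoids a phase discontinuity that would itself act as an acoustic change.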
Affiliation(s)
- Bernard M D Vonck: Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands
- Marc J W Lammers: Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands; BC Rotary Hearing and Balance Centre at St. Paul's Hospital, University of British Columbia, Vancouver, British Columbia, Canada
- Marjolijn van der Waals: Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands
- Gijsbert A van Zanten: Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands
- Huib Versnel: Department of Otorhinolaryngology and Head & Neck Surgery, University Medical Center Utrecht, Room G.02.531, P.O. Box 85500, 3508 GA, Utrecht, The Netherlands; UMC Utrecht Brain Center, Utrecht, The Netherlands

41
Xie Z, Reetzke R, Chandrasekaran B. Machine Learning Approaches to Analyze Speech-Evoked Neurophysiological Responses. J Speech Lang Hear Res 2019; 62:587-601. [PMID: 30950746] [PMCID: PMC6802895] [DOI: 10.1044/2018_jslhr-s-astm-18-0244]
Abstract
Purpose Speech-evoked neurophysiological responses are often collected to answer clinically and theoretically driven questions concerning speech and language processing. Here, we highlight the practical application of machine learning (ML)-based approaches to analyzing speech-evoked neurophysiological responses. Method Two categories of ML-based approaches are introduced: decoding models, which generate a speech stimulus output using the features from the neurophysiological responses, and encoding models, which use speech stimulus features to predict neurophysiological responses. In this review, we focus on (a) a decoding model classification approach, wherein speech-evoked neurophysiological responses are classified as belonging to 1 of a finite set of possible speech events (e.g., phonological categories), and (b) an encoding model temporal response function approach, which quantifies the transformation of a speech stimulus feature to continuous neural activity. Results We illustrate the utility of the classification approach to analyze early electroencephalographic (EEG) responses to Mandarin lexical tone categories from a traditional experimental design, and to classify EEG responses to English phonemes evoked by natural continuous speech (i.e., an audiobook) into phonological categories (plosive, fricative, nasal, and vowel). We also demonstrate the utility of temporal response function to predict EEG responses to natural continuous speech from acoustic features. Neural metrics from the 3 examples all exhibit statistically significant effects at the individual level. Conclusion We propose that ML-based approaches can complement traditional analysis approaches to analyze neurophysiological responses to speech signals and provide a deeper understanding of natural speech and language processing using ecologically valid paradigms in both typical and clinical populations.
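The encoding-model temporal response function (TRF) described above can be sketched as ridge regression from time-lagged copies of a stimulus feature (e.g. the acoustic envelope) to a single EEG channel. This is a generic illustration, not the authors' implementation; the lag range and regularization constant are assumptions.

```python
import numpy as np

def fit_trf(stimulus, eeg, fs, tmin=-0.1, tmax=0.4, lam=1.0):
    """Fit a TRF by ridge regression.  Each column of the design matrix
    is the stimulus feature shifted by one lag; positive lags let the
    EEG at time t depend on past stimulus values."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    # np.roll wraps at the edges; acceptable for a sketch, trimmed in practice
    X = np.column_stack([np.roll(stimulus, L) for L in lags])
    # ridge solution: w = (X'X + lam*I)^-1 X'y
    XtX = X.T @ X + lam * np.eye(len(lags))
    w = np.linalg.solve(XtX, X.T @ eeg)
    return lags / fs, w
```

The recovered weight vector `w`, plotted against lag time, is the TRF: a kernel describing how the stimulus feature maps onto continuous neural activity.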
Affiliation(s)
- Zilong Xie: Department of Communication Sciences and Disorders, The University of Texas at Austin
- Rachel Reetzke: Department of Communication Sciences and Disorders, The University of Texas at Austin
- Bharath Chandrasekaran: Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh

42
The Electrically Evoked Auditory Change Complex Evoked by Temporal Gaps Using Cochlear Implants or Auditory Brainstem Implants in Children With Cochlear Nerve Deficiency. Ear Hear 2019; 39:482-494. [PMID: 28968281] [DOI: 10.1097/aud.0000000000000498]
Abstract
OBJECTIVES This study aimed to (1) establish the feasibility of measuring the electrically evoked auditory change complex (eACC) in response to temporal gaps in children with cochlear nerve deficiency (CND) who are using cochlear implants (CIs) and/or auditory brainstem implants (ABIs); and (2) explore the association between neural encoding of, and perceptual sensitivity to, temporal gaps in these patients. DESIGN Study participants included 5 children (S1 to S5) ranging in age from 3.8 to 8.2 years (mean: 6.3 years) at the time of testing. All subjects were unilaterally implanted with a Nucleus 24M ABI due to CND. For each subject, two or more stimulating electrodes of the ABI were tested. S2, S3, and S5 previously received a CI in the contralateral ear. For these 3 subjects, at least two stimulating electrodes of their CIs were also tested. For electrophysiological measures, the stimulus was an 800-msec biphasic pulse train delivered to individual electrodes at the maximum comfortable level (C level). The electrically evoked responses, including the onset response and the eACC, were measured for two stimulation conditions. In the standard condition, the 800-msec pulse train was delivered uninterrupted to individual stimulating electrodes. In the gapped condition, a temporal gap was inserted into the pulse train after 400 msec of stimulation. Gap durations tested in this study ranged from 2 up to 128 msec. The shortest gap that could reliably evoke the eACC was defined as the objective gap detection threshold (GDT). For behavioral GDT measures, the stimulus was a 500-msec biphasic pulse train presented at the C level. The behavioral GDT was measured for individual stimulating electrodes using a one-interval, two-alternative forced-choice procedure. RESULTS The eACCs to temporal gaps were recorded successfully in all subjects for at least one stimulating electrode using either the ABI or the CI. 
Objective GDTs showed intersubject variations, as well as variations across stimulating electrodes of the ABI or the CI within each subject. Behavioral GDTs were measured for one ABI electrode in S2 and for multiple ABI and CI electrodes in S5. All other subjects could not complete the task. S5 showed smaller behavioral GDTs for CI electrodes than those measured for ABI electrodes. One CI and two ABI electrodes in S5 showed comparable objective and behavioral GDTs. In contrast, one CI and two ABI electrodes in S5 and one ABI electrode in S2 showed measurable behavioral GDTs but no identifiable eACCs. CONCLUSIONS The eACCs to temporal gaps were recorded in children with CND using either ABIs or CIs. Both objective and behavioral GDTs showed inter- and intrasubject variations. Consistency between results of eACC recordings and psychophysical measures of GDT was observed for some but not all ABI or CI electrodes in these subjects.
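The objective GDT definition used above (the shortest gap that reliably evokes the eACC) reduces to a simple search once a detection criterion is fixed. The 2x-noise-floor amplitude criterion below is a hypothetical stand-in for the replication-based visual judgement used clinically; it is a sketch, not the study's procedure.

```python
def objective_gdt(gap_durs_ms, eacc_amplitudes_uv, noise_floor_uv):
    """Return the shortest gap (ms) whose eACC amplitude exceeds a
    detection criterion (here, twice the recording's noise floor),
    or None if no gap yields an identifiable eACC."""
    for gap, amp in sorted(zip(gap_durs_ms, eacc_amplitudes_uv)):
        if amp > 2 * noise_floor_uv:
            return gap
    return None  # no identifiable eACC at any gap tested
```

Comparing this objective value against a behavioral GDT from a forced-choice task, electrode by electrode, is the consistency check the study reports.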
43
Morris DJ, Tøndering J, Lindgren M. Electrophysiological and behavioral measures of some speech contrasts in varied attention and noise. Hear Res 2019; 373:1-9. [PMID: 30553033] [DOI: 10.1016/j.heares.2018.12.001]
Abstract
This paper investigates the salience of speech contrasts in noise, in relation to how listening attention affects scalp-recorded cortical responses. The contrasts examined, using consonant-vowel syllables, were place of articulation, vowel length, and voice-onset time (VOT); our analysis focuses on the correspondence between the effect of attention on the electrophysiology and the decrement in behavioral results when noise was added to the stimuli. Normal-hearing subjects (n = 20) performed closed-set syllable identification in no noise and at 0, 4, and 8 dB signal-to-noise ratio (SNR). Identification in noise decreased markedly for place of articulation, moderately for vowel length, and marginally for VOT. The same syllables were used in two electrophysiology conditions, where subjects attended to the stimuli, and also while their attention was diverted to a visual discrimination task. Differences in global field power between the attention conditions for each contrast showed that the effect of attention was negligible for place of articulation. They implied offset encoding of vowel length, and for VOT they were early (starting at 117 ms) and of high amplitude (>3 μV). There were significant correlations between the difference in syllable identification in no noise and at 0 dB SNR and the electrophysiology results between attention conditions for the VOT contrast. Comparison of the two attention conditions with microstate analysis showed a significant difference in the duration of microstate class D. These results show differential integration of attention and syllable processing according to speech contrast, and they suggest that there is correspondence between the salience of a contrast in noise and the effect of attention on the evoked electrical response.
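Global field power (GFP), used above to compare attention conditions, is simply the spatial standard deviation across electrodes at each time point. A minimal sketch, assuming electrode-by-time arrays; the condition-difference helper is an illustration of the comparison described, not the authors' code.

```python
import numpy as np

def global_field_power(erp):
    """Global field power: spatial standard deviation across electrodes
    at each time point (erp shape: electrodes x time)."""
    return erp.std(axis=0, ddof=0)

def gfp_attention_difference(attend, ignore):
    """Difference in GFP between the attended and ignored conditions,
    used here to index the effect of attention for a given contrast."""
    return global_field_power(attend) - global_field_power(ignore)
```

Because GFP is reference-free, it gives a single time course of overall response strength, which makes the attend-versus-ignore comparison straightforward.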
Affiliation(s)
- David Jackson Morris: University of Copenhagen, Department of Nordic Studies and Linguistics, Speech Pathology and Audiology, Emil Holms Kanal 2, 2300, Copenhagen, Denmark; Lund University, Humanities Laboratory, Helgonabacken 12, Lund, 22100, Sweden
- John Tøndering: University of Copenhagen, Department of Nordic Studies and Linguistics, Speech Pathology and Audiology, Emil Holms Kanal 2, 2300, Copenhagen, Denmark
- Magnus Lindgren: Lund University, Department of Psychology, Paradisgatan 5, Lund, 22100, Sweden

44
Age-related differences in Voice-Onset-Time in Polish language users: An ERP study. Acta Psychol (Amst) 2019; 193:18-29. [PMID: 30580059] [DOI: 10.1016/j.actpsy.2018.12.002]
Abstract
Using the Mismatch Negativity (MMN) paradigm, we investigated for the first time cortical responses to consonant-vowel (CV) syllables differing in Voice-Onset-Time (VOT) for Polish, a member of the Slavic group of languages. The study aimed to test age-related effects on different ERP responses in young (20-30 years of age) and elderly (60-68 years) native Polish speakers. Participants were presented with a sequence of voiced and voiceless stop CV syllables /to/ and /do/ with different VOT values (-100 ms, -70 ms, -30 ms, -20 ms, +20 ms, +50 ms). We analyzed the MMN and the P1, N1, N1', P2, and N2 components. Our results showed an age-related decline in voicing perception in all tested ERP components. This decline could be explained by a general slowing in neural processing with advancing age and may be associated with difficulties in temporal- and spectral-information processing in elderly people. Our findings also revealed that specific features of Slavic languages influence ERP morphology in a different way than reported in the literature for aspirating languages.
45
Billings CJ, Madsen BM. A perspective on brain-behavior relationships and effects of age and hearing using speech-in-noise stimuli. Hear Res 2018; 369:90-102. [PMID: 29661615] [PMCID: PMC6636926] [DOI: 10.1016/j.heares.2018.03.024]
Abstract
Understanding speech in background noise is often more difficult for individuals who are older and have hearing impairment than for younger, normal-hearing individuals. In fact, speech-understanding abilities among older individuals with hearing impairment vary greatly. Researchers have hypothesized that some of that variability can be explained by how the brain encodes speech signals in the presence of noise, and that brain measures may be useful for predicting behavioral performance in difficult-to-test patients. In a series of experiments, we have explored the effects of age and hearing impairment in both brain and behavioral domains with the goal of using brain measures to improve our understanding of speech-in-noise difficulties. The behavioral measures examined showed effect sizes for hearing impairment that were 6-10 dB larger than the effects of age when tested in steady-state noise, whereas electrophysiological age effects were similar in magnitude to those of hearing impairment. Both age and hearing status influence neural responses to speech as well as speech understanding in background noise. These effects can in turn be modulated by other factors, such as the characteristics of the background noise itself. Finally, the use of electrophysiology to predict performance on receptive speech-in-noise tasks holds promise, demonstrating root-mean-square prediction errors as small as 1-2 dB. An important next step in this field of inquiry is to sample the aging and hearing impairment variables continuously (rather than categorically), across the whole lifespan and audiogram, to improve effect estimates.
Affiliation(s)
- Curtis J Billings: National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, 3710 SW US Veterans Hospital Road (NCRAR), Portland, OR 97239, USA; Department of Otolaryngology, Oregon Health & Science University, 3181 SW Sam Jackson Park Road, Portland, OR 97239, USA
- Brandon M Madsen: National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, 3710 SW US Veterans Hospital Road (NCRAR), Portland, OR 97239, USA

46
Liang C, Houston LM, Samy RN, Abedelrehim LMI, Zhang F. Cortical Processing of Frequency Changes Reflected by the Acoustic Change Complex in Adult Cochlear Implant Users. Audiol Neurootol 2018; 23:152-164. [PMID: 30300882] [DOI: 10.1159/000492170]
Abstract
The purpose of this study was to examine neural substrates of frequency change detection in cochlear implant (CI) recipients using the acoustic change complex (ACC), a type of cortical auditory evoked potential elicited by acoustic changes in an ongoing stimulus. A psychoacoustic test and electroencephalographic recording were administered in 12 postlingually deafened adult CI users. The stimuli were pure tones containing different magnitudes of upward frequency changes. Results showed that the frequency change detection threshold (FCDT) was 3.79% in the CI users, with a large variability. The ACC N1' latency was significantly correlated with the FCDT and the clinically collected speech perception score. The results suggested that the ACC evoked by frequency changes can serve as a useful objective tool in assessing frequency change detection capability and predicting speech perception performance in CI users.
Affiliation(s)
- Chun Liang: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA; Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China
- Lisa M Houston: Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, USA
- Ravi N Samy: Department of Otolaryngology, Head and Neck Surgery, University of Cincinnati, Cincinnati, Ohio, USA
- Lamiaa Mohamed Ibrahim Abedelrehim: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA; Audiology Department, Sohag Faculty of Medicine, Sohag University, Sohag, Egypt
- Fawen Zhang: Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, Ohio, USA

47
Abstract
The problems concerning the registration of late latency auditory responses to electric stimulation in patients wearing cochlear implants are considered. The renewed interest in this class of evoked potentials is due to unexplained differences in the results of cochlear implantation in patients with similar audiological data, etiology, age, and history of deafness, as well as to cochlear implant surgery in children in the first years of life and the extended possibilities for speech processor programming. It is maintained that the advantages of this method include the possibility of objectively evaluating the ability of the brain to detect and discriminate between different stimulus characteristics, such as loudness differences, temporal changes, or speech tokens. This method is of great clinical significance for the electrophysiological monitoring of brain plasticity and documentation of the clinical effectiveness of different rehabilitation methods. Based on our own experimental and clinical results and the literature data, we consider the application of different electrically evoked late latency potentials for monitoring the dynamics of auditory pathway maturation during electric stimulation, as well as for estimating the effectiveness of cochlear implantation. It is concluded that a longer duration of deafness and later age at implantation result in immature morphology and delayed peak latencies, and that patients with shorter latencies and higher amplitudes have better speech perception. The use of different classes of electrically evoked responses of the auditory cortex could provide objective control of the effectiveness of rehabilitative measures in children following cochlear implantation.
Affiliation(s)
- G A Tavartkiladze: Russian Research Centre for Audiology and Hearing Rehabilitation, Russian Medico-Biological Agency, Moscow, Russia, 117513; Russian Medical Academy of Continuous Professional Education, Ministry of Health of the Russian Federation, Moscow, Russia, 123395

48
Uhler KM, Hunter SK, Tierney E, Gilley PM. The relationship between mismatch response and the acoustic change complex in normal hearing infants. Clin Neurophysiol 2018; 129:1148-1160. [PMID: 29635099] [DOI: 10.1016/j.clinph.2018.02.132]
Abstract
OBJECTIVE To examine the utility of the mismatch response (MMR) and acoustic change complex (ACC) for assessing speech discrimination in infants. METHODS Continuous EEG was recorded during sleep from 48 normal-hearing infants (24 male, 20 female) aged 1.77 to 4.57 months in response to two auditory discrimination tasks. The ACC was recorded in response to a three-vowel sequence (/i/-/a/-/i/). The MMR was recorded in response to a standard vowel, /a/ (probability 85%), and a deviant vowel, /i/ (probability 15%). A priori comparisons included age, sex, and sleep state, and were conducted separately for each of three bandpass filter settings (1-18, 1-30, and 1-40 Hz). RESULTS A priori tests revealed no differences in the MMR or ACC for age, sex, or sleep state at any of the three filter settings. ACC and MMR responses were prominently observed in all 44 sleeping infants (data from four infants were excluded). For the ACC, significant differences were observed at the onset and offset of the stimuli; however, neither group nor individual differences were observed in the ACC to changes in the speech stimuli. The MMR revealed two prominent peaks, one at the stimulus onset and one at the stimulus offset. Permutation t-tests revealed significant differences between the standard and deviant stimuli for both the onset and offset MMR peaks (p < 0.01). The 1-18 Hz filter setting revealed significant differences for all participants in the MMR paradigm. CONCLUSION Both ACC and MMR responses were observed to auditory stimulation, suggesting that infants perceive and process speech information even during sleep. Significant differences between the standard and deviant responses were observed in the MMR, but not the ACC, paradigm. These findings suggest that the MMR is sensitive to auditory/speech discrimination processing. SIGNIFICANCE This paper showed that the MMR can be used to identify speech discrimination in normal-hearing infants, suggesting that the MMR has potential for use in infants with hearing loss to validate hearing aid fittings.
Affiliation(s)
- Kristin M Uhler
- University of Colorado Denver, Departments of Physical Medicine and Rehabilitation, Otolaryngology, and Psychiatry, Children's Hospital Colorado, Aurora, CO, USA.
- Sharon K Hunter
- University of Colorado Denver, Departments of Psychiatry and Pediatrics, Aurora, CO, USA
- Elyse Tierney
- University of Colorado Denver, Departments of Psychiatry and Pediatrics, Aurora, CO, USA
- Phillip M Gilley
- University of Colorado, Boulder, Institute of Cognitive Science, Neurodynamics Laboratory, Boulder, CO, USA
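The Uhler et al. abstract above compares responses under three bandpass filter settings (1-18, 1-30, and 1-40 Hz). As a minimal sketch of that preprocessing step, assuming a zero-phase Butterworth design (the study's exact filter design is not reported in the abstract):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_eeg(signal, fs, low=1.0, high=18.0, order=4):
    """Zero-phase Butterworth band-pass, illustrative only; defaults
    correspond to the 1-18 Hz setting mentioned in the abstract."""
    nyq = fs / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, signal)  # forward-backward pass: no phase shift

# Synthetic check: a 5 Hz component (in band) survives, while a
# 60 Hz component (out of band, e.g. mains noise) is attenuated.
fs = 1000
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + np.sin(2 * np.pi * 60 * t)
y = bandpass_eeg(x, fs)
```

The other two settings from the abstract follow by passing `high=30.0` or `high=40.0`.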
|
49
|
Tan CT, Martin BA, Svirsky MA. A potential neurophysiological correlate of electric-acoustic pitch matching in adult cochlear implant users: Pilot data. Cochlear Implants Int 2018; 19:198-209. [PMID: 29508662 DOI: 10.1080/14670100.2018.1442126] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
The overall goal of this study was to identify an objective physiological correlate of electric-acoustic pitch matching in unilaterally implanted cochlear implant (CI) participants with residual hearing in the non-implanted ear. Electrical and acoustic stimuli were presented in a continuously alternating fashion across ears. The acoustic stimulus and the electrical stimulus were either matched or mismatched in pitch. Auditory evoked potentials were obtained from nine CI users. Results indicated that N1 latency was stimulus-dependent, decreasing when the acoustic frequency of the tone presented to the non-implanted ear was increased. More importantly, there was an additional decrease in N1 latency in the pitch-matched condition. These results indicate the potential utility of N1 latency as an index of pitch matching in CI users.
Affiliation(s)
- Chin-Tuan Tan
- Department of Electrical and Computer Engineering, School of Behavioral and Brain Science (Callier Center for Communication Disorders), University of Texas at Dallas, Richardson, TX, USA; Program in Speech-Language-Hearing Sciences and Program in Audiology, Graduate Center, City University of New York, New York, NY, USA
- Brett A Martin
- Program in Speech-Language-Hearing Sciences and Program in Audiology, Graduate Center, City University of New York, New York, NY, USA
- Mario A Svirsky
- Department of Otolaryngology, New York University, New York, NY, USA
|
50
|
Kang S, Woo J, Park H, Brown CJ, Hong SH, Moon IJ. Objective Test of Cochlear Dead Region: Electrophysiologic Approach using Acoustic Change Complex. Sci Rep 2018; 8:3645. [PMID: 29483598 PMCID: PMC5832147 DOI: 10.1038/s41598-018-21754-7] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2017] [Accepted: 02/09/2018] [Indexed: 11/09/2022] Open
Abstract
The goal of this study was to develop an objective, neurophysiologic method of identifying the presence of a cochlear dead region (CDR) by combining acoustic change complex (ACC) responses with the threshold-equalizing noise (TEN) test. The first study aimed to confirm whether the ACC could be evoked with TEN stimuli and to optimize the test conditions. The second study aimed to determine whether the TEN-ACC test is capable of detecting CDR(s). ACC responses were successfully recorded from all study participants. Behaviorally and electrophysiologically obtained masked thresholds (TEN threshold and TEN-ACC threshold) were similar in normal-hearing (NH) listeners, falling below 10 and 12 dB SNR, respectively. Hearing-impaired (HI) listeners were divided into non-CDR and CDR groups based on the behavioral TEN test. For the non-CDR group, TEN-ACC thresholds were below 12 dB SNR, similar to NH listeners. For the CDR group, however, TEN-ACC thresholds were significantly higher (≥12 dB SNR) than those in the NH and non-CDR groups, indicating that CDR(s) can be objectively detected using the ACC. Results of this study demonstrate that it is possible to detect the presence of a CDR using an electrophysiologic method.
Affiliation(s)
- Soojin Kang
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea; School of Electrical Engineering, Biomedical Engineering, University of Ulsan, Ulsan, Korea
- Jihwan Woo
- School of Electrical Engineering, Biomedical Engineering, University of Ulsan, Ulsan, Korea
- Heesung Park
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
- Carolyn J Brown
- Departments of Speech Pathology and Audiology, University of Iowa, Iowa City, Iowa, USA
- Sung Hwa Hong
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Changwon Hospital, Sungkyunkwan University School of Medicine, Changwon, Korea
- Il Joon Moon
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul, Korea
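The Kang et al. abstract above reports a simple decision criterion: TEN-ACC thresholds at or above 12 dB SNR identified the CDR group, while NH and non-CDR thresholds fell below it. That rule can be sketched as a one-line classifier; the function name and structure here are illustrative, not taken from the study:

```python
def classify_cdr(ten_acc_threshold_db_snr, criterion_db_snr=12.0):
    """Flag a possible cochlear dead region when the electrophysiologic
    TEN-ACC threshold meets or exceeds the criterion (dB SNR) reported
    in the abstract. Illustrative sketch only, not the study's code."""
    return ten_acc_threshold_db_snr >= criterion_db_snr

# Thresholds below 12 dB SNR (as seen in NH and non-CDR ears) are
# not flagged; thresholds at or above 12 dB SNR are.
```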
|