1
Jeon EK, Driscoll V, Mussoi BS, Scheperle R, Guthe E, Gfeller K, Abbas PJ, Brown CJ. Evaluating Changes in Adult Cochlear Implant Users' Brain and Behavior Following Auditory Training. Ear Hear 2024:00003446-990000000-00316. [PMID: 39044323 DOI: 10.1097/aud.0000000000001569]
Abstract
OBJECTIVES To describe the effects of two types of auditory training on both behavioral and physiological measures of auditory function in cochlear implant (CI) users, and to examine whether a relationship exists between the behavioral and objective outcome measures. DESIGN This study involved two experiments, both of which used a within-subject design. Outcome measures included behavioral and cortical electrophysiological measures of auditory processing. In Experiment I, 8 CI users participated in a music-based auditory training program. The program included short training sessions completed in the laboratory as well as a set of 12 training sessions that participants completed at home over the course of a month. As part of the training, study participants listened to a range of different musical stimuli and were asked to discriminate stimuli that differed in pitch or timbre and to identify melodic changes. Performance was assessed before training and at three intervals during and after training. In Experiment II, 20 CI users participated in a more focused auditory training task: the detection of spectral ripple modulation depth. Training consisted of a single 40-minute session that took place in the laboratory under the supervision of the investigators. Behavioral and physiological measures of spectral ripple modulation depth detection were obtained immediately pre- and post-training. Data from both experiments were analyzed using mixed linear regressions, paired t tests, correlations, and descriptive statistics. RESULTS In Experiment I, there was a significant improvement in behavioral measures of pitch discrimination after the study participants completed the laboratory and home-based training sessions. There was no significant effect of training on electrophysiological measures of the auditory N1-P2 onset response and acoustic change complex (ACC), and there were no significant relationships between electrophysiological measures and behavioral outcomes after the month-long training. In Experiment II, there was no significant effect of training on the ACC, although there was a small but significant improvement in behavioral spectral ripple modulation depth thresholds after the short-term training. CONCLUSIONS This study demonstrates that auditory training can improve spectral cue perception in CI users: significant perceptual gains were observed even though cortical electrophysiological responses such as the ACC did not reliably predict training benefit across the short- and long-term interventions. Future research should further explore individual factors that may lead to greater benefit from auditory training, optimize training protocols and outcome measures, and demonstrate the generalizability of these findings.
Affiliation(s)
- Eun Kyung Jeon
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Virginia Driscoll
- Department of Music Education and Therapy, East Carolina University, Greenville, North Carolina, USA
- Bruna S Mussoi
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee, USA
- Rachel Scheperle
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Emily Guthe
- Department of Music Therapy, Cleveland State University, Cleveland, Ohio, USA
- Kate Gfeller
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Paul J Abbas
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Carolyn J Brown
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
2
Meehan S, Adank ML, van der Schroeff MP, Vroegop JL. A systematic review of acoustic change complex (ACC) measurements and applicability in children for the assessment of the neural capacity for sound and speech discrimination. Hear Res 2024; 451:109090. [PMID: 39047579 DOI: 10.1016/j.heares.2024.109090]
Abstract
OBJECTIVE The acoustic change complex (ACC) is a cortical auditory evoked potential (CAEP) that can be elicited by a change in an otherwise continuous sound. The ACC has been highlighted as a promising tool in the assessment of sound and speech discrimination capacity, particularly for difficult-to-test populations such as infants with hearing loss, due to the objective nature of ACC measurements. Indeed, there is a pressing need to develop further means to accurately and thoroughly establish the hearing status of children with hearing loss, to help guide hearing interventions in a timely manner. Despite the potential of the ACC method, ACC measurements remain relatively rare in standard clinical settings. The objective of this study was to perform an up-to-date systematic review of ACC measurements in children, to provide greater clarity and consensus on the possible methodologies, applications, and performance of this technique, and to facilitate its uptake in relevant clinical settings. DESIGN Original peer-reviewed articles conducting ACC measurements in children (< 18 years) were included. Data were extracted and summarised for: (1) participant characteristics; (2) ACC methods and auditory stimuli; (3) information related to the performance of the ACC technique; and (4) ACC measurement outcomes, advantages, and challenges. The systematic review followed PRISMA reporting guidelines, and the methodological quality of included articles was assessed. RESULTS A total of 28 studies were identified (9 infant studies). Review results show that ACC responses can be measured in infants (from < 3 months of age), and there is evidence of age-dependency, including increased robustness of the ACC response with increasing childhood age. Clinical applications include the measurement of the neural capacity for speech and non-speech sound discrimination in children with hearing loss, auditory neuropathy spectrum disorder (ANSD), and central auditory processing disorder (CAPD). Additionally, ACCs can be recorded in children with hearing aids, auditory brainstem implants, and cochlear implants, and ACC results may guide hearing intervention/rehabilitation strategies. The review identified that the time taken to perform ACC measurements was often lengthy; the development of more efficient ACC test procedures for children would be beneficial. Comparisons between objective ACC measurements and behavioural measures of sound discrimination showed significant correlations for some, but not all, included studies. CONCLUSIONS ACC measurements of the neural capacity to discriminate between speech and non-speech sounds are feasible in infants and children, and a wide range of possible clinical applications exists, although more time-efficient procedures would be advantageous for clinical uptake. A consideration of age and maturational effects is recommended, and further research is required to investigate the relationship between objective ACC measures and behavioural measures of sound and speech perception for effective clinical implementation.
Affiliation(s)
- Sarah Meehan
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands.
- Marloes L Adank
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
- Marc P van der Schroeff
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
- Jantien L Vroegop
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus Medical Center, Rotterdam, the Netherlands
3
Kries J, De Clercq P, Gillis M, Vanthornhout J, Lemmens R, Francart T, Vandermosten M. Exploring neural tracking of acoustic and linguistic speech representations in individuals with post-stroke aphasia. Hum Brain Mapp 2024; 45:e26676. [PMID: 38798131 PMCID: PMC11128780 DOI: 10.1002/hbm.26676]
Abstract
Aphasia is a communication disorder that affects processing of language at different levels (e.g., acoustic, phonological, semantic). Recording brain activity via electroencephalography while people listen to a continuous story allows analysis of brain responses to the acoustic and linguistic properties of speech. When the neural activity aligns with these speech properties, it is referred to as neural tracking. Even though measuring neural tracking of speech may present an interesting approach to studying aphasia in an ecologically valid way, it has not yet been investigated in individuals with stroke-induced aphasia. Here, we explored processing of acoustic and linguistic speech representations in individuals with aphasia in the chronic phase after stroke and in age-matched healthy controls. We found decreased neural tracking of acoustic speech representations (envelope and envelope onsets) in individuals with aphasia. In addition, word surprisal displayed decreased amplitudes in individuals with aphasia around 195 ms over frontal electrodes, although this effect was not corrected for multiple comparisons. These results show that there is potential to capture language processing impairments in individuals with aphasia by measuring neural tracking of continuous speech. However, more research is needed to validate these results. Nonetheless, this exploratory study shows that neural tracking of naturalistic, continuous speech presents a powerful approach to studying aphasia.
Affiliation(s)
- Jill Kries
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Department of Psychology, Stanford University, Stanford, California, USA
- Pieter De Clercq
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Marlies Gillis
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Jonas Vanthornhout
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Robin Lemmens
- Experimental Neurology, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Laboratory of Neurobiology, VIB-KU Leuven Center for Brain and Disease Research, Leuven, Belgium
- Department of Neurology, University Hospitals Leuven, Leuven, Belgium
- Tom Francart
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Maaike Vandermosten
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
4
Hu H, Ewert SD, Kollmeier B, Vickers D. Rate dependent neural responses of interaural-time-difference cues in fine-structure and envelope. PeerJ 2024; 12:e17104. [PMID: 38680894 PMCID: PMC11055513 DOI: 10.7717/peerj.17104]
Abstract
Advancements in cochlear implants (CIs) have led to a significant increase in bilateral CI users, especially among children. Yet, most bilateral CI users do not fully achieve the intended binaural benefit due to potential limitations in signal processing and/or surgical implant positioning. One crucial auditory cue that normal hearing (NH) listeners can benefit from is the interaural time difference (ITD), i.e., the time difference between the arrival of a sound at the two ears. ITD sensitivity is thought to rely heavily on the effective utilization of temporal fine structure (very rapid oscillations in sound). Unfortunately, most current CIs do not transmit such true fine structure. Nevertheless, bilateral CI users have demonstrated sensitivity to ITD cues delivered through the envelope or through interaural pulse time differences, i.e., the time gap between the pulses delivered to the two implants. However, their ITD sensitivity is significantly poorer compared to NH individuals, and it further degrades at higher CI stimulation rates, especially when the rate exceeds 300 pulses per second. The overall purpose of this research thread is to improve spatial hearing abilities in bilateral CI users. This study aims to develop electroencephalography (EEG) paradigms that can be used in clinical settings to assess and optimize the delivery of ITD cues, which are crucial for spatial hearing in everyday life. The research objective of this article was to determine the effect of CI stimulation pulse rate on ITD sensitivity, and to characterize the rate-dependent degradation in ITD perception using EEG measures. To develop protocols for bilateral CI studies, EEG responses were obtained from NH listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine-structure ITD (ITDFS) or envelope ITD (ITDENV). Multiple EEG responses were analyzed, including the subcortical auditory steady-state responses (ASSRs) and the cortical auditory evoked potentials (CAEPs) elicited by stimulus onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by ITDENV changes were significantly smaller than, or absent relative to, those elicited by ITDFS changes. The ACC morphologies evoked by ITDFS changes were similar to onset and offset CAEPs, although the peak latencies were longest for ACC responses and shortest for offset CAEPs. The high-frequency stimuli clearly elicited subcortical ASSRs, which were smaller than those evoked by lower-carrier-frequency SAM tones. The 40-Hz ASSRs decreased with increasing carrier frequency. Filtered clicks elicited larger ASSRs than high-frequency SAM tones, with the order 40 > 160 > 80 > 320 Hz ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz ASSRs in the time-frequency domain for SAM tones with a low carrier frequency.
Affiliation(s)
- Hongmei Hu
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neuroscience, Cambridge University, Cambridge, United Kingdom
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Stephan D. Ewert
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Birger Kollmeier
- Department of Medical Physics and Acoustics, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Deborah Vickers
- SOUND Lab, Cambridge Hearing Group, Department of Clinical Neuroscience, Cambridge University, Cambridge, United Kingdom
5
Barrozo TF, Silva LAF, Matas CG, Wertzner HF. The Relationship between Speech Sound Disorder and Cortical Auditory Evoked Potential. Folia Phoniatr Logop 2024:1-15. [PMID: 38615664 DOI: 10.1159/000538849]
Abstract
INTRODUCTION Speech sound disorder (SSD) is a speech and language disorder associated with difficulties in motor production, perception, and phonological representation of sounds and speech segments. Since auditory perception has a fundamental role in forming and organizing sound representations for their recognition, studies that evaluate the cortical processing of sounds are required. Thus, the present study aimed to verify the relationship between SSD severity, measured by the percentage of consonants correct (PCC), and cortical auditory evoked potentials (CAEPs) elicited by a speech stimulus. METHODS Twenty-nine children with normal hearing participated in this research and were divided into three groups by SSD severity as measured by the PCC index. In addition, the groups were subdivided according to the children's age: 60-71 months, 72-83 months, and 83-94 months. CAEPs with a speech stimulus were recorded in all children. RESULTS Older children had longer P1 and N1 latencies. P2 latency was affected by age only in the severe group. N2 latency was also affected by age, with older children showing longer latencies. CONCLUSION CAEP amplitudes were not affected by age or SSD severity. For latency, older children generally presented longer averages than younger ones.
Affiliation(s)
- Tatiane Faria Barrozo
- Department of Physiotherapy, Audiology and Speech Therapy, and Occupational Therapy, University of São Paulo, São Paulo, Brazil
- Carla Gentile Matas
- Department of Physiotherapy, Audiology and Speech Therapy, and Occupational Therapy, University of São Paulo, São Paulo, Brazil
- Haydée Fiszbein Wertzner
- Department of Physiotherapy, Audiology and Speech Therapy, and Occupational Therapy, University of São Paulo, São Paulo, Brazil
6
Brilliant, Yaar-Soffer Y, Herrmann CS, Henkin Y, Kral A. Theta and alpha oscillatory signatures of auditory sensory and cognitive loads during complex listening. Neuroimage 2024; 289:120546. [PMID: 38387743 DOI: 10.1016/j.neuroimage.2024.120546]
Abstract
The neuronal signatures of sensory and cognitive load provide access to brain activities related to complex listening situations. Sensory and cognitive loads are typically reflected in measures like response time (RT) and event-related potential (ERP) components. It is, however, difficult to distinguish the underlying brain processes solely from these measures. In this study, along with RT and ERP analyses, we performed time-frequency analysis and source localization of oscillatory activity in participants performing two auditory tasks with varying degrees of complexity, and related the results to sensory and cognitive load. We studied neuronal oscillatory activity in the periods both before the behavioral response (pre-response) and after it (post-response). Robust oscillatory activities were found in both periods and were differentially affected by sensory and cognitive load. Oscillatory activity under sensory load was characterized by a decrease in pre-response (early) theta activity and an increase in alpha activity. Oscillatory activity under cognitive load was characterized by increased theta activity, mainly in the post-response (late) period. Furthermore, source localization revealed specific brain regions involved in processing these loads, such as the temporal and frontal lobes, cingulate cortex, and precuneus. The results provide evidence that in complex listening situations, the brain processes sensory and cognitive loads differently. These neural processes have specific oscillatory signatures and are long-lasting, extending beyond the behavioral response.
Affiliation(s)
- Brilliant
- Department of Experimental Otology, Hannover Medical School, 30625 Hannover, Germany.
- Y Yaar-Soffer
- Department of Communication Disorder, Tel Aviv University, 5262657 Tel Aviv, Israel; Hearing, Speech and Language Center, Sheba Medical Center, 5265601 Tel Hashomer, Israel
- C S Herrmann
- Experimental Psychology Division, University of Oldenburg, 26111 Oldenburg, Germany
- Y Henkin
- Department of Communication Disorder, Tel Aviv University, 5262657 Tel Aviv, Israel; Hearing, Speech and Language Center, Sheba Medical Center, 5265601 Tel Hashomer, Israel
- A Kral
- Department of Experimental Otology, Hannover Medical School, 30625 Hannover, Germany
7
Bramhall NF, McMillan GP. Perceptual Consequences of Cochlear Deafferentation in Humans. Trends Hear 2024; 28:23312165241239541. [PMID: 38738337 DOI: 10.1177/23312165241239541]
Abstract
Cochlear synaptopathy, a form of cochlear deafferentation, has been demonstrated in a number of animal species, including non-human primates. Both age and noise exposure contribute to synaptopathy in animal models, indicating that it may be a common type of auditory dysfunction in humans. Temporal bone and auditory physiological data suggest that age and occupational/military noise exposure also lead to synaptopathy in humans. The predicted perceptual consequences of synaptopathy include tinnitus, hyperacusis, and difficulty with speech-in-noise perception. However, confirming the perceptual impacts of this form of cochlear deafferentation presents a particular challenge because synaptopathy can only be confirmed through post-mortem temporal bone analysis and auditory perception is difficult to evaluate in animals. Animal data suggest that deafferentation leads to increased central gain, signs of tinnitus and abnormal loudness perception, and deficits in temporal processing and signal-in-noise detection. If equivalent changes occur in humans following deafferentation, this would be expected to increase the likelihood of developing tinnitus, hyperacusis, and difficulty with speech-in-noise perception. Physiological data from humans are consistent with the hypothesis that deafferentation is associated with increased central gain and a greater likelihood of tinnitus perception, while human data on the relationship between deafferentation and hyperacusis are extremely limited. Many human studies have investigated the relationship between physiological correlates of deafferentation and difficulty with speech-in-noise perception, with mixed findings. A non-linear relationship between deafferentation and speech perception may have contributed to the mixed results. When differences in sample characteristics and study measurements are considered, the findings may be more consistent.
Affiliation(s)
- Naomi F Bramhall
- VA National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, USA
- Department of Otolaryngology/Head & Neck Surgery, Oregon Health & Science University, Portland, OR, USA
- Garnett P McMillan
- VA National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, Portland, OR, USA
8
Bramhall NF, Theodoroff SM, McMillan GP, Kampel SD, Buran BN. Associations Between Physiological Correlates of Cochlear Synaptopathy and Tinnitus in a Veteran Population. J Speech Lang Hear Res 2023; 66:4635-4652. [PMID: 37889209 DOI: 10.1044/2023_jslhr-23-00234]
Abstract
PURPOSE Animal models and human temporal bones indicate that noise exposure is a risk factor for cochlear synaptopathy, a possible etiology of tinnitus. Veterans are exposed to high levels of noise during military service. Therefore, synaptopathy may explain the high rates of noise-induced tinnitus among Veterans. Although synaptopathy cannot be directly evaluated in living humans, animal models indicate that several physiological measures are sensitive to synapse loss, including the auditory brainstem response (ABR), the middle ear muscle reflex (MEMR), and the envelope following response (EFR). The purpose of this study was to determine whether tinnitus is associated with reductions in physiological correlates of synaptopathy that parallel animal studies. METHOD Participants with normal audiograms were grouped according to Veteran status and tinnitus report (Veterans with tinnitus, Veterans without tinnitus, and non-Veteran controls). The effects of being a Veteran with tinnitus on ABR, MEMR, and EFR measurements were independently modeled using Bayesian regression analysis. RESULTS Modeled point estimates of MEMR and EFR magnitude showed reductions for Veterans with tinnitus compared with non-Veterans, with the most evident reduction observed for the EFR. Two different approaches were used to provide context for the Veteran tinnitus effect on the EFR by comparing to age-related reductions in EFR magnitude and synapse numbers observed in previous studies. These analyses suggested that EFR magnitude/synapse counts were reduced in Veterans with tinnitus by roughly the same amount as over 20 years of aging. CONCLUSION These findings suggest that cochlear synaptopathy may contribute to tinnitus perception in noise-exposed Veterans. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24347761.
Affiliation(s)
- Naomi F Bramhall
- VA RR&D National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, OR
- Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland
- Sarah M Theodoroff
- VA RR&D National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, OR
- Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland
- Garnett P McMillan
- VA RR&D National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, OR
- Sean D Kampel
- VA RR&D National Center for Rehabilitative Auditory Research, Veterans Affairs Portland Health Care System, OR
- Brad N Buran
- Department of Otolaryngology-Head & Neck Surgery, Oregon Health & Science University, Portland
9
Oliveira YMD, Calderaro VG, Massuda ET, Zanchetta S, Simões HDO. Does the Number of Stimuli Influence the Formation of the Endogenous Components of the Event-Related Auditory Evoked Potentials? Int Arch Otorhinolaryngol 2023; 27:e636-e644. [PMID: 37876687 PMCID: PMC10593534 DOI: 10.1055/s-0042-1759605]
Abstract
Introduction The number of stimuli is important in determining the quality of auditory evoked potential recordings. However, there is no consensus across studies on that number, especially for the population studied here. Objectives To investigate the influence of the number of rare stimuli on the formation of the N2 and P3 components, with different types of acoustic stimuli. Methods Cross-sectional, descriptive, comparative study, approved by the institution's ethics committee. The sample comprised 20 normal-hearing adults of both sexes, aged 18 to 29 years, with normal scores on the mental state examination and in auditory processing skills. The event-related auditory evoked potentials were recorded with nonverbal (1 kHz versus 2 kHz) and verbal stimuli (/BA/ versus /DA/). The number of rare stimuli varied randomly across recordings, with 10, 20, 30, 40, and 50 presentations. Results P3 latency was significantly longer for nonverbal stimuli with 50 rare stimuli. N2 latency did not differ between stimulus types or numbers. The absolute P3 and N2-P3 amplitudes showed significant differences for both types of stimuli, with higher amplitude for 10 rare stimuli than for the other counts. The linear tendency test indicated significance only for amplitude: as the number of rare stimuli increased, the amplitude tended to decrease. Conclusion The components were identifiable across the different numbers and types of rare stimuli. P3 and N2-P3 latency and amplitude increased with fewer verbal and nonverbal stimuli. Recording protocols must consider the number of rare stimuli.
Affiliation(s)
- Yorran Marques de Oliveira
- Speech-Language Pathology and Audiology Division, Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Ciências da Saúde, Ribeirão Preto, SP, Brazil
- Victor Goiris Calderaro
- Speech-Language Pathology and Audiology Division, Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Ciências da Saúde, Ribeirão Preto, SP, Brazil
- Eduardo Tanaka Massuda
- Department of Ophthalmology, Otolaryngology and Head and Neck Surgery, Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Ribeirão Preto, SP, Brazil
- Sthella Zanchetta
- Speech-Language Pathology and Audiology Division, Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Ciências da Saúde, Ribeirão Preto, SP, Brazil
- Humberto de Oliveira Simões
- Speech-Language Pathology and Audiology Sector, Universidade de São Paulo, Hospital das Clínicas da Faculdade de Medicina de Ribeirão Preto, Ribeirão Preto, SP, Brazil
10
Berger JI, Gander PE, Kim S, Schwalje AT, Woo J, Na YM, Holmes A, Hong JM, Dunn CC, Hansen MR, Gantz BJ, McMurray B, Griffiths TD, Choi I. Neural Correlates of Individual Differences in Speech-in-Noise Performance in a Large Cohort of Cochlear Implant Users. Ear Hear 2023; 44:1107-1120. [PMID: 37144890 PMCID: PMC10426791 DOI: 10.1097/aud.0000000000001357]
Abstract
OBJECTIVES Understanding speech-in-noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN. This cannot be explained by simple peripheral hearing profiles, but recent work by our group ( Kim et al. 2021 , Neuroimage ) highlighted central neural factors underlying the variance in SiN ability in normal hearing (NH) subjects. The present study examined neural predictors of SiN ability in a large cohort of cochlear-implant (CI) users. DESIGN We recorded electroencephalography in 114 postlingually deafened CI users while they completed the California consonant test: a word-in-noise task. In many subjects, data were also collected on two other commonly used clinical measures of speech perception: a word-in-quiet task (consonant-nucleus-consonant) word and a sentence-in-noise task (AzBio sentences). Neural activity was assessed at a vertex electrode (Cz), which could help maximize eventual generalizability to clinical situations. The N1-P2 complex of event-related potentials (ERPs) at this location were included in multiple linear regression analyses, along with several other demographic and hearing factors as predictors of SiN performance. RESULTS In general, there was a good agreement between the scores on the three speech perception tasks. ERP amplitudes did not predict AzBio performance, which was predicted by the duration of device use, low-frequency hearing thresholds, and age. However, ERP amplitudes were strong predictors for performance for both word recognition tasks: the California consonant test (which was conducted simultaneously with electroencephalography recording) and the consonant-nucleus-consonant (conducted offline). These correlations held even after accounting for known predictors of performance including residual low-frequency hearing thresholds. 
In CI users, better performance was predicted by an increased cortical response to the target word, in contrast to previous reports in normal-hearing subjects, in whom speech perception ability was accounted for by the ability to suppress noise. CONCLUSIONS These data indicate a neurophysiological correlate of SiN performance, revealing a richer profile of an individual's hearing performance than psychoacoustic measures alone can show. The results also highlight important differences between sentence and word recognition measures of performance and suggest that individual differences in these measures may be underwritten by different mechanisms. Finally, the contrast with prior reports of NH listeners on the same task suggests that CI users' performance may be explained by a different weighting of neural processes than in NH listeners.
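The regression approach described in this abstract, entering ERP amplitude alongside demographic and hearing factors as predictors of a speech score, can be sketched on synthetic data (all names, weights, and data below are invented for illustration, not the study's):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 114                                    # cohort size mentioned in the abstract
erp = rng.normal(size=n)                   # synthetic N1-P2 amplitude (standardized)
age = rng.normal(size=n)                   # synthetic, standardized age
dur = rng.normal(size=n)                   # synthetic duration of device use

# Toy word-in-noise score: driven by the ERP and duration of use, plus noise
score = 0.6 * erp + 0.2 * dur + rng.normal(scale=0.5, size=n)

# Multiple linear regression: intercept + three predictors
X = np.column_stack([np.ones(n), erp, age, dur])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
# beta[1] estimates the ERP weight; with n = 114 it lands near the true 0.6
```

With standardized predictors, the fitted weights can be compared directly, which is one way such analyses separate the ERP contribution from demographic factors.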
Affiliation(s)
- Joel I. Berger: Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Phillip E. Gander: Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Subong Kim: Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, Indiana, USA
- Adam T. Schwalje: Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Jihwan Woo: Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Young-min Na: Department of Biomedical Engineering, University of Ulsan, Ulsan, South Korea
- Ann Holmes: Department of Psychological and Brain Sciences, University of Louisville, Louisville, Kentucky, USA
- Jean M. Hong: Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Camille C. Dunn: Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Marlan R. Hansen: Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bruce J. Gantz: Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA
- Bob McMurray: Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Timothy D. Griffiths: Biosciences Institute, Newcastle University, Newcastle upon Tyne, United Kingdom
- Inyong Choi: Department of Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, Iowa, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
11
Kallioinen P, Olofsson JK, von Mentzer CN. Semantic processing in children with Cochlear Implants: A review of current N400 studies and recommendations for future research. Biol Psychol 2023; 182:108655. PMID: 37541539; DOI: 10.1016/j.biopsycho.2023.108655. Received 01/25/2023; revised 07/28/2023; accepted 08/01/2023.
Abstract
Deaf and hard-of-hearing children with cochlear implants (CI) often display impaired spoken language skills. While a large number of studies have investigated brain responses to sounds in this population, relatively few have focused on semantic processing. Here we summarize and discuss the findings of four studies of the N400, a cortical response that reflects semantic processing, in children with CI. A study with auditory target stimuli found N400 effects at delayed latencies 12 months after implantation, but at 18 and 24 months after implantation the effects had typical latencies. In studies with visual target stimuli, N400 effects in children with CI were larger than or similar to those in controls, despite the children's lower semantic abilities. We propose that in children with CI, the observed large N400 effect reflects a stronger reliance on top-down predictions relative to bottom-up language processing. Recent behavioral studies of children and adults with CI suggest that top-down processing is a common compensatory strategy, but one with distinct limitations, such as being effortful. A majority of the studies have small sample sizes (N < 20), and only responses to image targets were studied repeatedly in similar paradigms. This precludes strong conclusions. We give suggestions for future research and for ways to overcome the scarcity of participants, including extending research to children with conventional hearing aids, an understudied group.
Affiliation(s)
- Petter Kallioinen: Department of Linguistics, Stockholm University, Stockholm, Sweden; Lund University Cognitive Science, Lund University, Lund, Sweden
- Jonas K Olofsson: Department of Psychology, Stockholm University, Stockholm, Sweden
12
Amaral MSAD, Zamberlan-Amorin NE, Mendes KDS, Bernal SC, Massuda ET, Hyppolito MA, Reis ACMB. The P300 Auditory Evoked Potential in Cochlear Implant Users: A Scoping Review. Int Arch Otorhinolaryngol 2023; 27:e518-e527. PMID: 37564465; PMCID: PMC10411132; DOI: 10.1055/s-0042-1744172. Received 04/11/2021; accepted 01/23/2022.
Abstract
Introduction The P300 auditory evoked potential is a long-latency cortical potential evoked by auditory stimulation that provides information on the neural mechanisms underlying central auditory processing. Objectives To identify and gather scientific evidence regarding the P300 in adult cochlear implant (CI) users. Data Synthesis A total of 87 articles were identified and exported to the Rayyan review software, 20 of which were selected for this study. Those 20 articles did not follow a homogeneous methodology, which made comparison difficult. Most articles (60%) in this review compare CI users with typical-hearing people, showing prolonged P300 latency in CI users. Among the studies, 35% show that CI users present a smaller P300 amplitude. Another variable is the kind of stimulus used to elicit the P300: latency was prolonged in 30% of the studies that used pure-tone stimuli, 10% of the studies that used pure-tone and speech stimuli, and 60% of the studies that used speech stimuli. Conclusion This review contributes evidence showing the importance of applying a controlled P300 protocol to diagnose and monitor CI users. Regardless of the stimuli used to elicit the P300, we noticed a pattern of increased latency and decreased amplitude in CI users. The user's experience with the CI speech processor over time and the speech test results seem to be related to the P300 latency and amplitude measurements.
Affiliation(s)
- Maria Stella Arantes do Amaral: Department of Ophthalmology, Otorhinolaryngology, and Head and Neck Surgery, Hospital das Clínicas, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, São Paulo, SP, Brazil
- Nelma Ellen Zamberlan-Amorin: Centro Especializado de Otorrinolaringologia e Fonoaudiologia (CEOF), Hospital das Clínicas, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, São Paulo, Brazil
- Karina Dal Sasso Mendes: Department of General and Specialized Nursing, Faculdade de Enfermagem de Ribeirão Preto, Universidade de São Paulo, São Paulo, Brazil
- Sarah Carolina Bernal: Health Sciences Department, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, São Paulo, Brazil
- Eduardo Tanaka Massuda: Department of Ophthalmology, Otorhinolaryngology, and Head and Neck Surgery, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, São Paulo, Brazil
- Miguel Angelo Hyppolito: Department of Ophthalmology, Otorhinolaryngology, and Head and Neck Surgery, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, São Paulo, Brazil
13
Yaar-Soffer Y, Kaplan-Neeman R, Greenbom T, Habiballah S, Shapira Y, Henkin Y. A cortical biomarker of audibility and processing efficacy in children with single-sided deafness using a cochlear implant. Sci Rep 2023; 13:3533. PMID: 36864095; PMCID: PMC9981742; DOI: 10.1038/s41598-023-30399-0. Received 10/24/2022; accepted 02/22/2023.
Abstract
The goals of the current study were to evaluate audibility and cortical speech processing, and to provide insight into binaural processing, in children with single-sided deafness (CHwSSD) using a cochlear implant (CI). The P1 potential to acoustically presented speech stimuli (/m/, /g/, /t/) was recorded during monaural [normal hearing (NH), CI] and bilateral (BIL, NH + CI) listening conditions within a clinical setting in 22 CHwSSD (mean age at implantation 4.7 years; at testing 5.7 years). Robust P1 potentials were elicited in all children in the NH and BIL conditions. In the CI condition: (1) P1 prevalence was reduced, yet a P1 was elicited in all but one child to at least one stimulus; (2) P1 latency was prolonged and amplitude was reduced, consequently leading to an absence of binaural processing manifestations; (3) the correlation between P1 latency and age at implantation/testing was weak and not significant; (4) P1 prevalence for /m/ was reduced and associated with CI manufacturer and duration of CI use. Results indicate that recording CAEPs to speech stimuli in clinical settings is feasible and valuable for the management of CHwSSD. While CAEPs provided evidence for effective audibility, a substantial mismatch in the timing and synchrony of early-stage cortical processing between the CI and NH ears remains a barrier to the development of binaural interaction components.
Affiliation(s)
- Y. Yaar-Soffer: Hearing, Speech, and Language Center, Sheba Medical Center, Tel Hashomer, Ramat Gan, Israel; Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- R. Kaplan-Neeman: Hearing, Speech, and Language Center, Sheba Medical Center, Tel Hashomer, Ramat Gan, Israel; Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- T. Greenbom: Hearing, Speech, and Language Center, Sheba Medical Center, Tel Hashomer, Ramat Gan, Israel; Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- S. Habiballah: Department of Communication Disorders, Haifa University, Haifa, Israel; Alango Technologies LTD, Tirat Carmel, Israel
- Y. Shapira: Department of Otolaryngology Head and Neck Surgery, Sheba Medical Center, Tel Hashomer, Israel
- Y. Henkin: Hearing, Speech, and Language Center, Sheba Medical Center, Tel Hashomer, Ramat Gan, Israel; Department of Communication Disorders, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
14
Gürkan S, Mungan Durankaya S. The effect of sensorineural hearing loss on central auditory processing of signals in noise in older adults. Neuroreport 2023; 34:249-254. PMID: 36789840; PMCID: PMC10516166; DOI: 10.1097/wnr.0000000000001886. Received 01/10/2023; accepted 01/16/2023.
Abstract
OBJECTIVES The study aimed to explore the effect of sensorineural hearing loss on the central auditory processing of signals in noise, using cortical auditory evoked potentials (CAEPs), in a cohort of older adults. DESIGN Three groups of 33 older adults each participated in the study: those with normal hearing, those with mild hearing loss, and those with moderate hearing loss. N1-P2 peaks of CAEPs elicited by speech stimuli were recorded in quiet and with varying sound pressure levels of background noise. CAEP latencies, amplitudes, and relative changes in CAEP amplitudes as a function of decreasing signal-to-noise ratio (SNR) in the three groups were analyzed using mixed analysis of variance. RESULTS There was a significant main effect of SNR on all CAEP components, as well as significant main effects of hearing status on N1 latencies, amplitudes, and relative changes in N1 amplitudes. A significant interaction was found between hearing status and SNR for relative changes in N1 amplitudes. The normal-hearing group differed from both the mild and moderate hearing loss groups in terms of relative changes in N1 amplitudes at an SNR of 10 dB. CONCLUSION The results showed decreased amplitudes and increased latencies of the N1-P2 response as the SNR of the CAEP stimuli was lowered. The reduction in N1 amplitude with increasing background noise level was greater in older people with normal hearing than in their sensorineural hearing-impaired counterparts, providing evidence for decreased central inhibition in individuals with age-related hearing loss.
Affiliation(s)
- Selhan Gürkan: Department of Audiometry, Vocational School of Health Services, Dokuz Eylül University
- Serpil Mungan Durankaya: Department of Audiometry, Vocational School of Health Services, Dokuz Eylül University; Audiology Unit, Department of Otorhinolaryngology, Dokuz Eylül University Hospital, İzmir, Türkiye
15
Gillis M, Kries J, Vandermosten M, Francart T. Neural tracking of linguistic and acoustic speech representations decreases with advancing age. Neuroimage 2023; 267:119841. PMID: 36584758; PMCID: PMC9878439; DOI: 10.1016/j.neuroimage.2022.119841. Received 07/29/2022; revised 12/21/2022; accepted 12/26/2022.
Abstract
BACKGROUND Older adults process speech differently, but it is not yet clear how aging affects different levels of processing of natural, continuous speech, both in terms of bottom-up acoustic analysis and top-down generation of linguistic predictions. We studied natural speech processing across the adult lifespan via electroencephalography (EEG) measurements of neural tracking. GOALS Our goal was to analyze the unique contribution of linguistic speech processing across the adult lifespan, using natural speech, while controlling for the influence of acoustic processing. We also studied acoustic processing across age. In particular, we focused on changes in spatial and temporal activation patterns in response to natural speech across the lifespan. METHODS 52 normal-hearing adults between 17 and 82 years of age listened to a naturally spoken story while the EEG signal was recorded. We investigated the effect of age on acoustic and linguistic processing of speech. Because age correlated with hearing capacity and measures of cognition, we investigated whether the observed age effects were mediated by these factors. Furthermore, we investigated whether there was an effect of age on hemispheric lateralization and on spatiotemporal patterns of the neural responses. RESULTS Our EEG results showed that linguistic speech processing declines with advancing age. Moreover, as age increased, the neural response latency to certain aspects of linguistic speech processing increased. Acoustic neural tracking also decreased with increasing age, which is at odds with the literature. In contrast to linguistic processing, older subjects showed shorter latencies for early acoustic responses to speech. No evidence was found for hemispheric lateralization in either younger or older adults during linguistic speech processing. Most of the observed aging effects on acoustic and linguistic processing were not explained by age-related decline in hearing capacity or cognition. However, our results suggest that the decrease in word-level linguistic neural tracking with advancing age is partially due to an age-related decline in cognition rather than a robust effect of age alone. CONCLUSION Spatial and temporal characteristics of the neural responses to continuous speech change across the adult lifespan for both acoustic and linguistic speech processing. These changes may be traces of structural and/or functional changes that occur with advancing age.
Affiliation(s)
- Marlies Gillis: Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
- Jill Kries: Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
16
Weise A, Grimm S, Maria Rimmele J, Schröger E. Auditory representations for long lasting sounds: Insights from event-related brain potentials and neural oscillations. Brain Lang 2023; 237:105221. PMID: 36623340; DOI: 10.1016/j.bandl.2022.105221. Received 12/23/2021; revised 12/26/2022; accepted 12/27/2022.
Abstract
The basic features of short sounds, such as frequency and intensity, including their temporal dynamics, are integrated into a unitary representation. Knowledge of how our brain processes long-lasting sounds is scarce. We review research utilizing the mismatch negativity event-related potential and neural oscillatory activity to study representations of long-lasting simple versus complex sounds, such as sinusoidal tones versus speech. There is evidence for a temporal constraint in the formation of auditory representations: auditory edges, like sound onsets within long-lasting sounds, open a temporal window of about 350 ms in which the sound's dynamics are integrated into a representation, while information beyond that window contributes less to the representation. This integration window segments the auditory input into short chunks. We argue that the representations established in adjacent integration windows can be concatenated into an auditory representation of a long sound, thus overcoming the temporal constraint.
Affiliation(s)
- Annekathrin Weise: Department of Psychology, Ludwig-Maximilians-University Munich, Germany; Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
- Sabine Grimm: Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
- Johanna Maria Rimmele: Department of Neuroscience, Max-Planck-Institute for Empirical Aesthetics, Germany; Center for Language, Music and Emotion, New York University, New York, NY, United States
- Erich Schröger: Wilhelm Wundt Institute for Psychology, Leipzig University, Germany
17
Muacevic A, Adler JR, Chu TSM, Chan J. The 100 Most-Cited Manuscripts in Hearing Implants: A Bibliometrics Analysis. Cureus 2023; 15:e33711. PMID: 36793822; PMCID: PMC9925031; DOI: 10.7759/cureus.33711. Accepted 01/12/2023.
Abstract
The aim of the study was to characterise the most frequently cited articles on the topic of hearing implants. A systematic search was carried out using the Thomson Reuters Web of Science Core Collection database. Eligibility criteria restricted the results to primary studies and reviews published from 1970 to 2022 in English that dealt primarily with hearing implants. Data including the authors, year of publication, journal, country of origin, number of citations, and average number of citations per year were extracted, as well as the impact factors and five-year impact factors of the journals publishing the articles. The top 100 papers were published across 23 journals and were cited 23,139 times. The most-cited and most influential article describes the first use of the continuous interleaved sampling (CIS) strategy utilised in all modern cochlear implants. More than half of the studies on the list were produced by authors from the United States, and the journal Ear and Hearing had both the greatest number of articles and the greatest number of total citations. To conclude, this research serves as a guide to the most influential articles on the topic of hearing implants, although, as a bibliometric analysis, it focuses mainly on citation counts.
18
Easwar V, Aiken S, Beh K, McGrath E, Galloy M, Scollie S, Purcell D. Variability in the Estimated Amplitude of Vowel-Evoked Envelope Following Responses Caused by Assumed Neurophysiologic Processing Delays. J Assoc Res Otolaryngol 2022; 23:759-769. PMID: 36002663; PMCID: PMC9789223; DOI: 10.1007/s10162-022-00855-1. Received 11/08/2021; accepted 06/16/2022.
Abstract
Vowel-evoked envelope following responses (EFRs) reflect neural encoding of the fundamental frequency of voice (f0). Accurate analysis of EFRs elicited by natural vowels requires methods like the Fourier analyzer (FA) to account for production-related f0 changes. The FA's accuracy in estimating EFRs is, however, dependent on the assumed neurophysiological processing delay needed to time-align the f0 time course and the recorded electroencephalogram (EEG). For male-spoken vowels (f0 ~ 100 Hz), a constant 10-ms delay correction is often assumed. Since processing delays vary with stimulus and physiological factors, we quantified (i) the delay-related variability that would occur in EFR estimation, and (ii) the influence of stimulus frequency, non-f0-related neural activity, and the listener's age on such variability. EFRs were elicited by the low-frequency first formant and the mid-frequency second and higher formants of /u/, /a/, and /i/ in young adults and 6- to 17-year-old children. To time-align with the f0 time course, the EEG was shifted by delays between 5 and 25 ms to encompass plausible response latencies. The delay-dependent range in EFR amplitude did not vary by stimulus frequency or age and was significantly smaller when interference from low-frequency activity was reduced. On average, the delay-dependent range was < 22% of the maximum variability in EFR amplitude that could be expected by noise. Results suggest that using a constant EEG delay correction in FA analysis does not substantially alter EFR amplitude estimation. In the present study, the lack of substantial variability was likely facilitated by using vowels with small f0 ranges.
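The Fourier analyzer referred to in this abstract demodulates the EEG at the vowel's time-varying f0 after shifting the recording by the assumed processing delay. A minimal sketch on synthetic data (the function and signal below are illustrative, not the study's implementation):

```python
import numpy as np

def fourier_analyzer(eeg, f0_track, fs, delay_ms=10.0):
    """Estimate EFR amplitude by projecting the EEG onto quadrature
    references that follow the time-varying f0, after shifting the EEG
    by an assumed neurophysiologic processing delay."""
    shift = int(round(delay_ms / 1000.0 * fs))
    seg = eeg[shift:shift + len(f0_track)]          # time-align EEG to the f0 course
    phase = 2 * np.pi * np.cumsum(f0_track) / fs    # instantaneous stimulus phase
    re = np.mean(seg * np.cos(phase))               # in-phase component
    im = np.mean(seg * np.sin(phase))               # quadrature component
    return 2 * np.hypot(re, im)                     # EFR amplitude estimate

# Synthetic check: an 'EEG' that follows a gliding f0 with unit amplitude
fs = 2000
t = np.arange(0, 1.05, 1 / fs)                      # extra samples to allow shifting
f0 = 100 + 10 * np.sin(2 * np.pi * t)               # f0 glides around 100 Hz
eeg = np.cos(2 * np.pi * np.cumsum(f0) / fs)        # zero-delay synthetic response
amp = fourier_analyzer(eeg, f0[:fs], fs, delay_ms=0.0)   # recovers ~1.0
```

Repeating the last call with delays from 5 to 25 ms is the kind of sweep the study used to quantify delay-related variability.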
Affiliation(s)
- Vijayalakshmi Easwar: Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, WI, USA; National Acoustic Laboratories, Sydney, Australia
- Steven Aiken: School of Communication Sciences and Disorders, Dalhousie University, Nova Scotia, Canada
- Krystal Beh: Department of Communication Sciences and Disorders & National Centre for Audiology, Western University, London, ON, Canada
- Emma McGrath: Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Mary Galloy: Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Susan Scollie: Department of Communication Sciences and Disorders & National Centre for Audiology, Western University, London, ON, Canada
- David Purcell: Department of Communication Sciences and Disorders & National Centre for Audiology, Western University, London, ON, Canada
19
Zein-Elabedein A, Abo El-Fotoh WMM, Al Shourah WM, Moaty AS. Assessment of cognitive function in young children with type 1 diabetes mellitus using electrophysiological tests. Pediatr Diabetes 2022; 23:1080-1087. PMID: 35700327; DOI: 10.1111/pedi.13383. Received 10/09/2021; revised 05/28/2022; accepted 06/05/2022.
Abstract
BACKGROUND/OBJECTIVES Diabetes mellitus is a chronic disease that affects many body systems, including the nervous and auditory systems. There is a scarcity of research on the effect of diabetes on cognitive functions in particular, and auditory functions in general, in children with type 1 diabetes. Therefore, this study was designed to assess cognitive and auditory functions in children with type 1 diabetes mellitus and to examine how diabetes control is reflected in cognitive function. METHODS This case-control study included 100 children divided into two groups: a patient group of 50 children with type 1 diabetes and a control group of 50 healthy children. Subjects underwent pure-tone audiometry, a speech recognition threshold test, immittance measurements, and measurement of cortical auditory evoked potentials (CAEPs) and the P300. These audiometric measures were statistically analyzed and correlated with the clinical characteristics of the study group. RESULTS The latencies of the P300 and CAEPs were significantly increased, while their amplitudes were significantly decreased, in the patient group compared with the control group (p < 0.001). P300 and CAEP latencies correlated positively with HbA1c levels (r = 0.460). In addition, there were significant differences between the two groups regarding the hearing threshold at 8000 Hz, and 28% of patients had bilateral sensorineural hearing loss (SNHL) at 8 kHz. CONCLUSION The prolonged P300 and CAEP latencies and decreased amplitudes in patients indicate a cognitive decline in individuals with type 1 diabetes compared with healthy individuals. Higher HbA1c levels may increase the risk of cognitive impairment in children. In addition, the risk of bilateral SNHL at 8 kHz was increased in children with type 1 diabetes mellitus.
Affiliation(s)
- Asmaa Salah Moaty: Department of ENT (Audiology Unit), Menoufia University, Shebin Elkom, Egypt
20
The Acoustic Change Complex Compared to Hearing Performance in Unilaterally and Bilaterally Deaf Cochlear Implant Users. Ear Hear 2022; 43:1783-1799. PMID: 35696186; PMCID: PMC9592183; DOI: 10.1097/aud.0000000000001248.
Abstract
OBJECTIVES Clinical measures evaluating hearing performance in cochlear implant (CI) users depend on attention and linguistic skills, which limits the evaluation of auditory perception in some patients. The acoustic change complex (ACC), a cortical auditory evoked potential to a sound change, might yield useful objective measures to assess hearing performance and could provide insight into cortical auditory processing. The aim of this study was to examine the ACC in response to frequency changes as an objective measure of hearing performance in CI users. DESIGN Thirteen bilaterally deaf and six single-sided deaf subjects were included, all having used a unilateral CI for at least 1 year. Speech perception was tested with a consonant-vowel-consonant test (+10 dB signal-to-noise ratio) and a digits-in-noise test. Frequency discrimination thresholds were measured at two reference frequencies using a 3-interval, 2-alternative forced-choice, adaptive staircase procedure. The two reference frequencies were selected using each participant's frequency allocation table and were centered in the frequency band of an electrode that included 500 or 2000 Hz, corresponding to the apical electrode or the middle electrode, respectively. The ACC was evoked with pure tones of the same two reference frequencies with varying frequency increases: within the frequency band of the middle or the apical electrode (+0.25 electrode step), and steps to the center frequency of the first (+1), second (+2), and third (+3) adjacent electrodes. RESULTS Reproducible ACCs were recorded in 17 out of 19 subjects. Recordings were most often successful with the largest frequency change (+3 electrode steps). Larger frequency changes resulted in shorter N1 latencies and larger N1-P2 amplitudes.
In both unilaterally and bilaterally deaf subjects, the N1 latency and N1-P2 amplitude of the CI ears correlated to speech perception as well as frequency discrimination, that is, short latencies and large amplitudes were indicative of better speech perception and better frequency discrimination. No significant differences in ACC latencies or amplitudes were found between the CI ears of the unilaterally and bilaterally deaf subjects, but the CI ears of the unilaterally deaf subjects showed substantially longer latencies and smaller amplitudes than their contralateral normal-hearing ears. CONCLUSIONS The ACC latency and amplitude evoked by tone frequency changes correlate well to frequency discrimination and speech perception capabilities of CI users. For patients unable to reliably perform behavioral tasks, the ACC could be of added value in assessing hearing performance.
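The adaptive staircase used for the frequency discrimination thresholds adjusts the frequency difference after each response. The abstract does not state the exact rule, so the sketch below uses a common 1-up/2-down rule (which converges on the 70.7%-correct point) with a deterministic toy listener:

```python
def two_down_one_up(respond, start, step, n_reversals=8):
    """1-up/2-down staircase: two correct answers in a row make the task
    harder, one error makes it easier; the threshold estimate is the mean
    of the deltas at which the track reverses direction."""
    delta, streak, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(delta):                      # correct response
            streak += 1
            if streak == 2:                     # two in a row -> smaller delta
                streak = 0
                if direction == +1:
                    reversals.append(delta)     # turning point: up -> down
                direction = -1
                delta = max(delta - step, step)
        else:                                   # incorrect -> larger delta
            streak = 0
            if direction == -1:
                reversals.append(delta)         # turning point: down -> up
            direction = +1
            delta += step
    return sum(reversals) / len(reversals)

# Toy listener who always detects differences of 3 units or more
thr = two_down_one_up(lambda d: d >= 3.0, start=8.0, step=1.0)
```

With this deterministic listener the track oscillates around the true threshold, and the mean of the reversal points lands just below it.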
21
Voola M, Nguyen AT, Marinovic W, Rajan G, Tavora-Vieira D. Odd-even oddball task: Evaluating event-related potentials during word discrimination compared to speech-token and tone discrimination. Front Neurosci 2022; 16:983498. PMID: 36312013; PMCID: PMC9614253; DOI: 10.3389/fnins.2022.983498. Received 07/01/2022; accepted 09/29/2022.
Abstract
Tonal and speech-token auditory oddball tasks have been commonly used to assess auditory processing in various populations; however, tasks using non-word sounds may fail to capture the higher-level ability to interpret and discriminate stimuli based on meaning, which is critical to language comprehension. As such, this study examines how neural signals associated with discrimination and evaluation processes (P3b) elicited by semantic stimuli compare with those elicited by tones and speech tokens. The study comprises two experiments, each including thirteen adults with normal hearing in both ears (PTA ≤ 20 dB HL). Scalp electroencephalography and auditory event-related potentials were recorded in free field while participants completed three different oddball tasks: (1) tones, (2) speech tokens, and (3) odd/even numbers. Based on the findings of experiment one, experiment two was conducted to determine whether the difference in responses across the three tasks was attributable to stimulus duration or to other factors. Therefore, in experiment one stimulus duration was not controlled, and in experiment two the duration of each stimulus was modified to be the same across all three tasks (~400 ms). In both experiments, P3b peak latency was significantly different between all three tasks. P3b amplitude was sensitive to reaction time: in tasks with large reaction-time variability, the P3b was smeared in the average, reducing its amplitude. The findings from this study highlight the need to consider all factors of a task before attributing any effects to an additional process, such as semantic processing or mental effort. Furthermore, they highlight the need for more cautious interpretation of P3b results in auditory oddball tasks.
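The amplitude "smearing" the authors describe follows directly from averaging: when single-trial P3b latency jitters with reaction time, the trial average broadens and its peak shrinks. A toy simulation (Gaussian bumps standing in for single-trial P3b waveforms; all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 1.0, 1 / fs)                  # 1-s epoch, in seconds

def averaged_p3b(latency_sd_ms, n_trials=200):
    """Average unit-amplitude Gaussian 'P3b' bumps whose single-trial
    latency jitters with the given standard deviation."""
    avg = np.zeros_like(t)
    for _ in range(n_trials):
        lat = 0.4 + rng.normal(0.0, latency_sd_ms / 1000.0)  # peak near 400 ms
        avg += np.exp(-0.5 * ((t - lat) / 0.05) ** 2)        # 50-ms-wide bump
    return avg / n_trials

tight = averaged_p3b(latency_sd_ms=10)   # low reaction-time variability
loose = averaged_p3b(latency_sd_ms=80)   # high reaction-time variability
# The jittered average (loose) is broader, and its peak is clearly lower
```

Comparing the two peak amplitudes shows why tasks with variable reaction times yield smaller averaged P3b amplitudes even when the single-trial response is unchanged.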
Affiliation(s)
- Marcus Voola: Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia; Department of Audiology, Fiona Stanley Fremantle Hospitals Group, Perth, WA, Australia
- An T. Nguyen: School of Population Health, Curtin University, Perth, WA, Australia
- Welber Marinovic: School of Population Health, Curtin University, Perth, WA, Australia
- Gunesh Rajan: Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia; Department of Otolaryngology, Head and Neck Surgery, Luzerner Kantonsspital, Luzern, Switzerland
- Dayse Tavora-Vieira: Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia; Department of Audiology, Fiona Stanley Fremantle Hospitals Group, Perth, WA, Australia; School of Population Health, Curtin University, Perth, WA, Australia
22
Gillis M, Van Canneyt J, Francart T, Vanthornhout J. Neural tracking as a diagnostic tool to assess the auditory pathway. Hear Res 2022;426:108607. [PMID: 36137861] [DOI: 10.1016/j.heares.2022.108607]
Abstract
When a person listens to sound, the brain time-locks to specific aspects of the sound. This is called neural tracking and it can be investigated by analysing neural responses (e.g., measured by electroencephalography) to continuous natural speech. Measures of neural tracking allow for an objective investigation of a range of auditory and linguistic processes in the brain during natural speech perception. This approach is more ecologically valid than traditional auditory evoked responses and has great potential for research and clinical applications. This article reviews the neural tracking framework and highlights three prominent examples of neural tracking analyses: neural tracking of the fundamental frequency of the voice (f0), the speech envelope and linguistic features. Each of these analyses provides a unique point of view into the human brain's hierarchical stages of speech processing. F0-tracking assesses the encoding of fine temporal information in the early stages of the auditory pathway, i.e., from the auditory periphery up to early processing in the primary auditory cortex. Envelope tracking reflects bottom-up and top-down speech-related processes in the auditory cortex and is likely necessary but not sufficient for speech intelligibility. Linguistic feature tracking (e.g. word or phoneme surprisal) relates to neural processes more directly related to speech intelligibility. Together these analyses form a multi-faceted objective assessment of an individual's auditory and linguistic processing.
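Envelope tracking of the kind reviewed here is, at its core, a regression-plus-correlation pipeline: extract the speech envelope, fit a linear decoder from EEG to that envelope, and score the fit with a correlation coefficient. The sketch below is a minimal illustration under simplified assumptions (synthetic single-trial data, a ridge-regularized backward decoder, no time lags or cross-validation); the function names and the regularization value are ours, not taken from the article.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(0)

def speech_envelope(audio):
    """Broadband amplitude envelope via the Hilbert transform."""
    return np.abs(hilbert(audio))

def fit_backward_decoder(eeg, envelope, lam=1.0):
    """Ridge-regression decoder mapping EEG channels to the envelope.

    eeg: (n_samples, n_channels); envelope: (n_samples,).
    Returns weights w minimizing ||eeg @ w - envelope||^2 + lam * ||w||^2.
    """
    X, y = eeg, envelope
    n_ch = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_ch), X.T @ y)

def tracking_score(eeg, envelope, w):
    """Neural-tracking measure: Pearson r between reconstructed and true envelope."""
    recon = eeg @ w
    return np.corrcoef(recon, envelope)[0, 1]

# Toy demo: simulated EEG that weakly mixes the envelope into each channel, plus noise.
env = speech_envelope(rng.standard_normal(2000))
mix = rng.standard_normal(8)                      # per-channel mixing weights
eeg = np.outer(env, mix) + 0.5 * rng.standard_normal((2000, 8))
w = fit_backward_decoder(eeg, env, lam=10.0)
print(f"tracking r = {tracking_score(eeg, env, w):.2f}")
```

A real analysis would add a bank of time-lagged EEG copies (a temporal response function) and nested cross-validation; this stripped-down version only shows why the tracking measure is a single correlation number per listener.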
Affiliation(s)
- Marlies Gillis, Jana Van Canneyt, Tom Francart, Jonas Vanthornhout: Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
23
Chai X, Liu M, Huang T, Wu M, Li J, Zhao X, Yan T, Song Y, Zhang YX. Neurophysiological evidence for goal-oriented modulation of speech perception. Cereb Cortex 2022;33:3910-3921. [PMID: 35972410] [DOI: 10.1093/cercor/bhac315]
Abstract
Speech perception depends on the dynamic interplay of bottom-up and top-down information along a hierarchically organized cortical network. Here, we test, for the first time in the human brain, whether neural processing of attended speech is dynamically modulated by task demand using a context-free discrimination paradigm. Electroencephalographic signals were recorded during 3 parallel experiments that differed only in the phonological feature of discrimination (word, vowel, and lexical tone, respectively). The event-related potentials (ERPs) revealed the task modulation of speech processing at approximately 200 ms (P2) after stimulus onset, probably influencing what phonological information to retain in memory. For the phonological comparison of sequential words, task modulation occurred later at approximately 300 ms (N3 and P3), reflecting the engagement of task-specific cognitive processes. The ERP results were consistent with the changes in delta-theta neural oscillations, suggesting the involvement of cortical tracking of speech envelopes. The study thus provides neurophysiological evidence for goal-oriented modulation of attended speech and calls for speech perception models incorporating limited memory capacity and goal-oriented optimization mechanisms.
Affiliation(s)
- Xiaoke Chai, Min Liu, Ting Huang, Meiyun Wu, Jinhong Li, Xue Zhao, Tingting Yan, Yan Song, Yu-Xuan Zhang: State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
24
Tao DD, Zhang YM, Liu H, Zhang W, Xu M, Galvin JJ, Zhang D, Liu JS. The P300 Auditory Event-Related Potential May Predict Segregation of Competing Speech by Bimodal Cochlear Implant Listeners. Front Neurosci 2022;16:888596. [PMID: 35757527] [PMCID: PMC9226716] [DOI: 10.3389/fnins.2022.888596]
Abstract
Compared to normal-hearing (NH) listeners, cochlear implant (CI) listeners have greater difficulty segregating competing speech. Neurophysiological studies have largely investigated the neural foundations of CI listeners' speech recognition in quiet, mainly using the P300 component of event-related potentials (ERPs). P300 is closely related to cognitive processes involving auditory discrimination, selective attention, and working memory. In contrast to speech perception in quiet, little is known about the neurophysiological foundations of segregation of competing speech by CI listeners. In this study, ERPs were measured for a 1 vs. 2 kHz contrast in 11 Mandarin-speaking bimodal CI listeners and 11 NH listeners. Speech reception thresholds (SRTs) for a male target talker were measured in steady noise or with a male or female masker. P300 amplitudes were significantly larger and latencies significantly shorter for the NH group than for the CI group. Similarly, SRTs were significantly better for the NH than for the CI group. Across all participants, P300 amplitude was significantly correlated with SRTs in steady noise (r = -0.65, p = 0.001) and with the competing male (r = -0.62, p = 0.002) and female maskers (r = -0.60, p = 0.003). Within the CI group, there was a significant correlation between P300 amplitude and SRTs with the male masker (r = -0.78, p = 0.005), which produced the most informational masking. These results suggest that P300 amplitude may be a clinically useful neural correlate of central auditory processing capabilities (e.g., susceptibility to informational masking) in bimodal CI patients.
Affiliation(s)
- Duo-Duo Tao: Department of Ear, Nose, and Throat, Shaanxi Provincial People's Hospital, Xi'an, China
- Yun-Mei Zhang: Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Hui Liu: Department of Ear, Nose, and Throat, Shaanxi Provincial People's Hospital, Xi'an, China
- Wen Zhang: Department of Ear, Nose, and Throat, Shaanxi Provincial People's Hospital, Xi'an, China
- Min Xu: Department of Ear, Nose, and Throat, Shaanxi Provincial People's Hospital, Xi'an, China
- John J Galvin: House Institute Foundation, Los Angeles, CA, United States
- Dan Zhang: Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Ji-Sheng Liu: Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
25
Xie D, Luo J, Chao X, Li J, Liu X, Fan Z, Wang H, Xu L. Relationship Between the Ability to Detect Frequency Changes or Temporal Gaps and Speech Perception Performance in Post-lingual Cochlear Implant Users. Front Neurosci 2022;16:904724. [PMID: 35757528] [PMCID: PMC9213807] [DOI: 10.3389/fnins.2022.904724]
Abstract
Previous studies using modulation stimuli to examine the relative effects of frequency resolution and temporal resolution on CI users' speech perception have failed to reach a consistent conclusion. In this study, frequency change detection and temporal gap detection were used to investigate the frequency and temporal resolution of CI users, respectively. Psychophysical and neurophysiological methods were used to simultaneously investigate the effects of frequency and temporal resolution on speech perception in post-lingual cochlear implant (CI) users. We investigated the effects of psychophysical results [frequency change detection threshold (FCDT), gap detection threshold (GDT)] and acoustic change complex (ACC) responses (evoked threshold, latency, or amplitude of the ACC induced by a frequency change or temporal gap) on speech perception [recognition of monosyllabic words, disyllabic words, and sentences in quiet, and sentence recognition threshold (SRT) in noise]. Thirty-one adult post-lingual CI users of Mandarin Chinese were enrolled in the study. The stimuli used to elicit ACCs to frequency changes were 800-ms pure tones (fundamental frequency 1,000 Hz); the frequency change occurred at the midpoint of the tone, with six percentages of frequency change (0, 2, 5, 10, 20, and 50%). Silent gaps of different durations (0, 5, 10, 20, 50, and 100 ms) were inserted in the middle of 800-ms white noise to elicit ACCs evoked by temporal gaps. The FCDT and GDT were obtained with two 2-alternative forced-choice procedures. The results showed no significant correlation between the CI hearing threshold and speech perception in the study participants.
In the multiple regression analysis of the influence of simultaneous psychophysical measures and ACC responses on speech perception, GDT significantly predicted every speech perception index, and the ACC amplitude evoked by the temporal gap significantly predicted the recognition of disyllabic words in quiet and SRT in noise. We conclude that when the ability to detect frequency changes and the temporal gap is considered simultaneously, the ability to detect frequency changes may have no significant effect on speech perception, but the ability to detect temporal gaps could significantly predict speech perception.
Affiliation(s)
- Dianzhao Xie, Jianfen Luo, Xiuhua Chao, Jinming Li, Xianqi Liu, Zhaomin Fan, Haibo Wang, Lei Xu: Department of Otolaryngology-Head and Neck Surgery, Shandong Provincial ENT Hospital, Cheeloo College of Medicine, Shandong University, Jinan, China
26
Chang YP, Chang ST, Chang HW, Hong HM. Behavioral and Neural Assessments of Auditory Skill Development After Hearing Instrument Fitting in Children: Case Reports and Clinical Implications. Am J Audiol 2022;31:586-603. [PMID: 35623330] [DOI: 10.1044/2022_aja-21-00185]
Abstract
PURPOSE The purpose of this study was to comprehensively monitor the auditory skill development of children with hearing loss after hearing instrument fitting; to this end, a battery of four assessments was proposed. METHOD The battery was designed to fill the gap in speech discrimination left by clinically available evaluations. It includes both behavioral and neural assessments, as well as tests in structured settings (sound-treated booth) and in daily life. The four assessments are visually reinforced infant speech discrimination (VRISD), cortical auditory evoked potentials (CAEP), the Auditory Skills Checklist (ASC), and the Parents' Evaluation of Aural/Oral Performance of Children (PEACH). RESULTS Two cases are reported and their clinical implications discussed. CONCLUSIONS The proposed comprehensive assessment battery is suitable for evaluating children who are developmentally ready for visual reinforcement audiometry. More importantly, the VRISD assessment fills the current gap in clinically available tests of auditory developmental stages, namely the discrimination stage.
Affiliation(s)
- Yi-ping Chang: Speech and Hearing Science Research Institute, Children's Hearing Foundation, Taipei City, Taiwan; Department of Audiology and Speech Language Pathology, Mackay Medical College, New Taipei City, Taiwan
- Shu-Ting Chang: Speech and Hearing Science Research Institute, Children's Hearing Foundation, Taipei City, Taiwan
- Hsiu-Wen Chang: Department of Audiology and Speech Language Pathology, Mackay Medical College, New Taipei City, Taiwan; Ephphatha Listening and Language Center, Taipei City, Taiwan
- Hsuan-Mei Hong: Speech and Hearing Science Research Institute, Children's Hearing Foundation, Taipei City, Taiwan
27
Cone BK, Smith S, Smith DEC. Acoustic Change Complex and Visually Reinforced Infant Speech Discrimination Measures of Vowel Contrast Detection. Ear Hear 2022;43:531-544. [PMID: 34456301] [PMCID: PMC8873241] [DOI: 10.1097/aud.0000000000001116]
Abstract
OBJECTIVES To measure the effect of stimulus rate and vowel change direction on the acoustic change complex (ACC) latencies and amplitudes and compare ACC metrics to behavioral measures of vowel contrast detection for infants tested under the age of 1 year. We tested the hypothesis that the direction of spectral energy shift from a vowel change would result in differences in the ACC, owing to the sensitivity of cortical neurons to the direction of frequency change. We evaluated the effect of the stimulus rate (1/s versus 2/s) on the infants' ACC. We evaluated the ACC amplitude ratio's sensitivity (proportion of ACCs present for each change trial) and compared it to perceptual responses obtained using a visually reinforced infant speech discrimination paradigm (VRISD). This report provides normative data from infants for the ACC toward the ultimate goal of developing a clinically useful index of neural capacity for vowel discrimination. DESIGN Twenty-nine infants, nine females, 4.0 to 11.8 months of age, participated. All participants were born at full term and passed their newborn hearing screens. None had risk factors for hearing or neurologic impairment. Cortical auditory evoked potentials were obtained in response to synthesized vowel tokens /a/, /i/, /o/, and /u/ presented at a rate of 1- or 2/s in an oddball stimulus paradigm with a 25% probability of the deviant stimulus. All combinations of vowel tokens were tested at the two rates. The ACC was obtained in response to the deviant stimulus. The infants were also tested for vowel contrast detection using a VRISD paradigm with the same combinations of vowel tokens used for the ACC. The mean age at the time of the ACC test was 5.4 months, while the mean age at the behavioral test was 6.8 months. RESULTS Variations in ACC amplitude and latency occurred as a function of the initial vowel token and the contrast token. 
However, the hypothesis that the direction of vowel (spectral) change would result in significantly larger change responses for high-to-low spectral changes was not supported. Contrasts with /a/ as the leading vowel of the pair resulted in larger ACC amplitudes than the other conditions. Significant differences in ACC presence and amplitude were observed as a function of rate, with 2/s producing the largest amplitude ratios. Latency effects of vowel contrast and rate were present but not systematic. The ACC amplitude ratio's sensitivity for detecting a vowel contrast was greater at the 2/s rate than at the 1/s rate: for an amplitude ratio criterion of ≥1.5, sensitivity was 93% for ACC component P2-N2 at 2/s, versus 70% at 1/s. VRISD tests of vowel-contrast detection had a 71% hit rate and a 21% false-positive rate. Many infants who could not reach performance criteria for VRISD had ACC amplitude ratios of ≥2.0. CONCLUSIONS The ACC for vowel contrasts presented at a rate of 2/s is a robust index of vowel-contrast detection in typically developing infants under the age of 1 year. The ACC is present in over 90% of infants tested at this rate when an amplitude ratio criterion of ≥1.5 is used to define a response. The amplitude ratio appears to be a sensitive metric of the difference between a control and a contrast condition. The ACC can be obtained in infants who do not yet exhibit valid behavioral responses to vowel change contrasts and may be useful for estimating neural capacity for discriminating these sounds.
Affiliation(s)
- Barbara K. Cone: Department of Speech, Language and Hearing Sciences, The University of Arizona
- Spencer Smith: Texas Auditory Neuroscience (TexAN) Lab, Department of Speech, Language and Hearing Sciences, The University of Texas at Austin
28
The Role of the P1 Latency in Auditory and Speech Performance Evaluation in Cochlear Implanted Children. Neural Plast 2022;2022:6894794. [PMID: 35422857] [PMCID: PMC9005287] [DOI: 10.1155/2022/6894794]
Abstract
Auditory deprivation disrupts normal age-related changes in central auditory maturation. Cochlear implants (CIs) have become the best treatment strategy for severe to profound hearing impairment. However, it remains difficult to evaluate the speech-language outcomes of pediatric CI recipients because many hearing-impaired children have limited speech-language abilities. The cortical auditory evoked potential (CAEP) provides a window into the development of the auditory cortical pathways. This preliminary study aimed to assess the electrophysiological characteristics of the P1-N1 complex of the electrically evoked CAEP in children with CIs and to explore whether these measures account for the auditory and speech outcomes of these patients. CAEP responses to electrical stimulation were recorded in 48 children with CIs to determine the presence of the P1-N1 response. Speech perception and speech intelligibility of the implanted children were further evaluated with the categories of auditory performance (CAP) test and the speech intelligibility rating (SIR) test, respectively, to explore the relationship between P1-N1 latency and auditory and speech performance. P1 and N1 of the intracochlear CAEP were reliably evoked in children fitted with CIs, and the latency of P1, unlike that of N1, was negatively correlated with the duration of cochlear implant use. Moreover, P1 latency was significantly negatively correlated with scores on both the CAP and SIR tests, indicating that P1 latency may reflect the auditory performance and speech intelligibility of pediatric CI recipients. These results suggest that P1 latency could be used for objective assessment of auditory and speech function in cochlear-implanted children, which would be helpful in clinical decision-making regarding intervention for young hearing-impaired children.
29
Nogueira W, Dolhopiatenko H. Predicting speech intelligibility from a selective attention decoding paradigm in cochlear implant users. J Neural Eng 2022;19. [PMID: 35234663] [DOI: 10.1088/1741-2552/ac599f]
Abstract
OBJECTIVES Electroencephalography (EEG) can be used to decode selective attention in cochlear implant (CI) users. This work investigates whether selective attention to an attended speech source in the presence of a concurrent speech source can predict speech understanding in CI users. APPROACH CI users were instructed to attend to one of two speech streams while EEG was recorded. Both speech streams were presented to the same ear at different signal-to-interference ratios (SIRs). Envelope reconstruction of the to-be-attended speech from EEG was obtained by training decoders using regularized least squares. The correlation coefficient between the reconstructed envelope and the attended (ρ_A(SIR)) or the unattended (ρ_U(SIR)) speech stream was computed at each SIR. Additionally, we computed the difference correlation coefficient at the same SIR (ρ_Diff = ρ_A(SIR) - ρ_U(SIR)) and at the opposite SIR (ρ_DiffOpp = ρ_A(SIR) - ρ_U(-SIR)). ρ_Diff compares the attended and unattended correlation coefficients for speech sources presented at different presentation levels depending on SIR, whereas ρ_DiffOpp compares them for speech sources presented at the same presentation level irrespective of SIR. MAIN RESULTS Selective attention decoding in CI users is possible even when both speech streams are presented monaurally. A significant effect of SIR on ρ_A(SIR), ρ_Diff, and ρ_DiffOpp, but not on ρ_U(SIR), was observed. Finally, the results show a significant correlation across subjects between speech understanding performance and ρ_A(SIR), as well as with ρ_U(SIR). Moreover, ρ_DiffOpp, which is less affected by the CI artifact, also correlated significantly with speech understanding. SIGNIFICANCE Selective attention decoding in CI users is possible; however, care must be taken with the CI artifact and with the speech material used to train the decoders. These results are important for the future development of objective speech understanding measures for CI users.
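Once a reconstructed envelope is available, the correlation measures described in this abstract reduce to a few lines of arithmetic. The following is an illustrative sketch, not the authors' code: the data are synthetic, and the function names and mixing weights are our own assumptions.

```python
import numpy as np

def pearson(a, b):
    """Pearson correlation between two equal-length 1-D signals."""
    return float(np.corrcoef(a, b)[0, 1])

def attention_metrics(reconstructed, attended_env, unattended_env):
    """Correlate the EEG-reconstructed envelope with each stream's envelope
    and form the difference measure used for attention decoding."""
    rho_a = pearson(reconstructed, attended_env)
    rho_u = pearson(reconstructed, unattended_env)
    return rho_a, rho_u, rho_a - rho_u  # rho_Diff > 0: attended stream decoded

# Toy demo: a reconstruction that resembles the attended stream more than
# the unattended one, plus reconstruction noise.
rng = np.random.default_rng(1)
att = rng.standard_normal(1000)
unatt = rng.standard_normal(1000)
recon = 0.8 * att + 0.2 * unatt + 0.5 * rng.standard_normal(1000)
rho_a, rho_u, rho_diff = attention_metrics(recon, att, unatt)
print(f"rho_A = {rho_a:.2f}, rho_U = {rho_u:.2f}, rho_Diff = {rho_diff:.2f}")
```

The opposite-SIR variant (ρ_DiffOpp) would simply pass in the unattended envelope taken from the trial with the mirrored SIR instead of the same trial.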
Affiliation(s)
- Waldo Nogueira, Hanna Dolhopiatenko: Department of Otolaryngology and Cluster of Excellence "Hearing4all", Hannover Medical School, Karl-Wiechert Allee 3, Hannover, Niedersachsen, 30625, Germany
30
Cruz S, Crego A, Moreira C, Ribeiro E, Gonçalves Ó, Ramos R, Sampaio A. Cortical auditory evoked potentials in 1-month-old infants predict language outcomes at 12 months. Infancy 2022;27:324-340. [PMID: 35037391] [DOI: 10.1111/infa.12454]
Abstract
The neurophysiological assessment of infants in their first year can provide important information about functional changes in the brain and supports the study of behavioral and developmental characteristics. Infants' cortical auditory evoked potentials (CAEPs) reflect cortical maturation and appear to predict subsequent language abilities. This study aimed to identify CAEP components in response to two auditory stimulus intensities in 1-month-old infants and to understand how these are associated with social-interactive and self-regulatory behaviors. In addition, it examined whether CAEPs predicted developmental outcomes when the infants were assessed at 12 months of age. At 1 month, P2 and N2 components were present for both stimulus intensities, with an increased P2 amplitude observed for the higher-intensity stimuli. We also observed that an increased P2 amplitude at the lower intensity predicted receptive and expressive language competencies at 12 months. These results are consistent with previous findings indicating an association between auditory processing and developmental outcomes in infants, and they suggest that specific auditory neurophysiological markers are associated with developmental outcomes in the first year.
Affiliation(s)
- Sara Cruz: The Psychology for Positive Development Research Center (CIPD), Lusíada University North, Porto, Portugal
- Alberto Crego: Psychological Neuroscience Laboratory, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Carla Moreira: Centre of Mathematics, School of Sciences, University of Minho, Braga, Portugal
- Eugénia Ribeiro: Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Óscar Gonçalves: Proaction Lab, CINEICC, Faculdade de Psicologia e de Ciências da Educação, Universidade de Coimbra, Coimbra, Portugal
- Rita Ramos: Psychological Neuroscience Laboratory, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
- Adriana Sampaio: Psychological Neuroscience Laboratory, Research Center in Psychology (CIPsi), School of Psychology, University of Minho, Braga, Portugal
31
Speech token detection and discrimination in individual infants using functional near-infrared spectroscopy. Sci Rep 2021;11:24006. [PMID: 34907273] [PMCID: PMC8671543] [DOI: 10.1038/s41598-021-03595-z]
Abstract
Speech detection and discrimination ability are important measures of hearing that may inform crucial audiological intervention decisions for individuals with a hearing impairment. However, behavioral assessment of speech discrimination can be difficult and inaccurate in infants, prompting the need for an objective measure of speech detection and discrimination ability. In this study, the authors used functional near-infrared spectroscopy (fNIRS) as the objective measure. Twenty-three infants, 2 to 10 months of age, participated, all of whom had passed newborn hearing screening or diagnostic audiology testing. They were presented with speech tokens at a comfortable listening level in a natural sleep state using a habituation/dishabituation paradigm. The authors hypothesized that fNIRS responses to speech token detection as well as speech token contrast discrimination could be measured in individual infants. Significant fNIRS responses to speech detection were found in 87% of tested infants (false-positive rate 0%), and to speech discrimination in 35% of tested infants (false-positive rate 9%). The results show initial promise for the use of fNIRS as an objective clinical tool for measuring infant speech detection and discrimination ability; the authors highlight the further optimization of test procedures and analysis techniques that would be required to reach the accuracy and reliability needed for clinical decision-making.
32
Beynon AJ, Luijten BM, Mylanus EAM. Intracorporeal Cortical Telemetry as a Step to Automatic Closed-Loop EEG-Based CI Fitting: A Proof of Concept. Audiol Res 2021;11:691-705. [PMID: 34940020] [PMCID: PMC8698912] [DOI: 10.3390/audiolres11040062]
Abstract
Electrically evoked auditory potentials have been used to predict auditory thresholds in patients with a cochlear implant (CI). However, with the exception of electrically evoked compound action potentials (eCAPs), conventional extracorporeal EEG recording devices are still needed. Until now, built-in (intracorporeal) back-telemetry options have been limited to eCAPs; intracorporeal recording of auditory responses beyond the cochlea has been lacking. This study describes the feasibility of obtaining longer-latency cortical responses by concatenating the interleaved short recording time windows used for eCAP recordings. Extracochlear reference electrodes were dedicated to recording cortical responses, while intracochlear electrodes were used for stimulation, enabling intracorporeal telemetry (i.e., without an EEG device) to assess higher cortical processing in CI recipients. Simultaneous extra- and intracorporeal recordings showed that it is feasible to obtain intracorporeal slow vertex potentials with a CI similar to those obtained with conventional extracorporeal EEG recordings. Our data demonstrate a proof of concept of closed-loop intracorporeal auditory cortical response telemetry (ICT) with a cochlear implant device. This research breaks new ground for next-generation CI devices that assess higher cortical neural processing via acute or continuous EEG telemetry to enable individualized, automatic, and/or adaptive CI fitting using only the CI.
Affiliation(s)
- Andy J. Beynon
- Vestibular & Auditory Evoked Potential Lab, Department of Oto-Rhino-Laryngology, Head & Neck Surgery, 6525 EX Nijmegen, The Netherlands
- Hearing & Implants, Department of Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center for Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
- Bart M. Luijten
- Hearing & Implants, Department of Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center for Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
- Emmanuel A. M. Mylanus
- Hearing & Implants, Department of Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center for Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
33
Yu L, Zeng J, Wang S, Zhang Y. Phonetic Encoding Contributes to the Processing of Linguistic Prosody at the Word Level: Cross-Linguistic Evidence From Event-Related Potentials. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4791-4801. [PMID: 34731592 DOI: 10.1044/2021_jslhr-21-00037] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
PURPOSE This study aimed to examine whether abstract knowledge of word-level linguistic prosody is independent of or integrated with phonetic knowledge. METHOD Event-related potential (ERP) responses were measured from 18 adult listeners while they listened to native and nonnative word-level prosody in speech and in nonspeech. The prosodic phonology (speech) conditions included disyllabic pseudowords spoken in Chinese and in English, matched for syllabic structure, duration, and intensity. The prosodic acoustic (nonspeech) conditions were hummed versions of the speech stimuli, which eliminated the phonetic content while preserving the acoustic prosodic features. RESULTS We observed a language-specific ERP effect: native stimuli elicited larger late negative response (LNR) amplitudes than nonnative stimuli in the prosodic phonology conditions. However, no such effect was observed in the phoneme-free prosodic acoustic control conditions. CONCLUSIONS The results support the integration view that word-level linguistic prosody likely relies on the phonetic content in which the acoustic cues are embedded. It remains to be examined whether the LNR may serve as a neural signature for language-specific processing of prosodic phonology, beyond auditory processing of the critical acoustic cues at the suprasyllabic level.
Affiliation(s)
- Luodi Yu
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou
- Jiajing Zeng
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou
- Suiping Wang
- Key Laboratory of Brain, Cognition and Education Sciences, Ministry of Education, South China Normal University, Guangzhou
- School of Psychology, Center for Studies of Psychological Application, and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou
- Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
34
Lunardelo PP, Hebihara Fukuda MT, Zuanetti PA, Pontes-Fernandes ÂC, Ferretti MI, Zanchetta S. Cortical auditory evoked potentials with different acoustic stimuli: Evidence of differences and similarities in coding in auditory processing disorders. Int J Pediatr Otorhinolaryngol 2021; 151:110944. [PMID: 34773882 DOI: 10.1016/j.ijporl.2021.110944] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2020] [Revised: 09/05/2021] [Accepted: 10/12/2021] [Indexed: 11/25/2022]
Abstract
OBJECTIVES The use of cortical auditory evoked potentials allows for the study of the processing of acoustic signals at the cortical level, an important step in the diagnostic evaluation process and in the monitoring of therapy for auditory processing disorders (APD). The differences and similarities in acoustic coding between different types of stimuli in the context of APD remain unknown to date. METHODS A total of 37 children aged between 7 and 11 years, with and without APD (identified based on verbal and non-verbal tests), all with an intelligence quotient appropriate for their chronological age, were assessed. Components P1 and N1 were studied using verbal and non-verbal stimuli. RESULTS The comparison between stimuli in each group revealed that the control group had higher latency and amplitude values for speech stimuli, except for the P1 amplitude, whereas the group with APD showed higher P1 and N1 amplitudes for speech sounds. The differences between the groups varied according to the type of stimulus: the difference was in amplitude for the verbal stimulus and in latency for the non-verbal stimulus. CONCLUSION The records of components P1 and N1 revealed that the children with APD performed the coding underlying the detection and identification of acoustic signals, whether verbal or non-verbal, according to a different pattern than the children in the control group.
Affiliation(s)
- Pamela Papile Lunardelo
- Department of Psychology, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Brazil.
- Marisa Tomoe Hebihara Fukuda
- Department of Psychology, School of Philosophy, Sciences and Letters of Ribeirão Preto, University of São Paulo, Brazil; Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil.
- Patricia Aparecida Zuanetti
- Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil.
- Ângela Cristina Pontes-Fernandes
- Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil; University Paulista - UNIP, Ribeirão Preto, Brazil.
- Sthella Zanchetta
- Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil; Clinical Hospital, Ribeirão Preto Medical School, University of São Paulo, 3900 Bandeirantes Av., Postal Code 14.040-901, Ribeirão Preto, Brazil.
35
Amaral MSAD, Calderaro VG, Pauna HF, Massuda ET, Reis ACMB, Hyppolito MA. Is there a change in P300 evoked potential after 6 months in cochlear implant users? Braz J Otorhinolaryngol 2021; 88 Suppl 3:S50-S58. [PMID: 34799269 DOI: 10.1016/j.bjorl.2021.10.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/23/2021] [Revised: 09/03/2021] [Accepted: 10/14/2021] [Indexed: 10/19/2022] Open
Abstract
OBJECTIVE There are few studies on the long-latency auditory evoked potential (P300) in people with hearing loss who use a cochlear implant. Central auditory system evaluation with behavioral and electrophysiological tests is believed to help clarify the neuroplasticity mechanisms involved in auditory functioning after cochlear implant surgery. This study investigated the electrophysiological processing of cortical-level acoustic signals in a group of 21 adults with postlingual bilateral severe-to-profound hearing loss who underwent cochlear implant surgery. METHODS Data were collected in three phases: pre-cochlear implant surgery, at cochlear implant activation, and 6 months after surgery. P300 measures were registered during all phases. Tone-burst and speech stimuli were used to elicit the P300 and were presented in free field. RESULTS Mean P3 component latencies with tone-burst and speech stimuli were 352.9 and 321.9 ms in the pre-cochlear implant phase, 364.9 and 368.7 ms in the activation phase, and 336.2 and 343.6 ms 6 months after surgery. Mean P3 latency values using tone-bursts differed significantly between activation and 6 months after cochlear implant use; using speech, they differed significantly between the pre-cochlear implant and activation phases. The lowest P3 latency occurred 6 months after activation with tone-bursts, and pre-cochlear implant with the speech stimulus. There was a weak correlation between mean P3 latency with the speech stimulus and duration of hearing loss. There was no difference in amplitude between phases or in comparison with the other variables. CONCLUSION P3 component latency changed during the period assessed for both speech and pure-tone stimuli, with increased latency in the activation phase and similar, lower values in the other two phases (pre-CI and 6 months after CI use). Mean amplitude measures did not vary across the three phases.
Affiliation(s)
- Maria Stella Arantes do Amaral
- Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Departamento de Oftalmologia, Otorrinolaringologia e Cirurgia de Cabeça e Pescoço, Ribeirão Preto, SP, Brazil.
- Victor G Calderaro
- Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Departamento de Ciências da Saúde, Ribeirão Preto, SP, Brazil
- Henrique Furlan Pauna
- Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Departamento de Oftalmologia, Otorrinolaringologia e Cirurgia de Cabeça e Pescoço, Ribeirão Preto, SP, Brazil
- Eduardo T Massuda
- Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Hospital das Clínicas, Ribeirão Preto, SP, Brazil
- Ana Cláudia M B Reis
- Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Ribeirão Preto, SP, Brazil
- Miguel Angelo Hyppolito
- Universidade de São Paulo, Faculdade de Medicina de Ribeirão Preto, Ribeirão Preto, SP, Brazil
36
Shiroshita Y, Kirimoto H, Watanabe T, Yunoki K, Sobue I. Event-related potentials evoked by skin puncture reflect activation of Aβ fibers: comparison with intraepidermal and transcutaneous electrical stimulations. PeerJ 2021; 9:e12250. [PMID: 34707936 PMCID: PMC8504465 DOI: 10.7717/peerj.12250] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2021] [Accepted: 09/13/2021] [Indexed: 11/20/2022] Open
Abstract
Background Recently, event-related potentials (ERPs) evoked by skin puncture, commonly used for blood sampling, have received attention as a pain assessment tool in neonates. However, their latency appears to be far shorter than the latency of ERPs evoked by intraepidermal electrical stimulation (IES), which selectively activates nociceptive Aδ and C fibers. To clarify this important issue, we examined whether ERPs evoked by skin puncture appropriately reflect central nociceptive processing, as is the case with IES. Methods In Experiment 1, we recorded evoked potentials to the click sound produced by a lance device (click-only), lance stimulation with the click sound (click+lance), or lance stimulation with white noise (WN+lance) in eight healthy adults to investigate the effect of the click sound on the ERP evoked by skin puncture. In Experiment 2, we tested 18 healthy adults and recorded evoked potentials to shallow lance stimulation (SL) with a blade that did not reach the dermis (0.1 mm insertion depth); normal lance stimulation (CL) (1 mm depth); transcutaneous electrical stimulation (ES), which mainly activates Aβ fibers; and IES, which selectively activates Aδ fibers when low stimulation current intensities are applied. White noise was presented continuously during the experiments. The stimulations were applied to the hand dorsum. In the SL condition, the lance device did not touch the skin and the blade was inserted to a depth of 0.1 mm into the epidermis, where the free nerve endings of Aδ fibers are located; this minimized the tactile sensation caused by the device touching the skin and the activation of Aβ fibers by the blade reaching the dermis. In the CL condition, as in clinical use, the lance device touched the skin and the blade reached a depth of 1 mm from the skin surface, i.e., the depth of the dermis at which the Aβ fibers are located.
Results The ERP N2 latencies for click-only (122 ± 2.9 ms) and click+lance (121 ± 6.5 ms) were significantly shorter than that for WN+lance (154 ± 7.1 ms). The ERP P2 latency for click-only (191 ± 11.3 ms) was significantly shorter than those for click+lance (249 ± 18.6 ms) and WN+lance (253 ± 11.2 ms). This suggests that the click sound shortens the N2 latency of the ERP evoked by skin puncture. The ERP N2 latencies for SL, CL, ES, and IES were 146 ± 8.3, 149 ± 9.9, 148 ± 13.1, and 197 ± 21.2 ms, respectively. The ERP P2 latencies were 250 ± 18.2, 251 ± 14.1, 237 ± 26.3, and 294 ± 30.0 ms, respectively. The ERP latency for SL was significantly shorter than that for IES and was similar to that for ES. This suggests that the penetration force generated by the blade of the lance device activates the Aβ fibers, consequently shortening the ERP latency. Conclusions Lance ERP may reflect the activation of Aβ fibers rather than Aδ fibers. A pain index that correctly and reliably reflects nociceptive processing must be developed to improve pain assessment and management in neonates.
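Within-subject latency contrasts like these are conventionally tested with a paired t test. A minimal stdlib-only sketch of the statistic such comparisons rest on; the latency values in the example are hypothetical, not taken from the study:

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired (within-subject) t statistic and degrees of freedom.

    x and y hold one measurement per subject under two conditions;
    the test is run on the per-subject differences.
    """
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))   # stdev() is the sample SD (n - 1)
    return t, n - 1

# Hypothetical N2 latencies (ms) for four subjects under two conditions
t, df = paired_t([150, 148, 155, 160], [145, 149, 150, 152])
```

The returned t would then be compared against the t distribution with `df` degrees of freedom to obtain a p value.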
Affiliation(s)
- Yui Shiroshita
- Department of Nursing Science, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Hikari Kirimoto
- Department of Sensorimotor Neuroscience, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Tatsunori Watanabe
- Department of Sensorimotor Neuroscience, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Keisuke Yunoki
- Department of Sensorimotor Neuroscience, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
- Ikuko Sobue
- Department of Nursing Science, Graduate School of Biomedical and Health Sciences, Hiroshima University, Hiroshima, Japan
37
Vander Werff KR, Niemczak CE, Morse K. Informational Masking Effects of Speech Versus Nonspeech Noise on Cortical Auditory Evoked Potentials. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:4014-4029. [PMID: 34464537 DOI: 10.1044/2021_jslhr-21-00048] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Purpose Background noise has been categorized as energetic masking, due to spectrotemporal overlap of the target and masker on the auditory periphery, or informational masking, due to cognitive-level interference from relevant content such as speech. The effects of masking on cortical and sensory auditory processing can be studied objectively with the cortical auditory evoked potential (CAEP). However, whether effects on neural response morphology are due to energetic spectrotemporal differences or informational content is not fully understood. The current multi-experiment series was designed to assess the effects of speech versus nonspeech maskers on the neural encoding of speech information in the central auditory system, specifically the effects of speech babble maskers varying in talker number. Method CAEPs were recorded from normal-hearing young adults in response to speech syllables in the presence of energetic maskers (white or speech-shaped noise) and varying amounts of informational masking (speech babble maskers). The primary manipulation of informational masking was the number of talkers in the speech babble, and CAEP results were compared to those for nonspeech maskers with different temporal and spectral characteristics. Results Even when nonspeech noise maskers were spectrally shaped and temporally modulated to match the speech babble maskers, notable changes in the typical morphology of the CAEP in response to speech stimuli were identified in the presence of both primarily energetic maskers and speech babble maskers with varying numbers of talkers. Conclusions While differences in CAEP outcomes did not reach significance by number of talkers, neural components were significantly affected by speech babble maskers compared to nonspeech maskers.
These results suggest an informational masking influence on neural encoding of speech information at the sensory cortical level of auditory processing, even without active participation on the part of the listener.
Affiliation(s)
- Christopher E Niemczak
- Department of Communication Sciences and Disorders, Syracuse University, NY
- Geisel School of Medicine at Dartmouth, Hanover, NH
- Kenneth Morse
- Department of Communication Sciences and Disorders, Syracuse University, NY
- Division of Communication Sciences and Disorders, West Virginia University, Morgantown
38
Relationship between objective measures of hearing discrimination elicited by non-linguistic stimuli and speech perception in adults. Sci Rep 2021; 11:19554. [PMID: 34599244 PMCID: PMC8486784 DOI: 10.1038/s41598-021-98950-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2021] [Accepted: 09/14/2021] [Indexed: 11/08/2022] Open
Abstract
Some people using hearing aids have difficulty discriminating between sounds even though the sounds are audible. For them, cochlear implants may provide greater benefits for speech perception. One method to identify people with auditory discrimination deficits is to measure discrimination thresholds using spectral ripple noise (SRN). Previous studies have shown that behavioral discrimination of SRN was associated with speech perception, and that behavioral discrimination was also related to cortical acoustic change complex (ACC) responses. We hypothesized that cortical ACC responses could be directly related to speech perception. In this study, we investigated the relationship between subjective speech perception and objective ACC responses measured using SRNs. We tested 13 normal-hearing adults and 10 hearing-impaired adults using hearing aids. Our results showed that behavioral SRN discrimination was correlated with speech perception in quiet and in noise. Furthermore, cortical ACC responses to phase changes in the SRN were significantly correlated with speech perception. Audibility was a major predictor of discrimination and speech perception, but direct measures of auditory discrimination could contribute information about a listener's sensitivity to the acoustic cues that underpin speech perception. The findings lend support for the potential application of measuring ACC responses to SRNs for identifying people who may benefit from cochlear implants.
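Spectral ripple stimuli of this kind impose a sinusoidal level variation across log frequency on a broadband carrier. A rough sketch of one common recipe, a random tone complex with the ripple applied in dB along the log-frequency axis; all parameter names and defaults are illustrative assumptions, not the stimulus specification of the cited study:

```python
import numpy as np

def spectral_ripple_noise(fs=44100, dur=0.5, ripples_per_octave=1.0,
                          depth_db=20.0, f_lo=100.0, f_hi=8000.0,
                          phase=0.0, n_tones=200, seed=0):
    """Tone-complex spectral ripple noise.

    Components are drawn log-uniformly between f_lo and f_hi, and their
    levels (in dB) follow a sinusoid along the log-frequency axis with
    the requested ripple density, depth, and starting phase.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(fs * dur)) / fs
    freqs = f_lo * (f_hi / f_lo) ** rng.random(n_tones)   # log-uniform spacing
    octaves = np.log2(freqs / f_lo)
    level_db = (depth_db / 2) * np.sin(2 * np.pi * ripples_per_octave * octaves + phase)
    amps = 10.0 ** (level_db / 20.0)
    phases = rng.uniform(0, 2 * np.pi, n_tones)           # random component phases
    sig = (amps[:, None] * np.sin(2 * np.pi * freqs[:, None] * t
                                  + phases[:, None])).sum(axis=0)
    return sig / np.abs(sig).max()                        # normalize to +/- 1
```

Shifting `phase` (e.g., by pi) inverts the spectral peaks and valleys, which is the kind of change a phase-reversal discrimination or ACC paradigm would detect.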
39
Skoe E, Krizman J, Spitzer ER, Kraus N. Auditory Cortical Changes Precede Brainstem Changes During Rapid Implicit Learning: Evidence From Human EEG. Front Neurosci 2021; 15:718230. [PMID: 34483831 PMCID: PMC8415395 DOI: 10.3389/fnins.2021.718230] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2021] [Accepted: 07/20/2021] [Indexed: 11/28/2022] Open
Abstract
The auditory system is sensitive to stimulus regularities such as frequently occurring sounds and sound combinations. Evidence of regularity detection can be seen in how neurons across the auditory network, from brainstem to cortex, respond to the statistical properties of the soundscape, and in the rapid learning of recurring patterns in their environment by children and adults. Although rapid auditory learning is presumed to involve functional changes to the auditory network, the chronology and directionality of changes are not well understood. To study the mechanisms by which this learning occurs, auditory brainstem and cortical activity was simultaneously recorded via electroencephalogram (EEG) while young adults listened to novel sound streams containing recurring patterns. Neurophysiological responses were compared between easier and harder learning conditions. Collectively, the behavioral and neurophysiological findings suggest that cortical and subcortical structures each provide distinct contributions to auditory pattern learning, but that cortical sensitivity to stimulus patterns likely precedes subcortical sensitivity.
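Evoked responses of the kind recorded here are conventionally extracted by averaging stimulus-locked EEG epochs. A minimal sketch, assuming a single channel and precomputed event sample indices; real pipelines add filtering and artifact rejection:

```python
import numpy as np

def erp_average(eeg, events, fs, tmin=-0.1, tmax=0.5):
    """Average stimulus-locked epochs from a continuous single-channel EEG.

    eeg    : 1-D array of samples
    events : stimulus-onset sample indices
    fs     : sampling rate (Hz)
    tmin   : epoch start relative to onset (s, negative => pre-stimulus)
    tmax   : epoch end relative to onset (s)
    """
    n0, n1 = int(tmin * fs), int(tmax * fs)   # n0 must be negative for a baseline
    epochs = []
    for ev in events:
        if ev + n0 < 0 or ev + n1 > len(eeg):
            continue                          # drop epochs running off the record
        ep = eeg[ev + n0 : ev + n1].astype(float)
        ep -= ep[:-n0].mean()                 # baseline-correct on pre-stimulus samples
        epochs.append(ep)
    return np.mean(epochs, axis=0)            # grand average across epochs
```

Averaging cancels activity that is not time-locked to the stimulus, so the residual waveform reflects the evoked components (e.g., N1, P2) whose latencies and amplitudes are compared across conditions.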
Affiliation(s)
- Erika Skoe
- Department of Speech, Language and Hearing Sciences, Connecticut Institute for Brain and Cognitive Sciences, University of Connecticut, Storrs, CT, United States
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, IL, United States
- Emily R Spitzer
- Department of Otolaryngology, Head and Neck Surgery, New York University Grossman School of Medicine, New York, NY, United States
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, IL, United States; Department of Neurobiology and Physiology, Northwestern University, Evanston, IL, United States; Department of Otolaryngology, Northwestern University, Evanston, IL, United States; Institute for Neuroscience, Northwestern University, Evanston, IL, United States
40
Hemakom A, Jitwiriyanont S, Rugchatjaroen A, Israsena P. The development of Thai monosyllabic word and picture lists applicable to interactive speech audiometry in preschoolers. CLINICAL LINGUISTICS & PHONETICS 2021; 35:809-828. [PMID: 33146053 DOI: 10.1080/02699206.2020.1830301] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/12/2020] [Revised: 09/25/2020] [Accepted: 09/26/2020] [Indexed: 06/11/2023]
Abstract
Interactive speech audiometry is the assessment of speech comprehension and phonological discrimination through automated means. For such assessments to succeed in preschoolers, the employed list of words and pictures must be easily recognized both linguistically and visually; that is, the children must be able to easily associate the sound they hear with the picture they see, with a high degree of certainty. To this end, a Thai monosyllabic word and picture list called NCU-20 (NECTEC-CU-20) is proposed. The word lists for Thai vowel and consonant hearing tests are designed with an awareness of phonetic environments. Regarding Thai vowels, both monophthongs and diphthongs, with all qualities and quantities, are examined. Initial consonants are categorized based on places and manners of articulation. The effectiveness of the list is objectively and subjectively verified using the Thai Textbook Corpus, the Thai National Corpus, Zipf scores, a listening test of preschoolers with normal hearing, and our proposed ranking systems, referred to as Tier-1st, Tier-3/3, and Overall Tier. The final suggested word and picture list comprises 45 items (words) covering 35 vowel and consonant groups in the Thai language.
Affiliation(s)
- Apit Hemakom
- National Electronics and Computer Technology Center, Pathumthani, Thailand
- Sujinat Jitwiriyanont
- Department of Linguistics and Southeast Asian Linguistics Research Unit, Faculty of Arts, Chulalongkorn University, Bangkok, Thailand
- Pasin Israsena
- National Electronics and Computer Technology Center, Pathumthani, Thailand
41
Informational Masking Effects of Similarity and Uncertainty on Early and Late Stages of Auditory Cortical Processing. Ear Hear 2021; 42:1006-1023. [PMID: 33416259 DOI: 10.1097/aud.0000000000000997] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
PURPOSE Understanding speech against a background of other people talking is a difficult listening situation for hearing-impaired individuals, and even for those with normal hearing. Speech-on-speech masking is known to contribute to increased perceptual difficulty relative to nonspeech background noise because of informational masking provided over and above the effects of energetic masking. While informational masking research has identified factors of similarity and uncertainty between target and masker that contribute to reduced behavioral performance in speech background noise, critical gaps in knowledge remain, including the underlying neural-perceptual processes. By systematically manipulating aspects of acoustic similarity and uncertainty in the same auditory paradigm, the current study examined the time course of these informational masking effects and objectively quantified them at both early and late stages of auditory processing using auditory evoked potentials (AEPs). METHOD Thirty participants were included in a cross-sectional repeated-measures design. Target-masker similarity was manipulated by varying the linguistic/phonetic similarity (i.e., language) of the talkers in the background. Specifically, four levels representing hypothesized increasing levels of informational masking were implemented: (1) no masker (quiet); (2) Mandarin; (3) Dutch; and (4) English. Stimulus uncertainty was manipulated by task complexity, specifically the presentation of the target-to-target interval (TTI) in the auditory evoked paradigm. Participants had to discriminate between English word stimuli (/bæt/ and /pæt/) presented in an oddball paradigm under each masker condition, pressing buttons in response to either the target or the standard stimulus. Responses were recorded simultaneously for P1-N1-P2 (standard waveform) and P3 (target waveform). This design allowed for simultaneous recording of multiple AEP peaks, as well as accuracy, reaction time, and d' behavioral discrimination for button-press responses.
RESULTS Several trends in AEP components were consistent with effects of increasing linguistic/phonetic similarity and stimulus uncertainty. All babble maskers significantly affected outcomes compared to quiet. In addition, the native language English masker had the largest effect on outcomes in the AEP paradigm, including reduced P3 amplitude and area, as well as decreased accuracy and d' behavioral discrimination to target word responses. AEP outcomes for the Mandarin and Dutch maskers, however, were not significantly different across any measured component. Latency outcomes for both N1 and P3 also supported an effect of stimulus uncertainty, consistent with increased processing time related to greater task complexity. An unanticipated result was the absence of the interaction of linguistic/phonetic similarity and stimulus uncertainty. CONCLUSIONS Observable effects of both similarity and uncertainty were evidenced at a level of the P3 more than the earlier N1 level of auditory cortical processing suggesting that higher-level active auditory processing may be more sensitive to informational masking deficits. The lack of significant interaction between similarity and uncertainty at either level of processing suggests that these informational masking factors operated independently. Speech babble maskers across languages altered AEP component measures, behavioral detection, and reaction time. Specifically, this occurred when the babble was in the native/same language as the target, while the effects of foreign language maskers did not differ. The objective results from this study provide a foundation for further investigation of how the linguistic content of target and masker and task difficulty contribute to difficulty understanding speech-in-noise.
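The d' discrimination measure reported above comes from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A small stdlib-only sketch with a loglinear correction for extreme rates; the counts in the example are hypothetical, not data from the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = Z(hit rate) - Z(false-alarm rate).

    Adding 0.5 to each cell (loglinear correction) keeps the z-transform
    finite when an observed rate would otherwise be exactly 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf          # inverse standard-normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one listener in one masker condition
sensitivity = d_prime(45, 5, 8, 42)
```

Unlike raw accuracy, d' separates sensitivity from response bias, which is why oddball button-press paradigms often report it alongside accuracy and reaction time.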
42
Zhang Y, Pattamadilok C, Lau DKY, Bakhtiar M, Yim LY, Leung KY, Zhang C. Early Auditory Event-Related Potentials Are Modulated by Alphabetic Literacy Skills in Logographic Chinese Readers. Front Psychol 2021; 12:663166. [PMID: 34393900 PMCID: PMC8358453 DOI: 10.3389/fpsyg.2021.663166] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2021] [Accepted: 07/09/2021] [Indexed: 11/17/2022] Open
Abstract
The acquisition of an alphabetic orthography transforms speech processing in the human brain. Behavioral evidence shows that phonological awareness, as assessed by meta-phonological tasks like phoneme judgment, is enhanced by alphabetic literacy acquisition. The current study investigates the time course of the neurocognitive operations underlying this enhancement, as revealed by event-related potentials (ERPs). Chinese readers with and without proficiency in Jyutping, a Romanization system of Cantonese, were recruited for an auditory onset phoneme judgment task; their behavioral responses and the elicited ERPs were examined. Proficient readers of Jyutping achieved higher response accuracy and exhibited more negative-going ERPs in three early time windows corresponding to the P1, N1, and P2 components. The phonological mismatch negativity component was sensitive to both onset and rhyme mismatch in the speech stimuli, but it was not modulated by alphabetic literacy skills. The sustained negativity in the P1-N1-P2 time windows is interpreted as reflecting enhanced phonetic/phonological processing, or attentional/awareness modulation, associated with alphabetic literacy and phonological awareness skills.
Affiliation(s)
- Yubin Zhang
- Department of Linguistics, University of Southern California, Los Angeles, CA, United States
- Chotiga Pattamadilok
- Laboratoire Parole et Langage (LPL), CNRS, Aix Marseille University, Aix-en-Provence, France
- Dustin Kai-Yan Lau
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Mehdi Bakhtiar
- Unit of Human Communication, Development, and Information Sciences, The University of Hong Kong, Hong Kong, China
- Long-Ying Yim
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Ka-Yui Leung
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Caicai Zhang
- Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hong Kong, China
- Research Centre for Language, Cognition, and Neuroscience, The Hong Kong Polytechnic University, Hong Kong, China
43
Lin Q, Chang Y, Liu P, Jones JA, Chen X, Peng D, Chen M, Wu C, Liu H. Cerebellar Continuous Theta Burst Stimulation Facilitates Auditory-Vocal Integration in Spinocerebellar Ataxia. Cereb Cortex 2021; 32:455-466. [PMID: 34240142 DOI: 10.1093/cercor/bhab222] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Clinical studies have shown the efficacy of transcranial magnetic stimulation in treating movement disorders in patients with spinocerebellar ataxia (SCA). However, whether similar effects occur for their speech motor disorders remains largely unknown. The present event-related potential study investigated whether and how abnormalities in auditory-vocal integration associated with SCA can be modulated by neuronavigated continuous theta burst stimulation (c-TBS) over the right cerebellum. After receiving active or sham cerebellar c-TBS, 19 patients with SCA were instructed to produce sustained vowels while hearing their voice unexpectedly pitch-shifted by ±200 cents. Behaviorally, active cerebellar c-TBS led to smaller magnitudes of vocal compensation for pitch perturbations than sham stimulation. Parallel modulatory effects were observed at the cortical level, reflected by increased P1 and P2 responses but decreased N1 responses following active cerebellar c-TBS. Moreover, smaller magnitudes of vocal compensation were predicted by larger amplitudes of the cortical P1 and P2 responses. These findings provide the first neurobehavioral evidence that c-TBS over the right cerebellum modulates abnormal auditory-motor integration for vocal pitch regulation in patients with SCA, offering a starting point for using cerebellar c-TBS to treat the speech motor disorders associated with SCA.
Affiliation(s)
- Qing Lin
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Yichen Chang
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Peng Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jeffery A Jones
- Psychology Department and Laurier Centre for Cognitive Neuroscience, Wilfrid Laurier University, Waterloo, ON, Canada
- Xi Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Danhua Peng
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Mingyuan Chen
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Chao Wu
- Department of Neurology, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Diagnosis and Treatment of Major Neurological Diseases, National Key Clinical Department and Key Discipline of Neurology, Sun Yat-sen University, Guangzhou, China
- Hanjun Liu
- Department of Rehabilitation Medicine, The First Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guangdong Provincial Key Laboratory of Brain Function and Disease, Zhongshan School of Medicine, Sun Yat-sen University, Guangzhou, China

44
Xie Z, Stakhovskaya O, Goupell MJ, Anderson S. Aging Effects on Cortical Responses to Tones and Speech in Adult Cochlear-Implant Users. J Assoc Res Otolaryngol 2021; 22:719-740. [PMID: 34231111 DOI: 10.1007/s10162-021-00804-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2020] [Accepted: 05/19/2021] [Indexed: 11/29/2022] Open
Abstract
Age-related declines in auditory temporal processing contribute to speech understanding difficulties of older adults. These temporal processing deficits have been established primarily among acoustic-hearing listeners, but the peripheral and central contributions are difficult to separate. This study recorded cortical auditory evoked potentials from younger to middle-aged (< 65 years) and older (≥ 65 years) cochlear-implant (CI) listeners to assess age-related changes in temporal processing, where cochlear processing is bypassed in this population. Aging effects were compared to age-matched normal-hearing (NH) listeners. Advancing age was associated with prolonged P2 latencies in both CI and NH listeners in response to a 1000-Hz tone or a syllable /da/, and with prolonged N1 latencies in CI listeners in response to the syllable. Advancing age was associated with larger N1 amplitudes in NH listeners. These age-related changes in latency and amplitude were independent of stimulus presentation rate. Further, CI listeners exhibited prolonged N1 and P2 latencies and smaller P2 amplitudes than NH listeners. Thus, aging appears to degrade some aspects of auditory temporal processing when peripheral-cochlear contributions are largely removed, suggesting that changes beyond the cochlea may contribute to age-related temporal processing deficits.
Affiliation(s)
- Zilong Xie
- Department of Hearing and Speech, University of Kansas Medical Center, Kansas City, KS, 66160, USA
- Olga Stakhovskaya
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, 20742, USA

45
Hanenberg C, Schlüter MC, Getzmann S, Lewald J. Short-Term Audiovisual Spatial Training Enhances Electrophysiological Correlates of Auditory Selective Spatial Attention. Front Neurosci 2021; 15:645702. [PMID: 34276281 PMCID: PMC8280319 DOI: 10.3389/fnins.2021.645702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/23/2020] [Accepted: 06/09/2021] [Indexed: 11/13/2022] Open
Abstract
Audiovisual cross-modal training has been proposed as a tool to improve human spatial hearing. Here, we investigated training-induced modulations of event-related potential (ERP) components that have been associated with processes of auditory selective spatial attention when a speaker of interest has to be localized in a multiple-speaker ("cocktail-party") scenario. Forty-five healthy participants were tested, including younger (19-29 years; n = 21) and older (66-76 years; n = 24) age groups. Three conditions of short-term training (duration 15 min) were compared, requiring localization of non-speech targets under "cocktail-party" conditions with either (1) synchronous presentation of co-localized auditory-target and visual stimuli (audiovisual-congruency training), (2) immediate visual feedback on correct or incorrect localization responses (visual-feedback training), or (3) presentation of spatially incongruent auditory-target and visual stimuli presented at random positions with synchronous onset (control condition). Prior to and after training, participants were tested in an auditory spatial attention task (15 min), requiring localization of a predefined spoken word out of three distractor words, which were presented with synchronous stimulus onset from different positions. Peaks of ERP components were analyzed with a specific focus on the N2, which is known to be a correlate of auditory selective spatial attention. N2 amplitudes were significantly larger after audiovisual-congruency training compared with the remaining training conditions for younger, but not older, participants. Also, at the time of the N2, distributed source analysis revealed an enhancement of neural activity induced by audiovisual-congruency training in dorsolateral prefrontal cortex (Brodmann area 9) for the younger group. These findings suggest that cross-modal processes induced by audiovisual-congruency training under "cocktail-party" conditions at a short time scale resulted in an enhancement of correlates of auditory selective spatial attention.
Affiliation(s)
- Stephan Getzmann
- Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Jörg Lewald
- Faculty of Psychology, Ruhr University Bochum, Bochum, Germany

46
Alemi R, Nozaradan S, Lehmann A. Free-Field Cortical Steady-State Evoked Potentials in Cochlear Implant Users. Brain Topogr 2021; 34:664-680. [PMID: 34185222 DOI: 10.1007/s10548-021-00860-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2021] [Accepted: 06/18/2021] [Indexed: 11/25/2022]
Abstract
Auditory steady-state evoked potentials (SS-EPs) are phase-locked neural responses to periodic stimuli, believed to reflect specific neural generators. As an objective measure, steady-state responses have been used in various clinical settings, including measuring hearing thresholds in normal-hearing and hearing-impaired subjects. Recent studies support recording these responses as part of the cochlear implant (CI) device-fitting procedure. Given these potential benefits, the goals of the present study were to assess the feasibility of recording free-field SS-EPs in CI users and to compare their characteristics between CI users and controls. Using a recently developed dual-frequency tagging method, we attempted to record subcortical and cortical SS-EPs from adult CI users and controls, and obtained reliable subcortical and cortical SS-EPs in the control group. Independent component analysis (ICA) was used to remove CI stimulation artifacts, yet the subcortical responses of several CI users remained heavily contaminated by these artifacts. Consequently, only cortical SS-EPs were compared between groups, and these were larger in the controls. The lower amplitude of cortical SS-EPs in CI users might indicate a reduction in neural synchrony, evoked by the modulation rate of the auditory input, across different neural assemblies in the auditory pathway. The brain topographies of cortical auditory SS-EPs, the time course of cortical responses, and the reconstructed cortical maps were highly similar between groups, confirming their neural origin and the possibility of obtaining such responses in CI recipients. As for subcortical SS-EPs, our results highlight the need for sophisticated denoising algorithms to pinpoint and remove artifactual components from the biological response.
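The ICA-based artifact removal described in this abstract rests on isolating an artifact source and projecting it out of the recorded mixture. Below is a minimal numpy sketch of that projection step only, using synthetic signals and a known artifact template standing in for the blindly estimated ICA component; all waveforms and parameters are illustrative, not from the study:

```python
import numpy as np

fs = 1000.0                        # sampling rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)

rng = np.random.default_rng(1)
neural = np.sin(2 * np.pi * 4.0 * t)                      # toy cortical SS-EP
artifact = np.where((t * 250.0) % 1.0 < 0.5, 1.0, -1.0)   # toy pulsatile CI artifact

# One recorded channel: neural signal + strong artifact + sensor noise
channel = neural + 5.0 * artifact + 0.1 * rng.normal(size=t.size)

# Least-squares projection: estimate the artifact's weight in the channel,
# then subtract its fitted contribution (real ICA does this without a
# template, by first estimating the artifact waveform itself)
w = channel @ artifact / (artifact @ artifact)
cleaned = channel - w * artifact

# The cleaned trace is orthogonal to the artifact but preserves the neural part
r_artifact = np.corrcoef(cleaned, artifact)[0, 1]
r_neural = np.corrcoef(cleaned, neural)[0, 1]
```

By construction the least-squares residual is orthogonal to the regressor, so `r_artifact` is near zero while `r_neural` stays close to 1; full ICA pipelines additionally have to identify which estimated components are artifactual before removing them, which is the hard part the abstract alludes to for subcortical responses.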
Affiliation(s)
- Razieh Alemi
- Faculty of Medicine, Department of Otolaryngology, McGill University, Montreal, QC, Canada
- Centre for Research On Brain, Language & Music (CRBLM), Montreal, Canada
- International Laboratory for Brain, Music & Sound Research (BRAMS), Montreal, QC, Canada
- Sylvie Nozaradan
- Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Ottignies-Louvain-la-Neuve, Belgium
- Alexandre Lehmann
- Faculty of Medicine, Department of Otolaryngology, McGill University, Montreal, QC, Canada
- Centre for Research On Brain, Language & Music (CRBLM), Montreal, Canada
- International Laboratory for Brain, Music & Sound Research (BRAMS), Montreal, QC, Canada

47
Kamita MK, Silva LAF, Matas CG. Cortical auditory evoked potentials in autism spectrum disorder: a systematic review. Codas 2021; 33:e20190207. [PMID: 34037100 DOI: 10.1590/2317-1782/20202019207] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2019] [Accepted: 04/22/2020] [Indexed: 11/22/2022] Open
Abstract
PURPOSE To identify and analyze the characteristic findings of cortical auditory evoked potentials (CAEP) in children and/or adolescents with autism spectrum disorder (ASD) compared with typically developing peers, through a systematic literature review. RESEARCH STRATEGIES Based on the formulation of a research question, a bibliographic survey was carried out in seven databases (Web of Science, PubMed, Cochrane Library, Lilacs, SciELO, Science Direct, and Google Scholar) with the following descriptors: autism spectrum disorder (transtorno do espectro autista), autistic disorder (transtorno autístico), evoked potentials, auditory (potenciais evocados auditivos), event-related potentials, P300 (potencial evocado P300), and child (criança). This review was registered in PROSPERO under number 118751. SELECTION CRITERIA Articles published between 2007 and 2019 were selected, with no language restriction. DATA ANALYSIS The latency and amplitude characteristics of the P1, N1, P2, N2, and P3 components of the CAEP were analyzed. RESULTS 193 studies were located; of these, 15 original articles met the inclusion criteria for this study. Although no consistent response pattern could be identified for the P1, N1, P2, and N2 components, the results of the selected studies demonstrated that individuals with ASD may present decreased amplitude and increased latency of the P3 component. CONCLUSION Individuals with ASD may present different responses for the CAEP components, with decreased amplitude and increased latency of the P3 component being the most common findings.
Affiliation(s)
- Mariana Keiko Kamita
- Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina, Universidade de São Paulo - USP - São Paulo (SP), Brasil
- Liliane Aparecida Fagundes Silva
- Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina, Universidade de São Paulo - USP - São Paulo (SP), Brasil
- Carla Gentile Matas
- Departamento de Fisioterapia, Fonoaudiologia e Terapia Ocupacional, Faculdade de Medicina, Universidade de São Paulo - USP - São Paulo (SP), Brasil

48
Central Auditory Nervous System Stimulation through the Cochlear Implant Use and Its Behavioral Impacts: A Longitudinal Study of Case Series. Case Rep Otolaryngol 2021; 2021:8888450. [PMID: 33996165 PMCID: PMC8096579 DOI: 10.1155/2021/8888450] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2020] [Accepted: 04/22/2021] [Indexed: 11/18/2022] Open
Abstract
The purpose of this study was to investigate, over a period of five years, the cortical maturation of the central auditory pathways and its impact on the auditory and oral language development of children with and without effective use of a cochlear implant (CI). A case series study was conducted with seven children who were CI users and seven children with normal hearing, age- and gender-matched to the CI users. The assessment was performed with long-latency auditory evoked potentials and behavioral protocols for auditory and oral language skills. The results showed a pronounced decrease in P1 latency in all CI users within the first nine months. Over five years, the five children with effective CI use presented a decrease or stabilization of P1 latency and a gradual development of auditory and oral language skills, although, for most of these children, the electrophysiological and behavioral results remained poorer than those of their hearing peers. The two children who stopped effective use of the CI after the first year of activation showed worsened auditory and oral language behavioral skills and increased P1 latency. A negative correlation was observed between the behavioral measures and P1 latency, making the P1 component an important clinical resource capable of indexing cortical maturation and behavioral evolution.
49
Miller SE, Graham J, Schafer E. Auditory Sensory Gating of Speech and Nonspeech Stimuli. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:1404-1412. [PMID: 33755510 DOI: 10.1044/2020_jslhr-20-00535] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose Auditory sensory gating is a neural measure of inhibition and is typically measured with a click or tonal stimulus. This electrophysiological study examined whether stimulus characteristics, including the use of speech stimuli, affect auditory sensory gating indices. Method Auditory event-related potentials were elicited using natural speech, synthetic speech, and nonspeech stimuli in a traditional auditory gating paradigm in 15 adult listeners with normal hearing. Cortical responses were recorded at 64 electrode sites, and peak amplitudes and latencies to the different stimuli were extracted. Individual data were analyzed using repeated-measures analysis of variance. Results Significant gating of the P1-N1-P2 peaks was observed for all stimulus types. N1-P2 cortical responses were affected by stimulus type, with significantly less neural inhibition of the P2 response for natural speech than for nonspeech and synthetic speech. Conclusions Auditory sensory gating responses can be measured using speech and nonspeech stimuli in listeners with normal hearing. The results indicate that the amount of gating and neural inhibition observed is affected by the spectrotemporal characteristics of the stimuli used to evoke the neural responses.
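In paired-stimulus gating paradigms like the one used here, inhibition is commonly quantified as the ratio of the response amplitude to the second stimulus over the first. A tiny sketch with hypothetical peak values (illustrative only, not taken from the study):

```python
# Hypothetical P2 peak amplitudes (in µV) to the first (S1) and second (S2)
# stimulus of a pair; the values below are illustrative only
s1_p2 = 4.8
s2_p2 = 1.6

gating_ratio = s2_p2 / s1_p2                 # lower ratio = stronger inhibition
suppression = (1.0 - gating_ratio) * 100.0   # percent suppression of S2

print(round(gating_ratio, 2), round(suppression, 1))  # 0.33 66.7
```

A ratio near 1 would indicate little inhibition of the repeated stimulus, which is the pattern the study reports as being relatively stronger for natural speech at the P2 peak.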
Affiliation(s)
- Sharon E Miller
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton
- Jessica Graham
- Division of Audiology, St. Louis Children's Hospital, MO
- Erin Schafer
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton

50
Neklyudova AK, Portnova GV, Rebreikina AB, Voinova VY, Vorsanova SG, Iourov IY, Sysoeva OV. 40-Hz Auditory Steady-State Response (ASSR) as a Biomarker of Genetic Defects in the SHANK3 Gene: A Case Report of 15-Year-Old Girl with a Rare Partial SHANK3 Duplication. Int J Mol Sci 2021; 22:ijms22041898. [PMID: 33673024 PMCID: PMC7917917 DOI: 10.3390/ijms22041898] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 01/26/2021] [Accepted: 02/09/2021] [Indexed: 12/02/2022] Open
Abstract
SHANK3 encodes a scaffold protein involved in postsynaptic receptor density in glutamatergic synapses, including those of parvalbumin (PV)+ inhibitory neurons, the key players in the generation of sensory gamma oscillations such as the 40-Hz auditory steady-state response (ASSR). However, the 40-Hz ASSR has not been studied in relation to SHANK3 functioning. Here, we present a 15-year-old girl (SH01) with a previously unreported duplication of the first seven exons of the SHANK3 gene (22q13.33). SH01's electroencephalographic (EEG) responses to 40-Hz click trains (500 ms duration, presented binaurally with inter-trial intervals of 500–800 ms) were compared with those of typically developing (TD) children (n = 32). SH01 was diagnosed with mild mental retardation and learning disabilities (F70.88), dysgraphia, dyslexia, and a smaller vocabulary than TD peers. Her clinical phenotype resembled that of previously described patients with 22q13.33 microduplications (approximately 30 reported so far). SH01 had mild autistic symptoms (below the threshold for an ASD diagnosis) and microcephaly. No seizures or MRI abnormalities were reported. While SH01 had a relatively preserved auditory event-related potential (ERP) with a slightly attenuated P1, her 40-Hz ASSR was completely absent, deviating significantly from the TD children's ASSR. The absence of the 40-Hz ASSR in this patient with a microduplication affecting the SHANK3 gene indicates deficient temporal resolution of the auditory system, which might underlie language problems and represent a neurophysiological biomarker of SHANK3 abnormalities.
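As context for the absent response reported above, the 40-Hz ASSR is typically quantified as the spectral amplitude at the stimulation rate in the trial-averaged EEG. A minimal numpy sketch on synthetic data (the epoch length matches the 500-ms click trains; sampling rate, amplitudes, and trial count are illustrative assumptions):

```python
import numpy as np

fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 0.5, 1 / fs)    # one 500-ms epoch, matching the click-train duration

# Synthetic data: 50 epochs, each a 2-µV 40-Hz steady-state component
# buried in unit-variance noise
rng = np.random.default_rng(0)
epochs = np.stack([2.0 * np.sin(2 * np.pi * 40.0 * t)
                   + rng.normal(0.0, 1.0, t.size)
                   for _ in range(50)])

# Averaging across trials suppresses non-phase-locked noise;
# the phase-locked ASSR survives
evoked = epochs.mean(axis=0)

# Amplitude spectrum of the averaged response
spectrum = 2.0 * np.abs(np.fft.rfft(evoked)) / t.size
freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

peak = freqs[np.argmax(spectrum)]
print(peak)  # the spectral peak falls at the 40-Hz stimulation rate
```

An "absent" ASSR, as in patient SH01, would show no such peak at 40 Hz despite averaging, even when transient ERP components are preserved.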
Affiliation(s)
- Anastasia K. Neklyudova
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Science, 117485 Moscow, Russia
- Galina V. Portnova
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Science, 117485 Moscow, Russia
- Anna B. Rebreikina
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Science, 117485 Moscow, Russia
- Victoria Yu Voinova
- Veltischev Research and Clinical Institute for Pediatrics of the Pirogov, Russian National Research Medical University, Ministry of Health of Russian Federation, 125412 Moscow, Russia
- Mental Health Research Center, 117152 Moscow, Russia
- Svetlana G. Vorsanova
- Veltischev Research and Clinical Institute for Pediatrics of the Pirogov, Russian National Research Medical University, Ministry of Health of Russian Federation, 125412 Moscow, Russia
- Mental Health Research Center, 117152 Moscow, Russia
- Ivan Y. Iourov
- Veltischev Research and Clinical Institute for Pediatrics of the Pirogov, Russian National Research Medical University, Ministry of Health of Russian Federation, 125412 Moscow, Russia
- Mental Health Research Center, 117152 Moscow, Russia
- Olga V. Sysoeva
- Laboratory of Human Higher Nervous Activity, Institute of Higher Nervous Activity and Neurophysiology, Russian Academy of Science, 117485 Moscow, Russia