1. Poe AA, Karawani H, Anderson S. Aging effects on the neural representation and perception of consonant transition cues. Hear Res 2024; 448:109034. PMID: 38781768; PMCID: PMC11156531; DOI: 10.1016/j.heares.2024.109034.
Abstract
Older listeners have difficulty processing the temporal cues that are important for word discrimination, and deficient processing may limit their ability to benefit from these cues. Here, we investigated aging effects on the perception and neural representation of the consonant transition and the factors that contribute to successful perception. To further understand the neural mechanisms underlying changes in processing from brainstem to cortex, we also examined the factors that contribute to exaggerated response amplitudes in cortex. We enrolled 30 younger and 30 older participants, all of whom met criteria for clinically normal hearing. Perceptual identification functions were obtained for the words BEAT and WHEAT on a 7-step continuum of consonant-transition duration. Auditory brainstem responses (ABRs) were recorded to click stimuli; frequency-following responses (FFRs) and cortical auditory-evoked potentials were recorded to the endpoints of the BEAT-WHEAT continuum. Perceptual identification of BEAT vs. WHEAT did not differ between younger and older listeners. However, both subcortical and cortical measures of neural representation showed age-group differences: FFR phase locking was lower, but cortical amplitudes (P1 and N1) were higher, in older compared to younger listeners. ABR wave I amplitude and FFR phase locking, but not audiometric thresholds, predicted early cortical amplitudes. Phase locking to the transition region and early cortical peak amplitudes (P1) predicted performance on the perceptual identification function. Overall, the results suggest that the neural representation of transition durations and cortical overcompensation may contribute to the ability to perceive transition-duration contrasts. Cortical overcompensation appears to be a maladaptive response to decreased neural firing/synchrony.
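Perceptual identification functions like the ones in this abstract are typically summarized by fitting a sigmoid across the continuum and reading off the category boundary (50% crossover) and slope. The sketch below is illustrative only (not the authors' analysis code, and the response proportions are hypothetical); it uses a simple logit-linearized fit:

```python
import numpy as np

def fit_identification_function(steps, prop_wheat):
    """Fit a logistic psychometric function to identification proportions.

    Linearizes the logistic via the logit transform, fits a line, and
    returns the 50% crossover (category boundary) and the slope.
    """
    p = np.clip(np.asarray(prop_wheat, dtype=float), 0.01, 0.99)
    logit = np.log(p / (1 - p))          # log-odds of a "WHEAT" response
    slope, intercept = np.polyfit(steps, logit, 1)
    boundary = -intercept / slope        # step at which p = 0.5
    return boundary, slope

# Hypothetical proportions of "WHEAT" responses along a 7-step
# transition-duration continuum (steeper slope = more categorical).
steps = np.arange(1, 8)
props = [0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98]
boundary, slope = fit_identification_function(steps, props)
```

A steeper fitted slope indicates sharper, more categorical use of the transition-duration cue.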
Affiliation(s)
- Abigail Anne Poe
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Hanin Karawani
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, USA
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, USA
2. Bidelman G, Sisson A, Rizzi R, MacLean J, Baer K. Myogenic artifacts masquerade as neuroplasticity in the auditory frequency-following response (FFR). bioRxiv [Preprint] 2024:2023.10.27.564446. PMID: 37961324; PMCID: PMC10634913; DOI: 10.1101/2023.10.27.564446.
Abstract
The frequency-following response (FFR) is an evoked potential that provides a "neural fingerprint" of complex sound encoding in the brain. FFRs have been widely used to characterize speech and music processing, experience-dependent neuroplasticity (e.g., learning, musicianship), and biomarkers for hearing and language-based disorders that distort receptive communication abilities. It is widely assumed that FFRs stem from a mixture of phase-locked neurogenic activity from brainstem and cortical structures along the hearing neuraxis. Here, we challenge this prevailing view by demonstrating that upwards of ~50% of the FFR can originate from a non-neural source: contamination from the vestigial startle reflex of the postauricular muscle (PAM). We first establish that PAM artifact is present in all ears, varies with electrode proximity to the muscle, and can be experimentally manipulated by directing listeners' eye gaze toward the ear of sound stimulation. We then show that this muscular noise easily confounds auditory FFRs, spuriously amplifying responses 3- to 4-fold with tandem PAM contraction and even explaining putative FFR enhancements observed in highly skilled musicians. Our findings expose a new and unrecognized myogenic source of the FFR that drives its large inter-subject variability and cast doubt on whether changes in the response typically attributed to neuroplasticity/pathology are solely of brain origin.
3. Colak H, Sendesen E, Turkyilmaz MD. Subcortical auditory system in tinnitus with normal hearing: insights from electrophysiological perspective. Eur Arch Otorhinolaryngol 2024. PMID: 38555317; DOI: 10.1007/s00405-024-08583-3.
Abstract
PURPOSE The mechanism of tinnitus remains poorly understood; however, studies have underscored the significance of the subcortical auditory system in tinnitus perception. In this study, our aim was to investigate the subcortical auditory system using electrophysiological measurements in individuals with tinnitus and normal hearing. Additionally, we aimed to assess speech-in-noise (SiN) perception to determine whether individuals with tinnitus exhibit SiN deficits despite having normal hearing thresholds. METHODS A total of 42 normal-hearing participants took part in the study: 22 individuals with chronic subjective tinnitus and 20 controls. We recorded the auditory brainstem response (ABR) and the speech-evoked frequency-following response (sFFR) from all participants. SiN perception was assessed using the Matrix test. RESULTS Our results revealed a significant prolongation of the O peak, which encodes sound offset in the sFFR, in the tinnitus group (p < 0.01). Greater non-stimulus-evoked activity was also found in individuals with tinnitus (p < 0.01). In the ABR, the tinnitus group showed reduced wave I amplitude and prolonged absolute wave I, III, and V latencies (p ≤ 0.02). Individuals with tinnitus also had poorer SiN perception than controls (p < 0.05). CONCLUSION The deficit in encoding sound offset may indicate an impaired inhibitory mechanism in tinnitus. The greater non-stimulus-evoked activity observed in the tinnitus group suggests increased neural noise at the subcortical level. Additionally, individuals with tinnitus may experience SiN deficits despite having a normal audiogram. Taken together, these findings suggest that the lack of inhibition and increased neural noise may be associated with tinnitus perception.
Affiliation(s)
- Hasan Colak
- Biosciences Institute, Newcastle University, Newcastle Upon Tyne, UK.
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey
4. Gransier R, Carlyon RP, Richardson ML, Middlebrooks JC, Wouters J. Artifact removal by template subtraction enables recordings of the frequency following response in cochlear-implant users. Sci Rep 2024; 14:6158. PMID: 38486005; PMCID: PMC10940306; DOI: 10.1038/s41598-024-56047-9.
Abstract
Electrically evoked frequency-following responses (eFFRs) provide insight into the phase-locking ability of the brainstem in cochlear-implant (CI) users. eFFRs can potentially be used to probe individual differences in the biological limitations on temporal encoding in the electrically stimulated auditory pathway, which can be inherent to the electrical stimulation itself and/or to the degenerative processes associated with hearing loss. One of the major challenges in measuring eFFRs in CI users is isolating the stimulation artifact from the neural response, as the two overlap in time and have similar frequency characteristics. Here we introduce a new artifact-removal method based on template subtraction that successfully removes the stimulation artifact from recordings in which CI users are stimulated with pulse trains of 128 to 300 pulses per second in a monopolar configuration. Our results show that, although artifact removal was successful in all CI users, the phase-locking ability of the brainstem to the different pulse rates, as assessed with the eFFR, differed substantially across participants. These results show that the eFFR can be measured free from artifacts in CI users and can be used to gain insight into individual differences in temporal processing of the electrically stimulated auditory pathway.
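Template subtraction, as named in this abstract, can be sketched in a deliberately simplified form: estimate the repeating artifact as the average pulse-aligned waveform and subtract it at every pulse onset. This is an illustrative reduction on synthetic data, not the published algorithm; averaging across pulses would also remove any neural component that is identical at every pulse, which the actual method must handle more carefully.

```python
import numpy as np

def subtract_artifact(recording, pulse_idx, template_len):
    """Remove a repeating stimulation artifact by template subtraction.

    The artifact template is estimated as the average waveform across
    all pulse-aligned segments, then subtracted at each pulse onset.
    """
    segs = np.stack([recording[i:i + template_len] for i in pulse_idx])
    template = segs.mean(axis=0)
    cleaned = recording.copy()
    for i in pulse_idx:
        cleaned[i:i + template_len] -= template
    return cleaned

# Synthetic check: a fixed biphasic artifact repeated at known pulse
# times should be removed almost exactly.
rec = np.zeros(1000)
artifact = np.array([5.0, -5.0, 3.0, -3.0, 1.0, -1.0, 0.5, -0.5, 0.0, 0.0])
pulses = np.arange(50, 960, 100)
for i in pulses:
    rec[i:i + len(artifact)] += artifact
cleaned = subtract_artifact(rec, pulses, len(artifact))
```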
Affiliation(s)
- Robin Gransier
- ExpORL, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Matthew L Richardson
- Department of Otolaryngology, University of California at Irvine, Irvine, CA, USA
- Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- John C Middlebrooks
- Department of Otolaryngology, University of California at Irvine, Irvine, CA, USA
- Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- Departments of Neurobiology and Behavior, Biomedical Engineering, Cognitive Sciences, University of California at Irvine, Irvine, CA, USA
- Jan Wouters
- ExpORL, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium.
5. Momtaz S, Bidelman GM. Effects of Stimulus Rate and Periodicity on Auditory Cortical Entrainment to Continuous Sounds. eNeuro 2024; 11:ENEURO.0027-23.2024. PMID: 38253583; PMCID: PMC10913036; DOI: 10.1523/eneuro.0027-23.2024.
Abstract
The neural mechanisms underlying exogenous coding of, and neural entrainment to, repetitive auditory stimuli have seen a recent surge of interest. However, few studies have characterized how parametric changes in stimulus presentation alter entrained responses. We examined the degree to which the brain entrains to repeated speech (i.e., /ba/) and nonspeech (i.e., click) sounds using phase-locking value (PLV) analysis applied to multichannel human electroencephalogram (EEG) data. Passive cortico-acoustic tracking was investigated in N = 24 normal young adults using EEG source analyses that isolated neural activity stemming from both auditory temporal cortices. We parametrically manipulated the rate and periodicity of repetitive, continuous speech and click stimuli to investigate how speed and jitter in ongoing sound streams affect oscillatory entrainment. Neuronal synchronization to speech was enhanced at 4.5 Hz (the putative universal rate of speech) and showed a pattern distinct from that of clicks, particularly at higher rates. PLV to speech decreased with increasing jitter but remained superior to clicks. Surprisingly, PLV entrainment to clicks was invariant to periodicity manipulations. Our findings provide evidence that the brain's neural entrainment to complex sounds is enhanced and more sensitive when processing speech-like stimuli, even at the syllable level, relative to nonspeech sounds. The fact that this specialization is apparent even under passive listening suggests a priority of the auditory system for synchronizing to behaviorally relevant signals.
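The phase-locking value used as the entrainment metric here measures inter-trial phase consistency: 1 means perfectly reproducible phase across trials, values near 0 mean random phase. A minimal single-channel sketch (assuming band-limited single-trial signals; this is not the authors' pipeline):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(trials):
    """Inter-trial phase-locking value (PLV) at each time point.

    trials: array (n_trials, n_samples) of band-limited signals.
    PLV is the magnitude of the mean unit phase vector across trials.
    """
    phases = np.angle(hilbert(trials, axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))

# Demo: 50 trials of a 4.5 Hz sinusoid, either phase-locked across
# trials or with a random phase offset per trial.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500, endpoint=False)
locked = np.stack([np.sin(2 * np.pi * 4.5 * t) for _ in range(50)])
jittered = np.stack([np.sin(2 * np.pi * 4.5 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(50)])
plv_locked = phase_locking_value(locked)
plv_jittered = phase_locking_value(jittered)
```

Phase-consistent trials yield PLV near 1 at every sample; randomly jittered trials yield values near zero.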
Affiliation(s)
- Sara Momtaz
- School of Communication Sciences & Disorders, University of Memphis, Memphis, Tennessee 38152
- Boys Town National Research Hospital, Boys Town, Nebraska 68131
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408
- Program in Neuroscience, Indiana University, Bloomington, Indiana 47405
6. Schüller A, Schilling A, Krauss P, Reichenbach T. The Early Subcortical Response at the Fundamental Frequency of Speech Is Temporally Separated from Later Cortical Contributions. J Cogn Neurosci 2024; 36:475-491. PMID: 38165737; DOI: 10.1162/jocn_a_02103.
Abstract
Most speech is voiced, exhibiting a degree of periodicity with a fundamental frequency and many higher harmonics. Some neural populations respond to this temporal fine structure, in particular at the fundamental frequency. This frequency-following response (FFR) to speech consists of both subcortical and cortical contributions and can be measured through EEG as well as through magnetoencephalography (MEG), although the two differ in the aspects of neural activity they capture: EEG is sensitive to radial, tangential, and deep sources, whereas MEG is largely restricted to tangential and superficial neural activity. EEG responses to continuous speech have shown an early subcortical contribution at a latency of around 9 msec, in agreement with MEG measurements in response to short speech tokens, whereas MEG responses to continuous speech have not yet revealed such an early component. Here, we analyze MEG responses to long segments of continuous speech. We find an early subcortical response at latencies of 4-11 msec, followed by later right-lateralized cortical activity at delays of 20-58 msec, as well as potential subcortical activity. Our results show that the early subcortical component of the FFR to continuous speech can be measured from MEG in groups of participants and that its latency agrees with that measured with EEG. They furthermore show that the early subcortical component is temporally well separated from later cortical contributions, enabling an independent assessment of both components in further studies of speech processing.
Affiliation(s)
- Patrick Krauss
- Friedrich-Alexander-Universität Erlangen-Nürnberg
- Universitätsklinikum Erlangen
7. McClaskey CM. Neural hyperactivity and altered envelope encoding in the central auditory system: Changes with advanced age and hearing loss. Hear Res 2024; 442:108945. PMID: 38154191; PMCID: PMC10942735; DOI: 10.1016/j.heares.2023.108945.
Abstract
Temporal modulations are ubiquitous features of sound signals that are important for auditory perception. The perception of temporal modulations, or temporal processing, is known to decline with aging and hearing loss, and this decline negatively impacts auditory perception in general and speech recognition in particular. However, the neurophysiological literature also provides evidence of exaggerated or enhanced encoding of temporal envelopes specifically in aging and hearing loss, which may arise from changes in inhibitory neurotransmission and neuronal hyperactivity. This review describes the physiological changes to the neural encoding of temporal envelopes that have been shown to occur with age and hearing loss and discusses the role of disinhibition and neural hyperactivity in contributing to these changes. Studies in both humans and animal models suggest that aging and hearing loss are associated with stronger neural representations of both periodic amplitude-modulation envelopes and naturalistic speech envelopes, but primarily for low-frequency modulations (<80 Hz). Although the frequency dependence of these results is generally taken as evidence of amplified envelope encoding at the cortex and impoverished encoding at the midbrain and brainstem, there is additional evidence to suggest that exaggerated envelope encoding may also occur subcortically, though only for envelopes with low modulation rates. A better understanding of how temporal envelope encoding is altered in aging and hearing loss, and of the contexts in which neural responses are exaggerated or diminished, may aid the development of interventions, assistive devices, and treatment strategies that ameliorate age- and hearing-loss-related auditory perceptual deficits.
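The low-frequency temporal envelope discussed throughout this review is commonly extracted as the magnitude of the analytic signal, low-pass filtered below the modulation range of interest. A minimal sketch (illustrative only; the reviewed studies each used their own pipelines, and the 80 Hz cutoff simply mirrors the modulation range highlighted above):

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def temporal_envelope(x, fs, cutoff=80.0):
    """Extract the low-frequency temporal envelope of a sound.

    Takes the magnitude of the analytic (Hilbert) signal, then
    low-pass filters it below `cutoff` Hz.
    """
    env = np.abs(hilbert(x))
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)

# Demo: a 1 kHz tone amplitude-modulated at 10 Hz; the extracted
# envelope should track the 10 Hz modulator.
fs = 16000
t = np.arange(fs) / fs
modulator = 1.0 + 0.8 * np.sin(2 * np.pi * 10 * t)
x = modulator * np.sin(2 * np.pi * 1000 * t)
env = temporal_envelope(x, fs)
mid = slice(fs // 10, -fs // 10)  # ignore filter edge effects
corr = np.corrcoef(env[mid], modulator[mid])[0, 1]
```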
Affiliation(s)
- Carolyn M McClaskey
- Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave, MSC 550, Charleston, SC 29425, United States.
8. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term neuroplasticity interact during the perceptual learning of concurrent speech. Cereb Cortex 2024; 34:bhad543. PMID: 38212291; PMCID: PMC10839853; DOI: 10.1093/cercor/bhad543.
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~45-min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians made faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right-hemisphere bias; nonmusicians: left-hemisphere bias). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
Affiliation(s)
- Jessica MacLean
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Jack Stirn
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Alexandria Sisson
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
9. Çolak H, Aydemir BE, Sakarya MD, Çakmak E, Alniaçik A, Türkyilmaz MD. Subcortical Auditory Processing and Speech Perception in Noise Among Individuals With and Without Extended High-Frequency Hearing Loss. J Speech Lang Hear Res 2024; 67:221-231. PMID: 37956878; DOI: 10.1044/2023_jslhr-23-00023.
Abstract
PURPOSE The significance of extended high-frequency (EHF) hearing (>8 kHz) is not yet well understood. In this study, we aimed to characterize the relationship between EHF hearing loss (EHFHL) and speech perception in noise (SPIN) and its associated physiological signatures using the speech-evoked frequency-following response (sFFR). METHOD Sixteen young adults with EHFHL and 16 age- and sex-matched individuals with normal hearing participated in the study. SPIN performance was evaluated in right speech-right noise, left speech-left noise, and binaural listening conditions using the Turkish Matrix Test. Additionally, subcortical auditory processing was assessed by recording sFFRs elicited by 40-ms /da/ stimuli. RESULTS Individuals with EHFHL demonstrated poorer SPIN performance in all listening conditions (p < .01) and longer latencies of the V (onset) and O (offset) peaks (p ≤ .01). However, only the V/A peak amplitude was significantly reduced in individuals with EHFHL (p < .01). CONCLUSIONS Our findings highlight the importance of EHF hearing and suggest that it should be considered a key element in SPIN. Individuals with EHFHL show a tendency toward weaker subcortical auditory processing, which likely contributes to their poorer SPIN performance. Routine assessment of EHF hearing should therefore be implemented in clinical settings, alongside evaluation of the standard audiometric frequencies (0.25-8 kHz).
Affiliation(s)
- Hasan Çolak
- Department of Audiology, Baskent University, Ankara, Turkey
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Eda Çakmak
- Department of Audiology, Baskent University, Ankara, Turkey
10. Commuri V, Kulasingham JP, Simon JZ. Cortical responses time-locked to continuous speech in the high-gamma band depend on selective attention. Front Neurosci 2023; 17:1264453. PMID: 38156264; PMCID: PMC10752935; DOI: 10.3389/fnins.2023.1264453.
Abstract
Auditory cortical responses to speech obtained by magnetoencephalography (MEG) show robust speech tracking of the speaker's fundamental frequency in the high-gamma band (70-200 Hz), but little is currently known about whether such responses depend on the focus of selective attention. In this study, 22 human subjects listened to concurrent, fixed-rate speech from male and female speakers and were asked to selectively attend to one speaker at a time while their neural responses were recorded with MEG. The male speaker's pitch range coincided with the lower range of the high-gamma band, whereas the female speaker's higher pitch range had much less overlap, and only at the upper end of the high-gamma band. Neural responses were analyzed using the temporal response function (TRF) framework. As expected, the responses demonstrate robust speech tracking of the fundamental frequency in the high-gamma band, but only to the male's speech, with a peak latency of ~40 ms. Critically, the response magnitude depends on selective attention: the response to the male speech is significantly greater when that speech is attended than when it is not, under acoustically identical conditions. This is a clear demonstration that even very early cortical auditory responses are influenced by top-down, cognitive, neural processing mechanisms.
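The temporal response function framework named in this abstract models the neural response as a linear filter applied to the stimulus; the filter weights (the TRF) are estimated by regularized lagged regression. A minimal single-channel sketch on synthetic data (illustrative only; real TRF analyses, e.g. with the mTRF-Toolbox or Eelbrain, add cross-validated regularization and multivariate predictors):

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, ridge=1e-2):
    """Estimate a temporal response function by ridge regression.

    Builds a lagged design matrix X with X[t, k] = stimulus[t - k]
    and solves (X^T X + ridge*I) w = X^T y; the weights w are the TRF.
    """
    n = len(response)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    XtX = X.T @ X + ridge * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ response)

# Demo: a response generated by convolving the stimulus with a known
# kernel should yield a TRF that recovers that kernel.
rng = np.random.default_rng(1)
stim = rng.standard_normal(2000)
kernel = np.array([0.0, 0.5, 1.0, 0.5, 0.0])
resp = np.convolve(stim, kernel)[:2000]
trf = estimate_trf(stim, resp, n_lags=5)
```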
Affiliation(s)
- Vrishab Commuri
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Department of Biology, University of Maryland, College Park, MD, United States
- Institute for Systems Research, University of Maryland, College Park, MD, United States
11. Mosconi MW, Stevens CJ, Unruh KE, Shafer R, Elison JT. Endophenotype trait domains for advancing gene discovery in autism spectrum disorder. J Neurodev Disord 2023; 15:41. PMID: 37993779; PMCID: PMC10664534; DOI: 10.1186/s11689-023-09511-y.
Abstract
Autism spectrum disorder (ASD) is associated with a diverse range of etiological processes, including both genetic and non-genetic causes. For a plurality of individuals with ASD, it is likely that the primary causes involve multiple common inherited variants that individually account for only small levels of variation in phenotypic outcomes. This genetic landscape creates a major challenge for detecting small but important pathogenic effects associated with ASD. To address similar challenges, separate fields of medicine have identified endophenotypes, or discrete, quantitative traits that reflect genetic likelihood for a particular clinical condition and leveraged the study of these traits to map polygenic mechanisms and advance more personalized therapeutic strategies for complex diseases. Endophenotypes represent a distinct class of biomarkers useful for understanding genetic contributions to psychiatric and developmental disorders because they are embedded within the causal chain between genotype and clinical phenotype, and they are more proximal to the action of the gene(s) than behavioral traits. Despite their demonstrated power for guiding new understanding of complex genetic structures of clinical conditions, few endophenotypes associated with ASD have been identified and integrated into family genetic studies. In this review, we argue that advancing knowledge of the complex pathogenic processes that contribute to ASD can be accelerated by refocusing attention toward identifying endophenotypic traits reflective of inherited mechanisms. This pivot requires renewed emphasis on study designs with measurement of familial co-variation including infant sibling studies, family trio and quad designs, and analysis of monozygotic and dizygotic twin concordance for select trait dimensions. 
We also emphasize that clarification of endophenotypic traits necessarily will involve integration of transdiagnostic approaches as candidate traits likely reflect liability for multiple clinical conditions and often are agnostic to diagnostic boundaries. Multiple candidate endophenotypes associated with ASD likelihood are described, and we propose a new focus on the analysis of "endophenotype trait domains" (ETDs), or traits measured across multiple levels (e.g., molecular, cellular, neural system, neuropsychological) along the causal pathway from genes to behavior. To inform our central argument for research efforts toward ETD discovery, we first provide a brief review of the concept of endophenotypes and their application to psychiatry. Next, we highlight key criteria for determining the value of candidate endophenotypes, including unique considerations for the study of ASD. Descriptions of different study designs for assessing endophenotypes in ASD research then are offered, including analysis of how select patterns of results may help prioritize candidate traits in future research. We also present multiple candidate ETDs that collectively cover a breadth of clinical phenomena associated with ASD, including social, language/communication, cognitive control, and sensorimotor processes. These ETDs are described because they represent promising targets for gene discovery related to clinical autistic traits, and they serve as models for analysis of separate candidate domains that may inform understanding of inherited etiological processes associated with ASD as well as overlapping neurodevelopmental disorders.
Affiliation(s)
- Matthew W Mosconi
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA.
- Clinical Child Psychology Program, University of Kansas, Lawrence, KS, USA.
- Cassandra J Stevens
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA
- Clinical Child Psychology Program, University of Kansas, Lawrence, KS, USA
- Kathryn E Unruh
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA
- Robin Shafer
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA
- Jed T Elison
- Institute of Child Development, University of Minnesota, Minneapolis, MN, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
12. Commuri V, Kulasingham JP, Simon JZ. Cortical Responses Time-Locked to Continuous Speech in the High-Gamma Band Depend on Selective Attention. bioRxiv [Preprint] 2023:2023.07.20.549567. PMID: 37546895; PMCID: PMC10401961; DOI: 10.1101/2023.07.20.549567.
Abstract
Auditory cortical responses to speech obtained by magnetoencephalography (MEG) show robust speech tracking of the speaker's fundamental frequency in the high-gamma band (70-200 Hz), but little is currently known about whether such responses depend on the focus of selective attention. In this study, 22 human subjects listened to concurrent, fixed-rate speech from male and female speakers and were asked to selectively attend to one speaker at a time while their neural responses were recorded with MEG. The male speaker's pitch range coincided with the lower range of the high-gamma band, whereas the female speaker's higher pitch range had much less overlap, and only at the upper end of the high-gamma band. Neural responses were analyzed using the temporal response function (TRF) framework. As expected, the responses demonstrate robust speech tracking of the fundamental frequency in the high-gamma band, but only to the male's speech, with a peak latency of approximately 40 ms. Critically, the response magnitude depends on selective attention: the response to the male speech is significantly greater when that speech is attended than when it is not, under acoustically identical conditions. This is a clear demonstration that even very early cortical auditory responses are influenced by top-down, cognitive, neural processing mechanisms.
Affiliation(s)
- Vrishab Commuri
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Department of Biology, University of Maryland, College Park, MD, United States
- Institute for Systems Research, University of Maryland, College Park, MD, United States
13
Momtaz S, Moncrieff D, Ray MA, Bidelman GM. Children with amblyaudia show less flexibility in auditory cortical entrainment to periodic non-speech sounds. Int J Audiol 2023; 62:920-926. PMID: 35822427; PMCID: PMC10026530; DOI: 10.1080/14992027.2022.2094289.
Abstract
OBJECTIVE We investigated auditory temporal processing in children with amblyaudia (AMB), a subtype of auditory processing disorder (APD), via cortical neural entrainment. DESIGN AND STUDY SAMPLES Evoked responses were recorded to click-trains at slow vs. fast (8.5 vs. 14.9/s) rates in n = 14 children with AMB and n = 11 age-matched controls. Source and time-frequency analyses (TFA) decomposed EEGs into oscillations (reflecting neural entrainment) stemming from bilateral auditory cortex. RESULTS Phase-locking strength in AMB depended critically on the speed of auditory stimuli. In contrast to age-matched peers, AMB responses were largely insensitive to rate manipulations. This rate resistance occurred regardless of the ear of presentation and in both cortical hemispheres. CONCLUSIONS Children with AMB show smaller rate-related changes in auditory cortical entrainment. In addition to reduced capacity to integrate information between the ears, we identify more rigid tagging of external auditory stimuli. Our neurophysiological findings may account for domain-general temporal processing deficits commonly observed in AMB and related APDs behaviourally. More broadly, our findings may inform communication strategies and future rehabilitation programmes; increasing the rate of stimuli above a normal (slow) speech rate is likely to make stimulus processing more challenging for individuals with AMB/APD.
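The phase-locking strength examined in this study is commonly quantified by inter-trial phase coherence (ITPC): the resultant length of per-trial unit phase vectors at the stimulation rate. A toy numpy sketch on synthetic trials (not the authors' source-level TFA pipeline; the jitter and trial counts are invented for illustration):

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at one frequency.

    trials: (n_trials, n_samples) array. Returns the resultant length of
    per-trial unit phase vectors: 1 = perfect phase locking, ~0 = random phase.
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)  # per-trial complex amplitude
    return np.abs(np.mean(coeffs / np.abs(coeffs)))

# Toy trials at an 8.5/s rate: tightly phase-locked vs. random-phase
rng = np.random.default_rng(1)
fs, n_samp, n_trials = 500, 1000, 50
t = np.arange(n_samp) / fs
locked = np.array([np.sin(2 * np.pi * 8.5 * t + rng.normal(0, 0.2))
                   for _ in range(n_trials)])
unlocked = np.array([np.sin(2 * np.pi * 8.5 * t + rng.uniform(0, 2 * np.pi))
                     for _ in range(n_trials)])
print(itpc(locked, fs, 8.5), itpc(unlocked, fs, 8.5))  # high vs. near zero
```

Rate-insensitive entrainment, as reported for the AMB group, would correspond to ITPC values that change little between the slow and fast click rates.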
Affiliation(s)
- Sara Momtaz
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Deborah Moncrieff
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Meredith A. Ray
- Division of Epidemiology, Biostatistics, and Environmental Health, School of Public Health, University of Memphis, Memphis, TN, USA
- Gavin M. Bidelman
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
14
Lerud KD, Hancock R, Skoe E. A high-density EEG and structural MRI source analysis of the frequency following response to missing fundamental stimuli reveals subcortical and cortical activation to low and high frequency stimuli. Neuroimage 2023; 279:120330. PMID: 37598815; DOI: 10.1016/j.neuroimage.2023.120330.
Abstract
Pitch is a perceptual rather than physical phenomenon, important for spoken language use, musical communication, and other aspects of everyday life. Auditory stimuli can be designed to probe the relationship between perception and physiological responses to pitch-evoking stimuli. One technique for measuring physiological responses to pitch-evoking stimuli is the frequency following response (FFR). The FFR is an electroencephalographic (EEG) response to periodic auditory stimuli. The FFR contains nonlinearities not present in the stimuli, including correlates of the amplitude envelope of the stimulus; however, these nonlinearities remain undercharacterized. The FFR is a composite response reflecting multiple neural and peripheral generators, and their contributions to the scalp-recorded FFR vary in ill-understood ways depending on the electrode montage, stimulus, and imaging technique. The FFR is typically assumed to be generated in the auditory brainstem; there is also evidence both for and against a cortical contribution to the FFR. Here a methodology is used to examine the FFR correlates of pitch and the generators of the FFR to stimuli with different pitches. Stimuli were designed to tease apart biological correlates of pitch and amplitude envelope. FFRs were recorded with 256-electrode EEG nets, in contrast to a typical FFR setup which only contains a single active electrode. Structural MRI scans were obtained for each participant to co-register with the electrode locations and constrain a source localization algorithm. The results of this localization shed light on the generating mechanisms of the FFR, including providing evidence for both cortical and subcortical auditory sources.
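The envelope-related nonlinearity described above can be illustrated with a toy simulation: a missing-fundamental stimulus has no spectral energy at F0, yet a simple rectifying nonlinearity (a crude stand-in for transduction, not the authors' model; the sampling rate and harmonics are invented for illustration) introduces a component at F0:

```python
import numpy as np

fs = 10_000
t = np.arange(0, 0.2, 1 / fs)
# Missing-fundamental stimulus: harmonics 3 and 4 of a 200 Hz F0, no energy at F0
stim = np.sin(2 * np.pi * 600 * t) + np.sin(2 * np.pi * 800 * t)

def peak_mag(x, fs, freq):
    """Magnitude of the FFT bin nearest `freq`."""
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return spec[np.argmin(np.abs(freqs - freq))]

# Half-wave rectification demodulates the 200 Hz amplitude envelope
resp = np.maximum(stim, 0.0)

print(peak_mag(stim, fs, 200) / peak_mag(stim, fs, 600))  # ~0: no F0 in stimulus
print(peak_mag(resp, fs, 200) / peak_mag(resp, fs, 600))  # F0 component emerges
```

This is why an FFR can contain robust energy at a pitch frequency that is absent from the stimulus spectrum, and why such components index the amplitude envelope rather than the stimulus fine structure.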
Affiliation(s)
- Karl D Lerud
- University of Maryland College Park, Institute for Systems Research, 20742, United States of America.
- Roeland Hancock
- Yale University, Wu Tsai Institute, 06510, United States of America
- Erika Skoe
- University of Connecticut, Department of Speech, Language, and Hearing Sciences, Cognitive Sciences Program, 06269, United States of America
15
MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term experience-dependent neuroplasticity interact during the perceptual learning of concurrent speech. bioRxiv 2023:2023.09.26.559640. PMID: 37808665; PMCID: PMC10557636; DOI: 10.1101/2023.09.26.559640.
Abstract
Plasticity from auditory experiences shapes brain encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ∼45 minute training sessions recorded simultaneously with high-density EEG. We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. While both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings confirm domain-general benefits for musicianship but reveal successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity that first emerge at a cortical level.
16
Rizzi R, Bidelman GM. Duplex perception reveals brainstem auditory representations are modulated by listeners' ongoing percept for speech. Cereb Cortex 2023; 33:10076-10086. PMID: 37522248; PMCID: PMC10502779; DOI: 10.1093/cercor/bhad266.
Abstract
So-called duplex speech stimuli with perceptually ambiguous spectral cues to one ear and isolated low- versus high-frequency third formant "chirp" to the opposite ear yield a coherent percept supporting their phonetic categorization. Critically, such dichotic sounds are only perceived categorically upon binaural integration. Here, we used frequency-following responses (FFRs), scalp-recorded potentials reflecting phase-locked subcortical activity, to investigate brainstem responses to fused speech percepts and to determine whether FFRs reflect binaurally integrated category-level representations. We recorded FFRs to diotic and dichotic stop-consonants (/da/, /ga/) that either did or did not require binaural fusion to properly label along with perceptually ambiguous sounds without clear phonetic identity. Behaviorally, listeners showed clear categorization of dichotic speech tokens confirming they were heard with a fused, phonetic percept. Neurally, we found FFRs were stronger for categorically perceived speech relative to category-ambiguous tokens but also differentiated phonetic categories for both diotically and dichotically presented speech sounds. Correlations between neural and behavioral data further showed FFR latency predicted the degree to which listeners labeled tokens as "da" versus "ga." The presence of binaurally integrated, category-level information in FFRs suggests human brainstem processing reflects a surprisingly abstract level of the speech code typically circumscribed to much later cortical processing.
Affiliation(s)
- Rose Rizzi
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Gavin M Bidelman
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Cognitive Science Program, Indiana University, Bloomington, IN, United States
17
McHaney JR, Hancock KE, Polley DB, Parthasarathy A. Sensory representations and pupil-indexed listening effort provide complementary contributions to multi-talker speech intelligibility. bioRxiv 2023:2023.08.13.553131. PMID: 37645975; PMCID: PMC10462058; DOI: 10.1101/2023.08.13.553131.
Abstract
Optimal speech perception in noise requires successful separation of the target speech stream from multiple competing background speech streams. The ability to segregate these competing speech streams depends on the fidelity of bottom-up neural representations of sensory information in the auditory system and top-down influences of effortful listening. Here, we use objective neurophysiological measures of bottom-up temporal processing using envelope-following responses (EFRs) to amplitude-modulated tones and investigate their interactions with pupil-indexed listening effort, as it relates to performance on the Quick Speech-in-Noise (QuickSIN) test in young adult listeners with clinically normal hearing thresholds. We developed an approach using ear-canal electrodes, with electrode montages adjusted across modulation-rate ranges, which extended the range of reliable EFR measurements as high as 1024 Hz. Pupillary responses revealed changes in listening effort at the two most difficult signal-to-noise ratios (SNR), but behavioral deficits at the hardest SNR only. Neither pupil-indexed listening effort nor the slope of the EFR decay function independently related to QuickSIN performance. However, a linear model using the combination of EFRs and pupil metrics significantly explained variance in QuickSIN performance. These results suggest a synergistic interaction between bottom-up sensory coding and top-down measures of listening effort as it relates to speech perception in noise. These findings can inform the development of next-generation tests for hearing deficits in listeners with normal hearing thresholds that incorporate a multi-dimensional approach to understanding speech intelligibility deficits.
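The claim that EFR and pupil metrics are individually weak but jointly informative predictors corresponds to comparing explained variance (R²) between single-predictor and combined linear models. A numpy sketch with simulated data; the variable names and effect sizes are invented for illustration, not the authors' results:

```python
import numpy as np

def r_squared(X, y):
    """Variance in y explained by an ordinary least-squares fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

# Simulated listeners: the outcome depends on BOTH predictors
rng = np.random.default_rng(2)
n = 200
efr = rng.standard_normal(n)    # e.g., slope of the EFR decay function
pupil = rng.standard_normal(n)  # e.g., pupil-indexed listening effort
score = 0.5 * efr + 0.5 * pupil + 0.5 * rng.standard_normal(n)

r2_single = r_squared(efr[:, None], score)
r2_combined = r_squared(np.column_stack([efr, pupil]), score)
print(r2_single, r2_combined)  # the combined model explains more variance
```

When the outcome genuinely depends on two sources of variance, each single-predictor model leaves most variance unexplained, mirroring the synergistic bottom-up/top-down interaction reported above.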
Affiliation(s)
- Jacie R. McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Kenneth E. Hancock
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Daniel B. Polley
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA
18
Boothalingam S, Peterson A, Powell L, Easwar V. Auditory brainstem mechanisms likely compensate for self-imposed peripheral inhibition. Sci Rep 2023; 13:12693. PMID: 37542191; PMCID: PMC10403563; DOI: 10.1038/s41598-023-39850-8.
Abstract
Feedback networks in the brain regulate downstream auditory function as peripheral as the cochlea. However, the upstream neural consequences of this peripheral regulation are less understood. For instance, the medial olivocochlear reflex (MOCR) in the brainstem causes putative attenuation of responses generated in the cochlea and cortex, but those generated in the brainstem are perplexingly unaffected. Based on known neural circuitry, we hypothesized that the inhibition of peripheral input is compensated for by positive feedback in the brainstem over time. We predicted that the inhibition could be captured at the brainstem with shorter (1.5 s) than previously employed long duration (240 s) stimuli where this inhibition is likely compensated for. Results from 16 normal-hearing human listeners support our hypothesis in that when the MOCR is activated, there is a robust reduction of responses generated at the periphery, brainstem, and cortex for short-duration stimuli. Such inhibition at the brainstem, however, diminishes for long-duration stimuli suggesting some compensatory mechanisms at play. Our findings provide a novel non-invasive window into potential gain compensation mechanisms in the brainstem that may have implications for auditory disorders such as tinnitus. Our methodology will be useful in the evaluation of efferent function in individuals with hearing loss.
Affiliation(s)
- Sriram Boothalingam
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA.
- Macquarie University, Sydney, NSW, 2109, Australia.
- National Acoustic Laboratories, Sydney, NSW, 2109, Australia.
- Abigayle Peterson
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Macquarie University, Sydney, NSW, 2109, Australia
- Lindsey Powell
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Vijayalakshmi Easwar
- Waisman Center and Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI, 53705, USA
- Macquarie University, Sydney, NSW, 2109, Australia
- National Acoustic Laboratories, Sydney, NSW, 2109, Australia
19
Omidvar S, Mochiatti Guijo L, Duda V, Costa-Faidella J, Escera C, Koravand A. Can auditory evoked responses elicited to click and/or verbal sound identify children with or at risk of central auditory processing disorder: A scoping review. Int J Pediatr Otorhinolaryngol 2023; 171:111609. PMID: 37393698; DOI: 10.1016/j.ijporl.2023.111609.
Abstract
BACKGROUND (Central) auditory processing disorders, (C)APDs, are clinically identified using behavioral tests. However, changes in attention and motivation may easily affect true identification. Although auditory electrophysiological tests, such as Auditory Brainstem Responses (ABR), are independent of most confounding cognitive factors, there is no consensus that click and/or speech-evoked ABR can be used to identify children with or at risk of (C)APDs due to heterogeneity among studies. AIMS This study aimed to review the possibility of using ABR evoked by click and/or speech stimuli to identify children with or at risk of (C)APDs. METHODS The online databases of PubMed, Web of Science, Medline, Embase, and CINAHL were explored using combined keywords for all English and French articles published until April 2021. Additional gray literature was also included, such as conference abstracts, dissertations, and editorials in ProQuest Dissertations. MAIN CONTRIBUTION Sixteen papers met the eligibility criteria and were included in the scoping review. Fourteen papers were cross-sectional and two were interventional studies. Eleven papers used click stimuli to assess children with/at risk of (C)APDs, and speech stimuli were utilized in the remaining studies. Despite the diversity of the results, especially in click ABR assessments, most studies indicated increases in the wave latencies and/or decreases in the wave amplitudes of click ABR in children with/at risk of (C)APDs. The results of speech ABR assessments were more consistent, as prolongation of the transient components of speech ABR was observed in these children, while sustained components remained almost unchanged. CONCLUSIONS Although both click and speech-evoked ABRs could be used to assess children with (C)APDs, it appears that speech-evoked ABR assessments yield more reliable findings. These findings, however, should be interpreted with caution given the heterogeneity among studies. Well-designed studies on children with confirmed (C)APDs using standard diagnostic and assessment protocols are recommended.
Affiliation(s)
- Shaghayegh Omidvar
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ontario, Canada.
- Laura Mochiatti Guijo
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ontario, Canada; School of Speech-Language Pathology and Audiology, Sao Paulo State University "Júlio de Mesquita Filho" - UNESP, Marília, SP, Brazil.
- Victoria Duda
- École d'orthophonie et d'audiologie, Université de Montréal, Québec, Canada.
- Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Catalonia, Spain.
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Catalonia, Spain.
- Amineh Koravand
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ontario, Canada.
20
Pascarella A, Mikulan E, Sciacchitano F, Sarasso S, Rubino A, Sartori I, Cardinale F, Zauli F, Avanzini P, Nobili L, Pigorini A, Sorrentino A. An in-vivo validation of ESI methods with focal sources. Neuroimage 2023:120219. PMID: 37307867; DOI: 10.1016/j.neuroimage.2023.120219.
Abstract
Electrophysiological source imaging (ESI) aims at reconstructing the precise origin of brain activity from measurements of the electric field on the scalp. Across laboratories, research centers, and hospitals, ESI is performed with different methods, partly due to the ill-posedness of the underlying mathematical problem. However, it is difficult to find systematic comparisons involving a wide variety of methods. Further, existing comparisons rarely take into account the variability of the results with respect to the input parameters. Finally, comparisons are typically performed using either synthetic data or in-vivo data where the ground truth is only roughly known. We use an in-vivo high-density EEG dataset recorded during intracranial single-pulse electrical stimulation, in which the true sources are substantially dipolar and their locations are precisely known. We compare ten different ESI methods, using their implementation in the MNE-Python package: MNE, dSPM, LORETA, sLORETA, eLORETA, LCMV beamformers, irMxNE, Gamma Map, SESAME, and dipole fitting. We perform comparisons under multiple choices of input parameters, to assess the accuracy of the best reconstruction, as well as the impact of such parameters on the localization performance. Best reconstructions often fall within 1 cm of the true source, with the most accurate methods achieving an average localization error of 1.2 cm and the least accurate erring by 2.5 cm. As expected, dipolar and sparsity-promoting methods tend to outperform distributed methods. For several distributed methods, the best regularization parameter turned out to be the one in principle associated with low SNR, despite the high SNR of the available dataset. Depth weighting played no role for two out of the six methods implementing it. Sensitivity to input parameters varied widely between methods. While one would expect high variability to be associated with low localization error at the best solution, this is not always the case, with some methods producing highly variable results and high localization error, and other methods producing stable results with low localization error. In particular, recent dipolar and sparsity-promoting methods provide significantly better results than older distributed methods. As we repeated the tests with "conventional" (32 channels) and dense (64, 128, 256 channels) EEG recordings, we observed little impact of the number of channels on localization accuracy; however, for distributed methods denser montages provide smaller spatial dispersion. Overall, findings confirm that EEG is a reliable technique for localization of point sources and therefore reinforce the importance that ESI may have in the clinical context, especially when applied to identify the surgical target in potential candidates for epilepsy surgery.
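Several of the distributed methods compared here (MNE, dSPM, the LORETA family) build on the regularized minimum-norm inverse x = Lᵀ(LLᵀ + λI)⁻¹b. A numpy-only sketch on a random toy leadfield, scoring localization error as the distance between the true source and the peak of the estimate; this is a schematic of the estimator on invented geometry, not the MNE-Python implementation:

```python
import numpy as np

rng = np.random.default_rng(3)
n_sensors, n_sources = 32, 200

# Toy forward model: random leadfield and random source positions (cm)
L = rng.standard_normal((n_sensors, n_sources))
pos = rng.uniform(-7, 7, size=(n_sources, 3))

# Single dipolar source, as in the stimulation dataset, plus sensor noise
true_idx = 17
b = L[:, true_idx] + 0.05 * rng.standard_normal(n_sensors)

# Regularized minimum-norm estimate: x = L' (L L' + lam I)^-1 b
lam = 1e-2 * np.trace(L @ L.T) / n_sensors
x = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), b)

est_idx = int(np.argmax(np.abs(x)))
loc_error = np.linalg.norm(pos[est_idx] - pos[true_idx])
print(est_idx, loc_error)  # localization error in the same units as `pos`
```

The regularization parameter λ plays the role of the SNR-dependent parameter discussed above: larger values stabilize the inverse at the cost of a more spatially smeared estimate.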
Affiliation(s)
- Ezequiel Mikulan
- Department of Biomedical and Clinical Sciences "L. Sacco", Università degli Studi di Milano, Milan, Italy
- Simone Sarasso
- Department of Biomedical and Clinical Sciences "L. Sacco", Università degli Studi di Milano, Milan, Italy
- Annalisa Rubino
- Department of Neurosciences, Center for Epilepsy Surgery "C. Munari", Hospital Niguarda, Milan, Italy
- Ivana Sartori
- Department of Neurosciences, Center for Epilepsy Surgery "C. Munari", Hospital Niguarda, Milan, Italy
- Francesco Cardinale
- Department of Neurosciences, Center for Epilepsy Surgery "C. Munari", Hospital Niguarda, Milan, Italy
- Flavia Zauli
- Department of Biomedical and Clinical Sciences "L. Sacco", Università degli Studi di Milano, Milan, Italy
- Lino Nobili
- Child Neuropsychiatry Unit, IRCCS "G. Gaslini" Institute, Genoa, Italy; DINOGMI, Università degli Studi di Genova, Genoa, Italy
- Andrea Pigorini
- Department of Biomedical, Surgical and Dental Sciences, Università degli Studi di Milano, Milan, Italy
- Alberto Sorrentino
- Department of Mathematics, Università degli Studi di Genova, Genoa, Italy.
21
Rizzi R, Bidelman GM. Duplex perception reveals brainstem auditory representations are modulated by listeners' ongoing percept for speech. bioRxiv 2023:2023.05.09.540018. PMID: 37214801; PMCID: PMC10197666; DOI: 10.1101/2023.05.09.540018.
Abstract
So-called duplex speech stimuli with perceptually ambiguous spectral cues to one ear and isolated low- vs. high-frequency third formant "chirp" to the opposite ear yield a coherent percept supporting their phonetic categorization. Critically, such dichotic sounds are only perceived categorically upon binaural integration. Here, we used frequency-following responses (FFRs), scalp-recorded potentials reflecting phase-locked subcortical activity, to investigate brainstem responses to fused speech percepts and to determine whether FFRs reflect binaurally integrated category-level representations. We recorded FFRs to diotic and dichotic stop-consonants (/da/, /ga/) that either did or did not require binaural fusion to properly label along with perceptually ambiguous sounds without clear phonetic identity. Behaviorally, listeners showed clear categorization of dichotic speech tokens confirming they were heard with a fused, phonetic percept. Neurally, we found FFRs were stronger for categorically perceived speech relative to category-ambiguous tokens but also differentiated phonetic categories for both diotically and dichotically presented speech sounds. Correlations between neural and behavioral data further showed FFR latency predicted the degree to which listeners labeled tokens as "da" vs. "ga". The presence of binaurally integrated, category-level information in FFRs suggests human brainstem processing reflects a surprisingly abstract level of the speech code typically circumscribed to much later cortical processing.
Affiliation(s)
- Rose Rizzi
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
- Gavin M. Bidelman
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Cognitive Science Program, Indiana University, Bloomington, IN, USA
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA
22
Carter JA, Bidelman GM. Perceptual warping exposes categorical representations for speech in human brainstem responses. Neuroimage 2023; 269:119899. PMID: 36720437; PMCID: PMC9992300; DOI: 10.1016/j.neuroimage.2023.119899.
Abstract
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on surrounding stimulus context. Previous work suggests these acoustic-phonetic mapping and perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of auditory brainstem. We recorded speech-evoked frequency following responses (FFRs) during a task designed to induce more/less warping of listeners' perceptual categories depending on stimulus presentation order of a speech continuum (random, forward, backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary confirming identical speech tokens are perceived differentially depending on stimulus context. Critically, we further show neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These findings were not observed in the stimulus acoustics nor model FFR responses generated via a computational model of cochlear and auditory nerve transduction, confirming a central origin to the effects. Our data reveal FFRs carry category-level information and suggest top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuroaxis, which might aid understanding by reducing ambiguity inherent to the speech signal.
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Division of Clinical Neuroscience, School of Medicine, Hearing Sciences - Scottish Section, University of Nottingham, Glasgow, Scotland, UK
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA.
23
Lai J, Alain C, Bidelman GM. Cortical-brainstem interplay during speech perception in older adults with and without hearing loss. Front Neurosci 2023; 17:1075368. PMID: 36816123; PMCID: PMC9932544; DOI: 10.3389/fnins.2023.1075368.
Abstract
Introduction Real time modulation of brainstem frequency-following responses (FFRs) by online changes in cortical arousal state via the corticofugal (top-down) pathway has been demonstrated previously in young adults and is more prominent in the presence of background noise. FFRs during high cortical arousal states also have a stronger relationship with speech perception. Aging is associated with increased auditory brain responses, which might reflect degraded inhibitory processing within the peripheral and ascending pathways, or changes in attentional control regulation via descending auditory pathways. Here, we tested the hypothesis that online corticofugal interplay is impacted by age-related hearing loss. Methods We measured EEG in older adults with normal-hearing (NH) and mild to moderate hearing-loss (HL) while they performed speech identification tasks in different noise backgrounds. We measured α power to index online cortical arousal states during task engagement. Subsequently, we split brainstem speech-FFRs, on a trial-by-trial basis, according to fluctuations in concomitant cortical α power into low or high α FFRs to index cortical-brainstem modulation. Results We found cortical α power was smaller in the HL than the NH group. In NH listeners, α-FFRs modulation for clear speech (i.e., without noise) also resembled that previously observed in younger adults for speech in noise. Cortical-brainstem modulation was further diminished in HL older adults in the clear condition and by noise in NH older adults. Machine learning classification showed low α FFR frequency spectra yielded higher accuracy for classifying listeners' perceptual performance in both NH and HL participants. Moreover, low α FFRs decreased with increased hearing thresholds at 0.5-2 kHz for clear speech but noise generally reduced low α FFRs in the HL group. 
Discussion Collectively, our study reveals that cortical arousal state actively shapes brainstem speech representations and suggests a potential new mechanism for older listeners' difficulties perceiving speech in cocktail party-like listening situations: a miscoordination between cortical and subcortical levels of auditory processing.
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Department of Diagnostic Imaging, St. Jude Children's Research Hospital, Memphis, TN, United States
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Gavin M. Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Correspondence: Gavin M. Bidelman
24
Omidvar S, Duquette-Laplante F, Bursch C, Jutras B, Koravand A. Assessing Auditory Processing in Children with Listening Difficulties: A Pilot Study. J Clin Med 2023; 12:jcm12030897. [PMID: 36769544] [PMCID: PMC9917704] [DOI: 10.3390/jcm12030897]
Abstract
BACKGROUND Auditory processing disorders (APD) may be one of the problems experienced by children with listening difficulties (LiD). Combining auditory behavioural and electrophysiological tests could provide a better understanding of the abilities and disabilities of children with LiD. The current study aimed to quantify auditory processing abilities and function in children with LiD. METHODS Twenty children participated in this study: ten with LiD (mean age = 8.46 years; SD = 1.39) and ten typically developing (TD) children (mean age = 9.45 years; SD = 1.57). All children were evaluated with auditory processing tests as well as attention and phonemic synthesis tasks. Electrophysiological measures were also obtained with click and speech auditory brainstem responses (ABR). RESULTS Children with LiD performed significantly worse than TD children on most behavioural tasks, indicating shortcomings in functional auditory processing. Moreover, the click-ABR wave I amplitude was smaller, and the speech-ABR wave D and E latencies were longer, in the LiD children compared with the TD children. No significant difference was found when evaluating neural correlates between groups. CONCLUSIONS Combining behavioural testing with click-ABR and speech-ABR can highlight functional and neurophysiological deficiencies in children with listening difficulties, especially at the brainstem level.
Affiliation(s)
- Shaghayegh Omidvar
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
- Fauve Duquette-Laplante
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
- School of Speech-Language Pathology and Audiology, Université de Montréal, Montreal, QC H3C 3J7, Canada
- Benoît Jutras
- School of Speech-Language Pathology and Audiology, Université de Montréal, Montreal, QC H3C 3J7, Canada
- Research Centre, CHU Sainte-Justine, Montreal, QC H3T 1C5, Canada
- Amineh Koravand
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
25
Van Der Biest H, Keshishzadeh S, Keppler H, Dhooge I, Verhulst S. Envelope following responses for hearing diagnosis: Robustness and methodological considerations. J Acoust Soc Am 2023; 153:191. [PMID: 36732231] [DOI: 10.1121/10.0016807]
Abstract
Recent studies have found that envelope following responses (EFRs) are a marker of age-related and noise- or ototoxic-induced cochlear synaptopathy (CS) in research animals. Whereas the cochlear injury can be well controlled in animal research studies, humans may have an unknown mixture of sensorineural hearing loss [SNHL; e.g., inner- or outer-hair-cell (OHC) damage or CS] that cannot be teased apart in a standard hearing evaluation. Hence, a direct translation of EFR markers of CS to a differential CS diagnosis in humans might be compromised by the influence of SNHL subtypes and differences in recording modalities between research animals and humans. To quantify the robustness of EFR markers for use in human studies, this study investigates the impact of methodological considerations related to electrode montage, stimulus characteristics, and presentation, as well as analysis method on human-recorded EFR markers. The main focus is on rectangularly modulated pure-tone stimuli to evoke the EFR based on a recent auditory modelling study that showed that the EFR was least affected by OHC damage and most sensitive to CS in this stimulus configuration. The outcomes of this study can help guide future clinical implementations of electroencephalography-based SNHL diagnostic tests.
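A rectangularly modulated pure tone of the kind described above is simple to generate. A minimal sketch (the carrier frequency, modulation rate, and duty cycle below are illustrative assumptions, not the study's stimulus parameters):

```python
import numpy as np

def rect_mod_tone(fc, fm, duty, dur, fs):
    """Pure tone at fc Hz gated on/off by a rectangular envelope
    at fm Hz with the given duty cycle (fraction of cycle 'on')."""
    t = np.arange(int(dur * fs)) / fs
    carrier = np.sin(2 * np.pi * fc * t)
    envelope = ((t * fm) % 1.0) < duty   # True during the on-phase
    return carrier * envelope

# Hypothetical example: 2-kHz carrier, 120-Hz rectangular modulation,
# 25% duty cycle, 1 s at a 48-kHz sampling rate.
fs = 48000
stim = rect_mod_tone(fc=2000, fm=120, duty=0.25, dur=1.0, fs=fs)
```

The rectangular (on/off) envelope is what distinguishes this configuration from the more common sinusoidal amplitude modulation.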
Affiliation(s)
- Heleen Van Der Biest
- Hearing Technology at Wireless, Acoustics, Environment and Expert Systems, Department of Information Technology, Ghent, Belgium
- Sarineh Keshishzadeh
- Hearing Technology at Wireless, Acoustics, Environment and Expert Systems, Department of Information Technology, Ghent, Belgium
- Hannah Keppler
- Department of Rehabilitation Sciences-Audiology, Ghent University, Ghent, Belgium
- Ingeborg Dhooge
- Department of Head and Skin, Ghent University, Ghent, Belgium
- Sarah Verhulst
- Hearing Technology at Wireless, Acoustics, Environment and Expert Systems, Department of Information Technology, Ghent, Belgium
26
Mai G, Howell P. The possible role of early-stage phase-locked neural activities in speech-in-noise perception in human adults across age and hearing loss. Hear Res 2023; 427:108647. [PMID: 36436293] [DOI: 10.1016/j.heares.2022.108647]
Abstract
Ageing affects auditory neural phase-locked activities, which could increase the challenges older adults experience during speech-in-noise (SiN) perception. However, evidence for how ageing affects SiN perception through these phase-locked activities is still lacking. It is also unclear whether the influences of ageing on phase-locked activities in response to different acoustic properties affect SiN perception through similar or different mechanisms. The present study addressed these issues by measuring early-stage phase-locked encoding of speech in quiet and in noisy backgrounds (speech-shaped noise [SSN] and multi-talker babble) in adults across a wide age range (19-75 years old). Participants passively listened to a repeated vowel whilst we recorded the frequency-following response (FFR) to the fundamental frequency, which has primarily subcortical sources, and the cortical phase-locked response to slowly fluctuating acoustic envelopes. We studied how these activities are affected by age and age-related hearing loss and how they relate to SiN performance (word recognition in sentences in noise). First, the effects of age and hearing loss differed for the FFR and slow-envelope phase-locking. The FFR decreased significantly with age and with high-frequency (≥ 2 kHz) hearing loss but increased with low-frequency (< 2 kHz) hearing loss, whilst slow-envelope phase-locking increased significantly with age and with hearing loss across frequencies. Second, the relationships between the two types of phase-locked activity and SiN performance also differed: the FFR and slow-envelope phase-locking corresponded positively to SiN performance under multi-talker babble and SSN, respectively. Finally, we investigated how age and hearing loss affect SiN perception through phase-locked activities via mediation analyses. Both types of activity significantly mediated the relation between age/hearing loss and SiN perception, but in distinct manners. Specifically, the FFR decreased with age and high-frequency hearing loss, which in turn contributed to poorer SiN performance, but increased with low-frequency hearing loss, which in turn contributed to better SiN performance under multi-talker babble. Slow-envelope phase-locking increased with age and hearing loss, which in turn contributed to better SiN performance under both SSN and multi-talker babble. Taken together, the present study provides evidence for distinct neural mechanisms of early-stage auditory phase-locked encoding of different acoustic properties through which ageing affects SiN perception.
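The mediation analyses mentioned above follow the standard decomposition of a total effect into direct and indirect (mediated) components. A schematic numpy sketch with synthetic data (the variable names and effect sizes here are invented for illustration; formal mediation analysis would also bootstrap confidence intervals for the indirect effect):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
age = rng.uniform(19, 75, n)
# Synthetic causal chain: phase locking (the mediator) declines with
# age, and the speech-in-noise (SiN) score tracks phase locking.
ffr = 2.0 - 0.02 * age + rng.normal(0, 0.2, n)
sin_score = 50 + 10 * ffr + rng.normal(0, 2, n)

def ols(predictors, y):
    """Least-squares coefficients with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols([age], ffr)[1]                    # path a: age -> mediator
coefs = ols([age, ffr], sin_score)
direct, b = coefs[1], coefs[2]            # c' (direct) and path b
indirect = a * b                          # mediated (indirect) effect
total = ols([age], sin_score)[1]          # total effect of age
# For OLS these satisfy: total = direct + indirect (up to rounding).
```

Here the entire age effect flows through the mediator, so the direct path is near zero and the indirect effect accounts for the total.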
Affiliation(s)
- Guangting Mai
- National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham NG1 5DU, UK
- Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham NG7 2UH, UK
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Peter Howell
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
27
Easwar V, Purcell D, Wright T. Predicting Hearing Aid Benefit Using Speech-Evoked Envelope Following Responses in Children With Hearing Loss. Trends Hear 2023; 27:23312165231151468. [PMID: 36946195] [PMCID: PMC10034298] [DOI: 10.1177/23312165231151468]
Abstract
Electroencephalography could serve as an objective tool to evaluate hearing aid benefit in infants who are developmentally unable to participate in hearing tests. We investigated whether speech-evoked envelope following responses (EFRs), a type of electroencephalography-based measure, could predict improved audibility with the use of a hearing aid in children with mild-to-severe permanent, mainly sensorineural, hearing loss. In 18 children, EFRs were elicited by six male-spoken, band-limited phonemic stimuli (the first formants of /u/ and /i/, the second and higher formants of /u/ and /i/, and the fricatives /s/ and /ʃ/) presented together as /suʃi/. EFRs were recorded between the vertex and nape while /suʃi/ was presented at 55, 65, and 75 dB SPL using insert earphones in unaided conditions and individually fit hearing aids in aided conditions. EFR amplitude and detectability improved with the use of a hearing aid, and the degree of improvement in EFR amplitude depended on the extent of change in behavioral thresholds between unaided and aided conditions. EFR detectability was primarily influenced by audibility; stimuli at higher sensation levels had an increased probability of detection. Overall EFR sensitivity in predicting audibility was significantly higher in aided (82.1%) than in unaided (66.5%) conditions and did not vary as a function of stimulus or frequency. EFR specificity in ascertaining inaudibility was 90.8%. Aided improvement in EFR detectability was a significant predictor of hearing aid-facilitated change in speech discrimination accuracy. Results suggest that speech-evoked EFRs could be a useful objective tool for predicting hearing aid benefit in children with hearing loss.
Affiliation(s)
- Vijayalakshmi Easwar
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin–Madison, Madison, USA
- National Acoustic Laboratories, Macquarie University, Sydney, New South Wales, Australia
- David Purcell
- School of Communication Sciences and Disorders, Western University, London, Canada
- National Centre for Audiology, Western University, London, Canada
- Trevor Wright
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin–Madison, Madison, USA
28
Easwar V, Aiken S, Beh K, McGrath E, Galloy M, Scollie S, Purcell D. Variability in the Estimated Amplitude of Vowel-Evoked Envelope Following Responses Caused by Assumed Neurophysiologic Processing Delays. J Assoc Res Otolaryngol 2022; 23:759-769. [PMID: 36002663] [PMCID: PMC9789223] [DOI: 10.1007/s10162-022-00855-1]
Abstract
Vowel-evoked envelope following responses (EFRs) reflect neural encoding of the fundamental frequency of voice (f0). Accurate analysis of EFRs elicited by natural vowels requires the use of methods like the Fourier analyzer (FA) to consider the production-related f0 changes. The FA's accuracy in estimating EFRs is, however, dependent on the assumed neurophysiological processing delay needed to time-align the f0 time course and the recorded electroencephalogram (EEG). For male-spoken vowels (f0 ~ 100 Hz), a constant 10-ms delay correction is often assumed. Since processing delays vary with stimulus and physiological factors, we quantified (i) the delay-related variability that would occur in EFR estimation, and (ii) the influence of stimulus frequency, non-f0 related neural activity, and the listener's age on such variability. EFRs were elicited by the low-frequency first formant, and mid-frequency second and higher formants of /u/, /a/, and /i/ in young adults and 6- to 17-year-old children. To time-align with the f0 time course, EEG was shifted by delays between 5 and 25 ms to encompass plausible response latencies. The delay-dependent range in EFR amplitude did not vary by stimulus frequency or age and was significantly smaller when interference from low-frequency activity was reduced. On average, the delay-dependent range was < 22% of the maximum variability in EFR amplitude that could be expected by noise. Results suggest that using a constant EEG delay correction in FA analysis does not substantially alter EFR amplitude estimation. In the present study, the lack of substantial variability was likely facilitated by using vowels with small f0 ranges.
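The delay correction described above amounts to shifting the recorded EEG by the assumed neural latency before projecting it onto reference sinusoids whose instantaneous frequency follows the stimulus f0 track. A schematic sketch with synthetic data (a single averaged channel and a noiseless response are simplifying assumptions, not the study's recording setup):

```python
import numpy as np

def fourier_analyzer(eeg, f0_track, fs, delay_ms):
    """Estimate EFR amplitude with a Fourier analyzer.

    Shifts the EEG backwards by the assumed neural delay so that it
    aligns with the stimulus f0 time course, then projects it onto
    sine/cosine references that track the instantaneous f0.
    """
    shift = int(round(delay_ms * 1e-3 * fs))
    x = eeg[shift:]                          # discard the latency
    phase = 2 * np.pi * np.cumsum(f0_track[:len(x)]) / fs
    s = np.mean(x * np.sin(phase))
    c = np.mean(x * np.cos(phase))
    return 2 * np.hypot(s, c)                # amplitude of locked component

fs = 8000.0
t = np.arange(int(fs)) / fs                  # 1 s of data
f0 = 100 + 5 * np.sin(2 * np.pi * 2 * t)     # f0 varying around 100 Hz
phase = 2 * np.pi * np.cumsum(f0) / fs
shift = int(10.0 * 1e-3 * fs)                # assumed 10-ms neural latency
eeg = np.zeros_like(t)
eeg[shift:] = np.sin(phase[:-shift])         # delayed response, amplitude 1
amp = fourier_analyzer(eeg, f0, fs, delay_ms=10.0)
```

With a small f0 range, as the abstract notes, a modest mismatch between the assumed and true delay changes the recovered amplitude only slightly, since the residual phase offset stays nearly constant.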
Affiliation(s)
- Vijayalakshmi Easwar
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- National Acoustic Laboratories, Sydney, Australia
- Steven Aiken
- School of Communication Sciences and Disorders, Dalhousie University, Nova Scotia, Canada
- Krystal Beh
- Department of Communication Sciences and Disorders & National Centre for Audiology, Western University, London, ON, Canada
- Emma McGrath
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Mary Galloy
- Department of Communication Sciences and Disorders & Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Susan Scollie
- Department of Communication Sciences and Disorders & National Centre for Audiology, Western University, London, ON, Canada
- David Purcell
- Department of Communication Sciences and Disorders & National Centre for Audiology, Western University, London, ON, Canada
29
Boothalingam S, Easwar V, Bross A. External and middle ear influence on envelope following responses. J Acoust Soc Am 2022; 152:2794. [PMID: 36456277] [DOI: 10.1121/10.0015004]
Abstract
Considerable between-subject variability in envelope following response (EFR) amplitude limits its clinical translation. Based on a pattern of lower amplitude and larger variability for low (<1.2 kHz) and high (>8 kHz) frequency carriers relative to mid-frequency (1-3 kHz) carriers, we hypothesized that between-subject variability in the external and middle ear (EM) contributes to between-subject variability in EFR amplitude. We predicted that equalizing the stimulus reaching the cochlea by accounting for EM differences using forward pressure level (FPL) calibration would at least partially increase response amplitude and reduce between-subject variability. In 21 young normal-hearing adults, EFRs at four modulation rates (91, 96, 101, and 106 Hz) were measured concurrently from four frequency bands [low (0.091-1.2 kHz), mid (1-3 kHz), high (4-5.4 kHz), and very high (vHigh; 8-9.4 kHz)], respectively, each with 12 harmonics. The results indicate that FPL calibration, in-ear and in a coupler, leads to larger EFR amplitudes in the low and vHigh frequency bands relative to conventional coupler root-mean-square calibration. However, the improvement in variability with FPL calibration was modest. This lack of a statistically significant improvement in variability suggests that the dominant source of variability in EFR amplitude may arise from cochlear and/or neural processing.
Affiliation(s)
- Sriram Boothalingam
- Department of Communication Sciences and Disorders, Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Vijayalakshmi Easwar
- Department of Communication Sciences and Disorders, Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Abigail Bross
- Department of Communication Sciences and Disorders, Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
30
Performance of Statistical Indicators in the Objective Detection of Speech-Evoked Envelope Following Responses. Ear Hear 2022; 43:1669-1677. [PMID: 35499293] [DOI: 10.1097/aud.0000000000001232]
Abstract
OBJECTIVES To assess the sensitivity of statistical indicators used for the objective detection of speech-evoked envelope following responses (EFRs) in infants and adults. DESIGN Twenty-three adults and 21 infants with normal hearing participated in this study. A modified /susaʃi/ speech token was presented at 65 dB SPL monaurally. Presentation level in infants was corrected using in-ear measurements. EFRs were recorded between the high forehead and the ipsilateral mastoid. Statistical post-processing was completed using the F-test, magnitude-squared coherence, the Rayleigh test, the Rayleigh-Moore test, and Hotelling's T² test. Logistic regression models assessed the sensitivity of each statistical indicator in both infants and adults as a function of testing duration. RESULTS The Rayleigh-Moore and Rayleigh tests were the most sensitive statistical indicators for speech-evoked EFR detection in infants. Magnitude-squared coherence and Hotelling's T² also provide clinical benefit for infants in all conditions after ~30 minutes of testing, whereas the F-test failed to detect responses to EFRs elicited by vowels with accuracy greater than chance. In contrast, the F-test was the most sensitive for vowel-elicited response detection in adults in short tests (<10 minutes) and performed comparably with the Rayleigh-Moore and Rayleigh tests at longer test durations. Decreased sensitivity was observed in infants relative to adults across all testing durations and statistical indicators, but the effects were largest for low-frequency stimuli and seemed to be mostly, but not wholly, caused by differences in response amplitude. CONCLUSIONS The choice of statistical indicator significantly impacts the sensitivity of speech-evoked EFR detection. In both groups and for all stimuli, the Rayleigh and Rayleigh-Moore tests have high sensitivity. Differences in EFR detection between infants and adults are present regardless of statistical indicator; however, these effects are largest for low-frequency EFR stimuli and for amplitude-based statistical indicators.
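Of the detection statistics named above, the Rayleigh test is the simplest: it asks whether per-trial response phases at the stimulus frequency cluster rather than scatter uniformly. A minimal sketch with synthetic phases (the large-sample p-value approximation used here is an assumption, not the paper's exact implementation):

```python
import numpy as np

def rayleigh_test(phases):
    """Rayleigh test for non-uniformity of circular data.

    phases: per-trial response phases (radians) at the stimulus
    frequency. Returns (R, p): the mean resultant length and a
    large-sample approximation to the p-value, p ~ exp(-n * R**2).
    """
    n = len(phases)
    R = np.abs(np.mean(np.exp(1j * phases)))   # 1 = perfect phase locking
    p = np.exp(-n * R**2)
    return R, p

rng = np.random.default_rng(0)
locked = rng.normal(loc=1.0, scale=0.3, size=200)   # clustered phases
random_ = rng.uniform(-np.pi, np.pi, size=200)      # no phase locking
R1, p1 = rayleigh_test(locked)    # R1 near 1, tiny p1: response detected
R2, p2 = rayleigh_test(random_)   # R2 near 0, large p2: no detection
```

Because it uses only phase and discards amplitude, this test is less affected by the infant-adult amplitude differences the abstract describes, which is consistent with its higher sensitivity in infants.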
31
Lu H, Mehta AH, Oxenham AJ. Methodological considerations when measuring and analyzing auditory steady-state responses with multi-channel EEG. Curr Res Neurobiol 2022; 3:100061. [PMID: 36386860] [PMCID: PMC9647176] [DOI: 10.1016/j.crneur.2022.100061]
Abstract
The auditory steady-state response (ASSR) has been traditionally recorded with few electrodes and is often measured as the voltage difference between mastoid and vertex electrodes (vertical montage). As high-density EEG recording systems have gained popularity, multi-channel analysis methods have been developed to integrate the ASSR signal across channels. The phases of ASSR across electrodes can be affected by factors including the stimulus modulation rate and re-referencing strategy, which will in turn affect the estimated ASSR strength. To explore the relationship between the classical vertical-montage ASSR and whole-scalp ASSR, we applied these two techniques to the same data to estimate the strength of ASSRs evoked by tones with sinusoidal amplitude modulation rates of around 40, 100, and 200 Hz. The whole-scalp methods evaluated in our study, with either linked-mastoid or common-average reference, included ones that assume equal phase across all channels, as well as ones that allow for different phase relationships. The performance of simple averaging was compared to that of more complex methods involving principal component analysis. Overall, the root-mean-square of the phase locking values (PLVs) across all channels provided the most efficient method to detect ASSR across the range of modulation rates tested here.
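The whole-scalp pooling the authors found most efficient, the root-mean-square of per-channel phase-locking values (PLVs) at the modulation rate, can be sketched as follows (synthetic epochs; filtering, referencing, and epoch rejection are omitted as simplifying assumptions):

```python
import numpy as np

def plv_per_channel(epochs, fs, freq):
    """Phase-locking value at `freq` for each channel.

    epochs: array (n_trials, n_channels, n_samples) of EEG epochs.
    Returns an array of PLVs (n_channels,), each in [0, 1].
    """
    n = epochs.shape[-1]
    t = np.arange(n) / fs
    # Project each trial onto the complex sinusoid at the ASSR rate
    # and keep only the phase of the resulting Fourier coefficient.
    coef = epochs @ np.exp(-2j * np.pi * freq * t)
    phases = coef / np.abs(coef)
    return np.abs(phases.mean(axis=0))

fs, f_mod = 1000.0, 40.0
t = np.arange(1000) / fs
rng = np.random.default_rng(1)
# 60 trials x 4 channels: a 40-Hz response buried in noise.
sig = np.sin(2 * np.pi * f_mod * t)
epochs = 0.5 * sig + rng.normal(0, 1.0, (60, 4, 1000))
plv = plv_per_channel(epochs, fs, f_mod)
whole_scalp_plv = np.sqrt(np.mean(plv**2))   # RMS across channels
```

Because each channel's PLV discards its absolute phase before pooling, the RMS combination does not require the equal-phase-across-channels assumption that simple cross-channel averaging makes.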
Affiliation(s)
- Hao Lu
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Anahita H. Mehta
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Andrew J. Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
32
Easwar V, Purcell D, Lasarev M, McGrath E, Galloy M. Speech-Evoked Envelope Following Responses in Children and Adults. J Speech Lang Hear Res 2022; 65:4009-4023. [PMID: 36129844] [DOI: 10.1044/2022_jslhr-22-00156]
Abstract
PURPOSE Envelope following responses (EFRs) could be useful for objectively evaluating audibility of speech in children who are unable to participate in routine clinical tests. However, relative to adults, the characteristics of EFRs elicited by frequency-specific speech and their utility in predicting audibility in children are unknown. METHOD EFRs were elicited by the first (F1) and second and higher formants (F2+) of male-spoken vowels /u/ and /i/ and by fricatives /ʃ/ and /s/ in the token /suʃi/ presented at 15, 35, 55, 65, and 75 dB SPL. The F1, F2+, and fricatives were low-, mid-, and high-frequency dominant, respectively. EFRs were recorded between the vertex and the nape from twenty-three 6- to 17-year-old children and 21 young adults with normal hearing. Sensation levels of stimuli were estimated based on behavioral thresholds. RESULTS In children, amplitude decreased with age for /ʃ/-elicited EFRs but remained stable for low- and mid-frequency stimuli. As a group, EFR amplitude and phase coherence did not differ from that of adults. EFR sensitivity (proportion of audible stimuli detected) and specificity (proportion of inaudible stimuli not detected) did not vary between children and adults. Consistent with previous work, EFR sensitivity increased with stimulus frequency and level. The type of statistical indicator used for EFR detection did not influence accuracy in children. CONCLUSIONS Adultlike EFRs in 6- to 17-year-old typically developing children suggest mature envelope encoding for low- and mid-frequency stimuli. EFR sensitivity and specificity in children, when considering a wide range of stimulus levels and audibility, are ~77% and ~92%, respectively. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21136171.
Affiliation(s)
- Vijayalakshmi Easwar
- Department of Communication Sciences and Disorders and Waisman Center, University of Wisconsin-Madison
- National Acoustic Laboratories, Sydney, New South Wales, Australia
- David Purcell
- School of Communication Sciences and Disorders, Western University, London, Ontario, Canada
- Michael Lasarev
- Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison
- Emma McGrath
- Department of Communication Sciences and Disorders and Waisman Center, University of Wisconsin-Madison
- Mary Galloy
- Department of Communication Sciences and Disorders and Waisman Center, University of Wisconsin-Madison
33
Lai J, Bidelman GM. Relative changes in the cochlear summating potentials to paired-clicks predict speech-in-noise perception and subjective hearing acuity. JASA Express Lett 2022; 2:102001. [PMID: 36319209] [PMCID: PMC9987329] [DOI: 10.1121/10.0014815]
Abstract
Objective assays of human cochlear synaptopathy (CS) have been challenging to develop. It is suspected that relative summating potential (SP) changes are different in listeners with CS. In this proof-of-concept study, young, normal-hearing adults were recruited and assigned to a low/high-risk group for having CS based on their extended audiograms (9-16 kHz). SPs to paired-clicks with varying inter-click intervals isolated non-refractory receptor components of cochlear activity. Abrupt increases in SPs to paired- vs single-clicks were observed in high-risk listeners. Critically, exaggerated SPs predicted speech-in-noise and subjective hearing abilities, suggesting relative SP changes to rapid clicks might help identify putative synaptopathic listeners.
Affiliation(s)
- Jesyin Lai
- Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, Tennessee 38152, USA
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408, USA
34
Price CN, Bidelman GM. Musical experience partially counteracts temporal speech processing deficits in putative mild cognitive impairment. Ann N Y Acad Sci 2022; 1516:114-122. [PMID: 35762658] [PMCID: PMC9588638] [DOI: 10.1111/nyas.14853]
Abstract
Mild cognitive impairment (MCI) commonly results in more rapid cognitive and behavioral declines than typical aging. Individuals with MCI can exhibit impaired receptive speech abilities that may reflect neurophysiological changes in auditory-sensory processing prior to usual cognitive deficits. Benefits from current interventions targeting communication difficulties in MCI are limited. Yet, neuroplasticity associated with musical experience has been implicated in improving neural representations of speech and offsetting age-related declines in perception. Here, we asked whether these experience-dependent effects of musical experience might extend to aberrant aging and offer some degree of cognitive protection against MCI. During a vowel categorization task, we recorded single-channel electroencephalograms (EEGs) in older adults with putative MCI to evaluate speech encoding across subcortical and cortical levels of the auditory system. Critically, listeners varied in their duration of formal musical experience (0-21 years). Musical experience sharpened temporal precision in auditory cortical responses, suggesting that musical experience produces more efficient processing of acoustic features by counteracting age-related neural delays. Additionally, robustness of brainstem responses predicted the severity of cognitive decline, suggesting that early speech representations are sensitive to preclinical stages of cognitive impairment. Our results extend prior studies by demonstrating positive benefits of musical experience in older adults with emergent cognitive impairments.
Affiliation(s)
- Caitlin N. Price
- Department of Audiology & Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana, USA
35
Nakamura T, Dinh TH, Asai M, Nishimaru H, Matsumoto J, Setogawa T, Ichijo H, Honda S, Yamada H, Mihara T, Nishijo H. Characteristics of auditory steady-state responses to different click frequencies in awake intact macaques. BMC Neurosci 2022; 23:57. [PMID: 36180823] [PMCID: PMC9524006] [DOI: 10.1186/s12868-022-00741-9]
Abstract
Background Auditory steady-state responses (ASSRs) are periodic evoked responses to constant periodic auditory stimuli, such as click trains, and are suggested to be associated with higher cognitive functions in humans. Since ASSRs are disturbed in human psychiatric disorders, recording ASSRs from awake intact macaques would benefit translational research as well as the understanding of human brain function and its pathology. However, ASSRs have not previously been reported in awake macaques. Results Electroencephalograms (EEGs) were recorded from awake intact macaques while click trains at 20-83.3 Hz were binaurally presented. EEGs were quantified based on event-related spectral perturbation (ERSP) and inter-trial coherence (ITC), and significant ASSRs were demonstrated in terms of both measures in awake intact macaques. A comparison of ASSRs among different click-train frequencies indicated that ASSRs were maximal at 83.3 Hz. Furthermore, analyses of laterality indices showed no laterality dominance of ASSRs. Conclusions The present results demonstrated ASSRs in awake intact macaques comparable to those in humans. However, there were some differences between macaques and humans: macaques showed maximal ASSR responses at click frequencies higher than the 40 Hz reported to elicit maximal responses in humans, and showed no dominant laterality of ASSRs under the electrode montage used in this study, in contrast to the right-hemisphere dominance reported in humans. Future ASSR studies using awake intact macaques should be aware of these differences; possible factors to which they may be ascribed are discussed. Supplementary Information The online version contains supplementary material available at 10.1186/s12868-022-00741-9.
36
Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. [PMID: 36122686 PMCID: PMC10017375 DOI: 10.1016/j.neuroimage.2022.119627]
Abstract
Experimental evidence in animals demonstrates cortical neurons innervate subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found low α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
37
Abstract
Biology and experience both influence the auditory brain. Sex is one biological factor with pervasive effects on auditory processing. Females process sounds faster and more robustly than males. These differences are linked to hormone differences between the sexes. Athleticism is an experiential factor known to reduce ongoing neural noise, but whether it influences how sounds are processed by the brain is unknown. Furthermore, it is unknown whether sports participation influences auditory processing differently in males and females, given the well-documented sex differences in auditory processing seen in the general population. We hypothesized that athleticism enhances auditory processing and that these enhancements are greater in females. To test these hypotheses, we measured auditory processing in collegiate Division I male and female student-athletes and their non-athlete peers (total n = 1012) using the frequency-following response (FFR). The FFR is a neurophysiological response to sound that reflects the processing of discrete sound features. We measured across-trial consistency of the response in addition to fundamental frequency (F0) and harmonic encoding. We found that athletes had enhanced encoding of the harmonics, which was greatest in the female athletes, and that athletes had more consistent responses than non-athletes. In contrast, F0 encoding was reduced in athletes. The harmonic-encoding advantage in female athletes aligns with previous work linking harmonic encoding strength to female hormone levels and studies showing estrogen as mediating athlete sex differences in other sensory domains. Lastly, persistent deficits in auditory processing from previous concussive and repetitive subconcussive head trauma may underlie the reduced F0 encoding in athletes, as poor F0 encoding is a hallmark of concussion injury.
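Across-trial consistency of the response, as measured here, is typically computed by correlating the averages of two random halves of the trials and repeating the split many times. A hedged NumPy sketch with our own names (the study's exact procedure may differ):

```python
import numpy as np

def response_consistency(trials, n_iter=100, rng=None):
    """Across-trial response consistency: correlate the averages of two
    random halves of the trials, repeated n_iter times, and average the
    correlations in Fisher-z space before back-transforming to r."""
    rng = np.random.default_rng() if rng is None else rng
    n_trials = trials.shape[0]
    zs = []
    for _ in range(n_iter):
        order = rng.permutation(n_trials)
        half1 = trials[order[: n_trials // 2]].mean(axis=0)
        half2 = trials[order[n_trials // 2:]].mean(axis=0)
        r = np.corrcoef(half1, half2)[0, 1]
        zs.append(np.arctanh(np.clip(r, -0.999999, 0.999999)))
    return float(np.tanh(np.mean(zs)))  # back-transform to r
```

A strongly phase-locked FFR yields values near 1; a response buried in noise yields values near 0.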
38
Characterizing Electrophysiological Response Properties of the Peripheral Auditory System Evoked by Phonemes in Normal and Hearing Impaired Ears. Ear Hear 2022; 43:1526-1539. [DOI: 10.1097/aud.0000000000001213]
39
Suresh CH, Krishnan A. Frequency-Following Response to Steady-State Vowel in Quiet and Background Noise Among Marching Band Participants With Normal Hearing. Am J Audiol 2022; 31:719-736. [PMID: 35944059 DOI: 10.1044/2022_aja-21-00226]
Abstract
OBJECTIVE: Human studies enrolling individuals at high risk for cochlear synaptopathy (CS) have reported difficulties in speech perception in adverse listening conditions. The aim of this study was to determine whether these individuals show degraded neural encoding of speech in quiet and in background noise, as reflected in neural phase-locking to both envelope periodicity and temporal fine structure (TFS). To our knowledge, no published reports have specifically examined the neural encoding of both envelope periodicity and TFS of speech stimuli (in quiet and in adverse listening conditions) in a sample with a history of loud-sound exposure who are at risk for CS. METHOD: Using the scalp-recorded frequency-following response (FFR), the authors evaluated the neural encoding of envelope periodicity (FFRENV) and TFS (FFRTFS) for a steady-state vowel (English back vowel /u/) in quiet and in speech-shaped noise presented at +5 and 0 dB SNR. Participants were young individuals with normal hearing who had participated in a marching band for at least 5 years (high-risk group) and a non-marching-band group with low noise-exposure history (low-risk group). RESULTS: There were no group differences in the neural encoding of either FFRENV or the first formant (F1) in FFRTFS, in quiet or in noise. Paradoxically, the high-risk group demonstrated enhanced representation of F2 harmonics across all stimulus conditions. CONCLUSIONS: These results appear to be in line with a music experience-dependent enhancement of F2 harmonics; however, given the sound overexposure in the high-risk group, a role for homeostatic central compensation cannot be ruled out. A larger-scale data set with varied noise-exposure backgrounds and longitudinal measurements with an array of behavioral and electrophysiological tests are needed to disentangle the complex interaction between central compensatory gain and experience-dependent enhancement.
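The FFRENV/FFRTFS distinction drawn above is commonly operationalized by recording responses to the two stimulus polarities and adding or subtracting them: addition emphasizes envelope-following activity, subtraction emphasizes spectral fine structure. A sketch of that standard decomposition (a general illustration, not the authors' pipeline):

```python
import numpy as np

def ffr_env_tfs(resp_pos, resp_neg):
    """Separate envelope- and fine-structure-dominated FFR components from
    averaged responses to opposite stimulus polarities. Envelope-locked
    activity is polarity-invariant, so it survives addition; fine-structure
    activity inverts with the stimulus, so it survives subtraction."""
    ffr_env = (resp_pos + resp_neg) / 2.0
    ffr_tfs = (resp_pos - resp_neg) / 2.0
    return ffr_env, ffr_tfs
```

Spectral magnitudes at F0 (for FFRENV) and at formant-region harmonics (for FFRTFS) would then be read off the FFTs of these two waveforms.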
40
Parker A, Skoe E, Tecoulesco L, Naigles L. A Home-Based Approach to Auditory Brainstem Response Measurement: Proof-of-Concept and Practical Guidelines. Semin Hear 2022; 43:177-196. [PMID: 36313050 PMCID: PMC9605808 DOI: 10.1055/s-0042-1756163]
Abstract
Broad-scale neuroscientific investigations of diverse human populations are difficult to implement because the primary neuroimaging methods (magnetic resonance imaging, electroencephalography [EEG]) historically have not been portable, and participants may be unable or unwilling to travel to test sites. Miniaturization of EEG technologies has now opened the door to neuroscientific fieldwork, allowing easier access to under-represented populations. Recent efforts to conduct auditory neuroscience outside a laboratory setting are reviewed, and an in-home technique for recording auditory brainstem responses (ABRs) and frequency-following responses (FFRs) is introduced. As a proof of concept, we conducted two in-home electrophysiological studies, one in 27 children aged 6 to 16 years (13 with autism spectrum disorder) and another in 12 young adults aged 18 to 27 years, using portable electrophysiological equipment to record ABRs and FFRs to click and speech stimuli, spanning rural and urban settings, multiple homes, and multiple testers. We validate our fieldwork approach by presenting waveforms and data on latencies and signal-to-noise ratios. Our findings demonstrate the feasibility and utility of home-based ABR/FFR techniques, paving the way for larger fieldwork investigations of populations that are difficult to test or recruit. We conclude this tutorial with practical tips and guidelines for recording ABRs and FFRs in the field and discuss possible clinical and research applications of this approach.
41
Kegler M, Weissbart H, Reichenbach T. The neural response at the fundamental frequency of speech is modulated by word-level acoustic and linguistic information. Front Neurosci 2022; 16:915744. [PMID: 35942153 PMCID: PMC9355803 DOI: 10.3389/fnins.2022.915744]
Abstract
Spoken language comprehension requires rapid and continuous integration of information, from lower-level acoustic to higher-level linguistic features. Much of this processing occurs in the cerebral cortex. Its neural activity exhibits, for instance, correlates of predictive processing, emerging at delays of a few hundred milliseconds. However, the auditory pathways are also characterized by extensive feedback loops from higher-level cortical areas to lower-level ones as well as to subcortical structures. Early neural activity can therefore be influenced by higher-level cognitive processes, but it remains unclear whether such feedback contributes to linguistic processing. Here, we investigated early speech-evoked neural activity that emerges at the fundamental frequency. We analyzed EEG recordings obtained while subjects listened to a story read by a single speaker. We identified a response tracking the speaker's fundamental frequency that occurred at a delay of 11 ms, while another response, elicited by the high-frequency modulation of the envelope of higher harmonics, exhibited a larger magnitude and a longer latency of about 18 ms, with an additional significant component at around 40 ms. Notably, while the earlier components of the response likely originate from subcortical structures, the later ones presumably involve contributions from cortical regions. Subsequently, we determined the magnitude of these early neural responses for each individual word in the story. We then quantified the context-independent frequency of each word and used a language model to compute context-dependent word surprisal and precision. Word surprisal represented how predictable a word is given the previous context, and word precision reflected the confidence about predicting the next word from the past context. We found that the word-level neural responses at the fundamental frequency were predominantly influenced by the acoustic features: the average fundamental frequency and its variability. Amongst the linguistic features, only context-independent word frequency showed a weak but significant modulation of the neural response to the high-frequency envelope modulation. Our results show that the early neural response at the fundamental frequency is already influenced by acoustic as well as linguistic information, suggesting top-down modulation of this neural response.
42
Bidelman GM, Chow R, Noly-Gandon A, Ryan JD, Bell KL, Rizzi R, Alain C. Transcranial Direct Current Stimulation Combined With Listening to Preferred Music Alters Cortical Speech Processing in Older Adults. Front Neurosci 2022; 16:884130. [PMID: 35873829 PMCID: PMC9298650 DOI: 10.3389/fnins.2022.884130]
Abstract
Emerging evidence suggests transcranial direct current stimulation (tDCS) can improve cognitive performance in older adults. Similarly, music listening may improve arousal and stimulate subsequent performance on memory-related tasks. We examined the synergistic effects of tDCS paired with music listening on auditory neurobehavioral measures to investigate causal evidence of short-term plasticity in speech processing among older adults. In a randomized sham-controlled crossover study, we measured how anodal tDCS over dorsolateral prefrontal cortex (DLPFC) paired with listening to autobiographically salient music alters neural speech processing in older adults compared to either music listening alone (sham stimulation) or tDCS alone. EEG assays included both frequency-following responses (FFRs) and auditory event-related potentials (ERPs) to trace neuromodulation-related changes at brainstem and cortical levels. Relative to music without tDCS (sham), we found that tDCS alone (without music) modulates the early cortical neural encoding of speech in the time frame of ∼100-150 ms. Whereas tDCS by itself appeared largely to produce suppressive effects (i.e., reducing ERP amplitude), concurrent music with tDCS restored responses to music+sham levels. The interpretation of this effect is somewhat ambiguous, however, as the neural modulation could be attributable to a true effect of tDCS or to the presence or absence of music. Still, the combined benefit of tDCS+music (above tDCS alone) was correlated with listeners' education level, suggesting the benefit of neurostimulation paired with music might depend on listener demographics. tDCS changes in speech-FFRs were not observed with DLPFC stimulation. Improvements in working memory from pre- to post-session were also associated with better speech-in-noise listening skills. Our findings provide new causal evidence that combined tDCS+music, relative to tDCS alone, (i) modulates the early (100-150 ms) cortical encoding of speech and (ii) improves working memory, a cognitive skill which may indirectly bolster noise-degraded speech perception in older listeners.
43
Sensitivity of Vowel-Evoked Envelope Following Responses to Spectra and Level of Preceding Phoneme Context. Ear Hear 2022; 43:1327-1335. [DOI: 10.1097/aud.0000000000001190]
44
Heffner CC, Myers EB, Gracco VL. Impaired perceptual phonetic plasticity in Parkinson's disease. J Acoust Soc Am 2022; 152:511. [PMID: 35931533 PMCID: PMC9299957 DOI: 10.1121/10.0012884]
Abstract
Parkinson's disease (PD) is a neurodegenerative condition primarily associated with its motor consequences. Although much of the work in the speech domain has focused on PD's consequences for production, people with PD have been shown to differ from age-matched controls in the perception of emotional prosody, loudness, and speech rate. The current study targeted the effect of PD on perceptual phonetic plasticity, defined as the ability to learn and adjust to novel phonetic input, in both second-language and native-language contexts. People with PD were compared to age-matched controls (and, for three of the studies, a younger control population) on tasks of explicit non-native speech learning and adaptation to variation in native speech (compressed rate, accent, and the use of timing information within a sentence to parse ambiguities). The participants with PD performed significantly worse on the compressed-rate task and used the duration of an ambiguous fricative to segment speech to a lesser degree than age-matched controls, indicating impaired speech perceptual abilities. Exploratory comparisons also showed that people with PD who were on medication performed significantly worse than their peers off medication on those two tasks and on the task of explicit non-native learning.
45
Liu D, Hu J, Wang S, Fu X, Wang Y, Pugh E, Henderson Sabes J, Wang S. Aging Affects Subcortical Pitch Information Encoding Differently in Humans With Different Language Backgrounds. Front Aging Neurosci 2022; 14:816100. [PMID: 35493942 PMCID: PMC9043765 DOI: 10.3389/fnagi.2022.816100]
Abstract
Aging and language background have been shown to affect pitch information encoding at the subcortical level. To study their individual and compounded effects on subcortical pitch encoding, frequency-following responses were recorded from subjects across various ages and language backgrounds. Differences in the strength and accuracy of pitch encoding were found among the groups, indicating that language experience and aging affect both the accuracy and the magnitude of pitch encoding at the subcortical level. Moreover, aging had stronger effects on the magnitude of phase-locking in the native-language speaker groups, while language background had more impact on the accuracy of pitch tracking in the older adult groups.
46
Skoe E, García-Sierra A, Ramírez-Esparza N, Jiang S. Automatic sound encoding is sensitive to language familiarity: Evidence from English monolinguals and Spanish-English bilinguals. Neurosci Lett 2022; 777:136582. [DOI: 10.1016/j.neulet.2022.136582]
47
Bush A, Chrabaszcz A, Peterson V, Saravanan V, Dastolfo-Hromack C, Lipski WJ, Richardson RM. Differentiation of speech-induced artifacts from physiological high gamma activity in intracranial recordings. Neuroimage 2022; 250:118962. [PMID: 35121181 PMCID: PMC8922158 DOI: 10.1016/j.neuroimage.2022.118962]
Abstract
There is great interest in identifying the neurophysiological underpinnings of speech production. Deep brain stimulation (DBS) surgery is unique in that it allows intracranial recordings from both cortical and subcortical regions in patients who are awake and speaking. The quality of these recordings, however, may be affected to various degrees by mechanical forces resulting from speech itself. Here we describe the presence of speech-induced artifacts in local-field potential (LFP) recordings obtained from mapping electrodes, DBS leads, and cortical electrodes. In addition to expected physiological increases in high gamma (60–200 Hz) activity during speech production, time-frequency analysis in many channels revealed a narrowband gamma component that exhibited a pattern similar to that observed in the speech audio spectrogram. This component was present to different degrees in multiple types of neural recordings. We show that this component tracks the fundamental frequency of the participant’s voice, correlates with the power spectrum of speech and has coherence with the produced speech audio. A vibration sensor attached to the stereotactic frame recorded speech-induced vibrations with the same pattern observed in the LFPs. No corresponding component was identified in any neural channel during the listening epoch of a syllable repetition task. These observations demonstrate how speech-induced vibrations can create artifacts in the primary frequency band of interest. Identifying and accounting for these artifacts is crucial for establishing the validity and reproducibility of speech-related data obtained from intracranial recordings during DBS surgery.
48
Early auditory responses to speech sounds in Parkinson's disease: preliminary data. Sci Rep 2022; 12:1019. [PMID: 35046514 PMCID: PMC8770631 DOI: 10.1038/s41598-022-05128-8]
Abstract
Parkinson's disease (PD), as a manifestation of basal ganglia dysfunction, is associated with a number of speech deficits, including reduced voice modulation and vocal output. Interestingly, previous work has shown that participants with PD show an increased feedback-driven motor response to unexpected fundamental frequency perturbations during speech production, and a heightened ability to detect differences in vocal pitch relative to control participants. Here, we explored one possible contributor to these enhanced responses. We recorded the frequency-following auditory brainstem response (FFR) to repetitions of the speech syllable [da] in PD and control participants. Participants with PD displayed a larger-amplitude FFR at the fundamental frequency of the speech stimuli relative to the control group. These preliminary results suggest that basal ganglia dysfunction in PD affects early stages of auditory processing and may reflect one component of a broader sensorimotor processing impairment associated with the disease.
49
Schelinski S, Tabas A, von Kriegstein K. Altered processing of communication signals in the subcortical auditory sensory pathway in autism. Hum Brain Mapp 2022; 43:1955-1972. [PMID: 35037743 PMCID: PMC8933247 DOI: 10.1002/hbm.25766]
Abstract
Autism spectrum disorder (ASD) is characterised by social communication difficulties. These difficulties have been mainly explained by cognitive, motivational, and emotional alterations in ASD. The communication difficulties could, however, also be associated with altered sensory processing of communication signals. Here, we assessed the functional integrity of auditory sensory pathway nuclei in ASD in three independent functional magnetic resonance imaging experiments. We focused on two aspects of auditory communication that are impaired in ASD: voice identity perception, and recognising speech‐in‐noise. We found reduced processing in adults with ASD as compared to typically developed control groups (pairwise matched on sex, age, and full‐scale IQ) in the central midbrain structure of the auditory pathway (inferior colliculus [IC]). The right IC responded less in the ASD as compared to the control group for voice identity, in contrast to speech recognition. The right IC also responded less in the ASD as compared to the control group when passively listening to vocal in contrast to non‐vocal sounds. Within the control group, the left and right IC responded more when recognising speech‐in‐noise as compared to when recognising speech without additional noise. In the ASD group, this was only the case in the left, but not the right IC. The results show that communication signal processing in ASD is associated with reduced subcortical sensory functioning in the midbrain. The results highlight the importance of considering sensory processing alterations in explaining communication difficulties, which are at the core of ASD.
Affiliation(s)
- Stefanie Schelinski: Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Alejandro Tabas: Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein: Faculty of Psychology, Chair of Cognitive and Clinical Neuroscience, Technische Universität Dresden, Dresden, Germany; Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
50
|
Bachmann FL, MacDonald EN, Hjortkjær J. Neural Measures of Pitch Processing in EEG Responses to Running Speech. Front Neurosci 2022; 15:738408. [PMID: 35002597] [PMCID: PMC8729880] [DOI: 10.3389/fnins.2021.738408] [Received: 07/08/2021] [Accepted: 11/01/2021] Open
Abstract
Linearized encoding models are increasingly employed to model cortical responses to running speech. Recent extensions to subcortical responses suggest clinical perspectives, potentially complementing auditory brainstem responses (ABRs) or frequency-following responses (FFRs) that are current clinical standards. However, while it is well-known that the auditory brainstem responds both to transient amplitude variations and the stimulus periodicity that gives rise to pitch, these features co-vary in running speech. Here, we discuss challenges in disentangling the features that drive the subcortical response to running speech. Cortical and subcortical electroencephalographic (EEG) responses to running speech from 19 normal-hearing listeners (12 female) were analyzed. Using forward regression models, we confirm that responses to the rectified broadband speech signal yield temporal response functions consistent with wave V of the ABR, as shown in previous work. Peak latency and amplitude of the speech-evoked brainstem response were correlated with standard click-evoked ABRs recorded at the vertex electrode (Cz). Similar responses could be obtained using the fundamental frequency (F0) of the speech signal as model predictor. However, simulations indicated that dissociating responses to temporal fine structure at the F0 from broadband amplitude variations is not possible given the high co-variance of the features and the poor signal-to-noise ratio (SNR) of subcortical EEG responses. In cortex, both simulations and data replicated previous findings indicating that envelope tracking on frontal electrodes can be dissociated from responses to slow variations in F0 (relative pitch). Yet, no association between subcortical F0-tracking and cortical responses to relative pitch could be detected. 
These results indicate that while subcortical speech responses are comparable to click-evoked ABRs, dissociating pitch-related processing in the auditory brainstem may be challenging with natural speech stimuli.
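The forward (encoding) model approach described in this abstract regresses the EEG against time-lagged copies of a stimulus feature to recover a temporal response function (TRF). A minimal sketch of that idea, assuming a single-channel response, a 1-D stimulus predictor (e.g. a rectified broadband envelope), and closed-form ridge regression; the function name and parameters are illustrative, not the authors' implementation:

```python
import numpy as np

def estimate_trf(stimulus, eeg, lags, alpha=1.0):
    """Estimate a temporal response function by lagged ridge regression.

    stimulus: 1-D predictor (e.g. rectified broadband speech envelope)
    eeg: 1-D response at one electrode, same length as stimulus
    lags: iterable of integer sample lags (positive = response follows stimulus)
    alpha: ridge regularisation strength
    """
    lags = list(lags)
    n = len(stimulus)
    # Design matrix: one column per lag, each a shifted copy of the stimulus.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:n + lag, j] = stimulus[-lag:]
    # Closed-form ridge solution: w = (X'X + alpha*I)^-1 X'y
    XtX = X.T @ X + alpha * np.eye(len(lags))
    return np.linalg.solve(XtX, X.T @ eeg)  # one TRF weight per lag
```

In this framing, the peak latency and amplitude of the recovered weight vector play the role of the ABR-like response features discussed in the abstract; with co-varying predictors such as the broadband envelope and F0 fine structure, the columns of the design matrix become nearly collinear, which is exactly the dissociation problem the authors report.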
Affiliation(s)
- Florine L Bachmann: Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
- Ewen N MacDonald: Department of Systems Design Engineering, University of Waterloo, Waterloo, ON, Canada
- Jens Hjortkjær: Hearing Systems Section, Department of Health Technology, Technical University of Denmark, Lyngby, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital - Amager and Hvidovre, Copenhagen, Denmark
|