1
Alemu RZ, Gorodensky J, Gill S, Cushing SL, Papsin BC, Gordon KA. Binaural responses to a speech syllable are altered in children with hearing loss: Evidence from the frequency-following response. Hear Res 2024;450:109068. PMID: 38936172. DOI: 10.1016/j.heares.2024.109068.
Abstract
BACKGROUND & RATIONALE: In prior work using non-speech stimuli, children with hearing loss show impaired perception of binaural cues and no significant change in cortical responses to bilateral versus unilateral stimulation. The aims of the present study were to: 1) identify bilateral responses to envelope and spectral components of a speech syllable using the frequency-following response (FFR), 2) determine whether abnormalities in the bilateral FFR occur in children with hearing loss, and 3) assess functional consequences of abnormal bilateral FFRs for perception of binaural timing cues.
METHODS: A single-syllable speech stimulus (/dα/) was presented to each ear individually and bilaterally. Participants were 9 children with normal hearing (MAge = 12.1 ± 2.5 years) and 6 children with bilateral hearing loss who were experienced bilateral hearing aid users (MAge = 14.0 ± 2.6 years). FFR temporal and spectral peak amplitudes were compared between listening conditions and groups using linear mixed model regression analyses. Behavioral sensitivity to binaural cues was measured by lateralization responses indicating whether sounds came from the right or left side of the head.
RESULTS: Both temporal and spectral peaks in FFR responses increased in amplitude in the bilateral compared to unilateral listening conditions in children with normal hearing. These measures of "bilateral advantage" were reduced in the group of children with bilateral hearing loss and were associated with decreased sensitivity to interaural timing differences.
CONCLUSION: This study is the first to show that bilateral responses in both temporal and spectral domains can be measured in children using the FFR and that they are altered in children with hearing loss, with consequences for binaural hearing.
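The comparison above rests on extracting spectral peak amplitudes (e.g., at the F0 of /dα/) from averaged FFR waveforms. A minimal sketch of how such a peak might be measured from an averaged response, using synthetic data and a hypothetical `spectral_peak_amplitude` helper (the study's actual analysis pipeline is not described here):

```python
import numpy as np

def spectral_peak_amplitude(response, fs, target_hz, half_bw=5.0):
    """Peak of the magnitude spectrum of an averaged FFR near a target frequency.

    response : 1-D averaged evoked response
    fs       : sampling rate in Hz
    target_hz: frequency of interest (e.g., the stimulus F0)
    half_bw  : search half-bandwidth around the target, in Hz
    """
    n = len(response)
    spec = np.abs(np.fft.rfft(response * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    band = (freqs >= target_hz - half_bw) & (freqs <= target_hz + half_bw)
    return spec[band].max()

# Synthetic check: a strong 100 Hz "F0" component plus a weaker third harmonic.
fs = 8000
t = np.arange(0, 0.5, 1 / fs)
ffr = 1.0 * np.sin(2 * np.pi * 100 * t) + 0.2 * np.sin(2 * np.pi * 300 * t)
amp_f0 = spectral_peak_amplitude(ffr, fs, 100.0)
amp_h3 = spectral_peak_amplitude(ffr, fs, 300.0)
```

Amplitudes extracted this way per condition (left, right, bilateral) could then feed a mixed-model comparison as in the study.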
Affiliation(s)
- R Z Alemu
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- J Gorodensky
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- S Gill
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- S L Cushing
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada; Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- B C Papsin
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada; Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- K A Gordon
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head & Neck Surgery, University of Toronto, Toronto, ON, Canada; Institute of Medical Science, University of Toronto, Toronto, ON, Canada
2
Honda CT, Clayards M, Baum SR. Individual differences in the consistency of neural and behavioural responses to speech sounds. Brain Res 2024;1845:149208. PMID: 39218332. DOI: 10.1016/j.brainres.2024.149208.
Abstract
There are documented individual differences among adults in the consistency of speech sound processing, both at neural and behavioural levels. Some adults show more consistent neural responses to speech sounds than others, as measured by an event-related potential called the frequency-following response (FFR); similarly, some adults show more consistent behavioural responses to native speech sounds than others, as measured by two-alternative forced choice (2AFC) and visual analog scaling (VAS) tasks. Adults also differ in how successfully they can perceive non-native speech sounds. Interestingly, it remains unclear whether these differences are related within individuals. In the current study, native English-speaking adults completed native phonetic perception tasks (2AFC and VAS), a non-native (German) phonetic perception task, and an FFR recording session. From these tasks, we derived measures of the consistency of participants' neural and behavioural responses to native speech as well as their non-native perception ability. We then examined the relationships among individual differences in these measures. Analysis of the behavioural measures revealed that more consistent responses to native sounds predicted more successful perception of unfamiliar German sounds. Analysis of neural and behavioural data did not reveal clear relationships between FFR consistency and our phonetic perception measures. This multimodal work furthers our understanding of individual differences in speech processing among adults, and may eventually lead to individualized approaches for enhancing non-native language acquisition in adulthood.
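A common way to quantify the consistency of neural responses across FFR trials is a split-half correlation: repeatedly average two random halves of the trials and correlate the resulting waveforms. A sketch under assumed data shapes (the study's exact consistency metric is not specified here):

```python
import numpy as np

def split_half_consistency(trials, n_splits=100, seed=0):
    """Mean Pearson correlation between averages of random half-splits of trials.

    trials : (n_trials, n_samples) array of single-trial FFR epochs.
    Higher values indicate a more consistent neural response across trials.
    """
    rng = np.random.default_rng(seed)
    n = trials.shape[0]
    rs = []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        a = trials[idx[: n // 2]].mean(axis=0)
        b = trials[idx[n // 2:]].mean(axis=0)
        rs.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(rs))

# Synthetic check: trials sharing a common signal vs. pure-noise trials.
rng = np.random.default_rng(1)
t = np.linspace(0, 0.25, 2000)
signal = np.sin(2 * np.pi * 110 * t)
consistent_trials = signal + 0.5 * rng.standard_normal((200, t.size))
noise_trials = 0.5 * rng.standard_normal((200, t.size))
r_hi = split_half_consistency(consistent_trials)
r_lo = split_half_consistency(noise_trials)
```

Per-participant scores from a metric like this could then be correlated with behavioural consistency measures (2AFC, VAS), as the study does.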
Affiliation(s)
- Claire T Honda
- Integrated Program in Neuroscience, McGill University, Montreal, Canada; Centre for Research on Brain, Language and Music, Montreal, Canada
- Meghan Clayards
- Centre for Research on Brain, Language and Music, Montreal, Canada; School of Communication Sciences and Disorders, McGill University, Montreal, Canada; Department of Linguistics, McGill University, Montreal, Canada
- Shari R Baum
- Centre for Research on Brain, Language and Music, Montreal, Canada; School of Communication Sciences and Disorders, McGill University, Montreal, Canada
3
Ding Y, Jiang H, Xu N, Li L. Inhibitory effects of prepulse stimuli on the electrophysiological responses to startle stimuli in the deep layers of the superior colliculus. Front Neurosci 2024;18:1446929. PMID: 39211433. PMCID: PMC11359569. DOI: 10.3389/fnins.2024.1446929.
Abstract
Background: Prepulse inhibition (PPI) is a phenomenon in which a weak prepulse stimulus inhibits the startle reflex to a subsequent stronger stimulus; it can be induced by various sensory modalities, including visual, tactile, and auditory stimuli.
Methods: This study investigates the neural mechanisms underlying auditory PPI by focusing on the deep layers of the superior colliculus (deepSC) and the inferior colliculus (IC) in rats. Nineteen male Sprague-Dawley rats were implanted with electrodes in the left deepSC and the right IC, and electrophysiological recordings were conducted under anesthesia to observe the frequency-following responses (FFRs) to startle stimuli with and without prepulse stimuli.
Results: In the deepSC, narrowband noise as a prepulse stimulus significantly inhibited the envelope component of the startle response, while the fine structure component remained unaffected. This inhibitory effect was not observed in the IC or when the prepulse stimulus was a gap.
Conclusion: These findings suggest that the deepSC plays a crucial role in the neural circuitry of PPI, particularly in the modulation of the envelope component of the startle response. The differential effects of narrowband-noise and gap prepulses also indicate distinct neural pathways for sound-induced PPI and gap-PPI. Understanding these mechanisms could provide insights into sensory processing and potential therapeutic targets for disorders involving impaired PPI, such as tinnitus.
Affiliation(s)
- Yu Ding
- School of Psychology, Beijing Language and Culture University, Beijing, China
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Huan Jiang
- School of Psychology, Beijing Language and Culture University, Beijing, China
- Na Xu
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences, Peking University, Beijing, China
4
Bidelman GM, Sisson A, Rizzi R, MacLean J, Baer K. Myogenic artifacts masquerade as neuroplasticity in the auditory frequency-following response. Front Neurosci 2024;18:1422903. PMID: 39040631. PMCID: PMC11260751. DOI: 10.3389/fnins.2024.1422903.
Abstract
The frequency-following response (FFR) is an evoked potential that provides a neural index of complex sound encoding in the brain. FFRs have been widely used to characterize speech and music processing, experience-dependent neuroplasticity (e.g., learning and musicianship), and biomarkers for hearing and language-based disorders that distort receptive communication abilities. It is widely assumed that FFRs stem from a mixture of phase-locked neurogenic activity from the brainstem and cortical structures along the hearing neuraxis. In this study, we challenge this prevailing view by demonstrating that upwards of ~50% of the FFR can originate from an unexpected myogenic source: contamination from the postauricular muscle (PAM) vestigial startle reflex. We measured PAM, transient auditory brainstem responses (ABRs), and sustained frequency-following response (FFR) potentials reflecting myogenic (PAM) and neurogenic (ABR/FFR) responses in young, normal-hearing listeners with varying degrees of musical training. We first establish that PAM artifact is present in all ears, varies with electrode proximity to the muscle, and can be experimentally manipulated by directing listeners' eye gaze toward the ear of sound stimulation. We then show this muscular noise easily confounds auditory FFRs, spuriously amplifying responses 3-4-fold with tandem PAM contraction and even explaining putative FFR enhancements observed in highly skilled musicians. Our findings expose a new and unrecognized myogenic source to the FFR that drives its large inter-subject variability and cast doubt on whether changes in the response typically attributed to neuroplasticity/pathology are solely of brain origin.
Affiliation(s)
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Cognitive Science Program, Indiana University, Bloomington, IN, United States
- Alexandria Sisson
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Rose Rizzi
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Jessica MacLean
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, United States
- Program in Neuroscience, Indiana University, Bloomington, IN, United States
- Kaitlin Baer
- School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
- Veterans Affairs Medical Center, Memphis, TN, United States
5
Liang S, Xu J, Liu H, Liang R, Guo Z, Lu M, Liu S, Gao J, Ye Z, Yi H. Automatic Recognition of Auditory Brainstem Response Waveforms Using a Deep Learning-Based Framework. Otolaryngol Head Neck Surg 2024. PMID: 38822760. DOI: 10.1002/ohn.840.
Abstract
OBJECTIVE: Recognition of auditory brainstem response (ABR) waveforms may be challenging, particularly for older individuals or those with hearing loss. This study aimed to investigate deep learning frameworks to improve the automatic recognition of ABR waveforms in participants with varying ages and hearing levels.
STUDY DESIGN: The research used a descriptive study design to collect and analyze pure tone audiometry and ABR data from 100 participants.
SETTING: The research was conducted at a tertiary academic medical center, specifically at the Clinical Audiology Center of Tsinghua Chang Gung Hospital (Beijing, China).
METHODS: Data from 100 participants were collected and categorized into four groups based on age and hearing level. Features from both time-domain and frequency-domain ABR signals were extracted and combined with demographic factors, such as age, sex, pure-tone thresholds, stimulus intensity, and original signal sequences, to generate feature vectors. An enhanced Wide&Deep model was utilized, incorporating the Light multi-layer perceptron (MLP) model, to train the recognition of ABR waveforms. The recognition accuracy (ACC) of each model was calculated for the overall data set and for each group.
RESULTS: The ACC rates of the Light-MLP model were 97.8%, 97.2%, 93.8%, and 92.0% for Groups 1 to 4, respectively, with a weighted average ACC rate of 95.4%. For the Wide&Deep model, the ACC rates were 93.4%, 90.8%, 92.0%, and 88.3% for Groups 1 to 4, respectively, with a weighted average ACC rate of 91.0%.
CONCLUSION: Both the Light-MLP model and the Wide&Deep model demonstrated excellent ACC in automatic recognition of ABR waveforms across participants of diverse ages and hearing levels. Although the Wide&Deep model performed slightly worse than the Light-MLP model, partly due to the limited sample size, its performance may improve further with an expanded data set.
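The weighted average ACC depends on how the 100 participants split across the four groups, which the abstract does not state. A toy illustration with assumed equal group sizes (so the result here, 95.2%, deliberately differs slightly from the paper's reported 95.4%):

```python
def weighted_accuracy(acc_by_group, n_by_group):
    """Sample-size-weighted average of per-group recognition accuracies."""
    total = sum(n_by_group)
    return sum(a * n for a, n in zip(acc_by_group, n_by_group)) / total

# Per-group ACC rates for the Light-MLP model, from the abstract.
acc = [0.978, 0.972, 0.938, 0.920]
# Group sizes are NOT given in the abstract; an equal split is assumed here.
n = [25, 25, 25, 25]
avg = weighted_accuracy(acc, n)   # 0.952 under the assumed split
```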
Affiliation(s)
- Sichao Liang
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Jia Xu
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Haixu Liu
- Institute of Integrated Circuit, Tsinghua University, Beijing, China
- Renhe Liang
- Beijing Jingyi Tianhe Intelligent Equipment Co., Ltd., Beijing, China
- Zhenping Guo
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Manlin Lu
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Sisi Liu
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Juanjuan Gao
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
- Zuochang Ye
- Institute of Integrated Circuit, Tsinghua University, Beijing, China
- Haijin Yi
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tsinghua Changgung Hospital, School of Clinical Medicine, Tsinghua University, Beijing, China
6
McFarlane KA, Sanchez JT. Effects of Temporal Processing on Speech-in-Noise Perception in Middle-Aged Adults. Biology 2024;13:371. PMID: 38927251. PMCID: PMC11200514. DOI: 10.3390/biology13060371.
Abstract
Auditory temporal processing is a vital component of auditory stream segregation, or the process in which complex sounds are separated and organized into perceptually meaningful objects. Temporal processing can degrade prior to hearing loss, and is suggested to be a contributing factor to difficulties with speech-in-noise perception in normal-hearing listeners. The current study tested this hypothesis in middle-aged adults-an under-investigated cohort, despite being the age group where speech-in-noise difficulties are first reported. In 76 participants, three mechanisms of temporal processing were measured: peripheral auditory nerve function using electrocochleography, subcortical encoding of periodic speech cues (i.e., fundamental frequency; F0) using the frequency following response, and binaural sensitivity to temporal fine structure (TFS) using a dichotic frequency modulation detection task. Two measures of speech-in-noise perception were administered to explore how contributions of temporal processing may be mediated by different sensory demands present in the speech perception task. This study supported the hypothesis that temporal coding deficits contribute to speech-in-noise difficulties in middle-aged listeners. Poorer speech-in-noise perception was associated with weaker subcortical F0 encoding and binaural TFS sensitivity, but in different contexts, highlighting that diverse aspects of temporal processing are differentially utilized based on speech-in-noise task characteristics.
Affiliation(s)
- Kailyn A. McFarlane
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA
- Jason Tait Sanchez
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA
- Knowles Hearing Center, Northwestern University, Evanston, IL 60208, USA
- Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA
7
Bolt E, Giroud N. Auditory Encoding of Natural Speech at Subcortical and Cortical Levels Is Not Indicative of Cognitive Decline. eNeuro 2024;11:ENEURO.0545-23.2024. PMID: 38658138. PMCID: PMC11082929. DOI: 10.1523/eneuro.0545-23.2024.
Abstract
More and more patients worldwide are diagnosed with dementia, which emphasizes the urgent need for early detection markers. In this study, we built on the auditory hypersensitivity theory of a previous study-which postulated that responses to auditory input in the subcortex as well as cortex are enhanced in cognitive decline-and examined auditory encoding of natural continuous speech at both neural levels for its indicative potential for cognitive decline. We recruited study participants aged 60 years and older, who were divided into two groups based on the Montreal Cognitive Assessment, one group with low scores (n = 19, participants with signs of cognitive decline) and a control group (n = 25). Participants completed an audiometric assessment and then we recorded their electroencephalography while they listened to an audiobook and click sounds. We derived temporal response functions and evoked potentials from the data and examined response amplitudes for their potential to predict cognitive decline, controlling for hearing ability and age. Contrary to our expectations, no evidence of auditory hypersensitivity was observed in participants with signs of cognitive decline; response amplitudes were comparable in both cognitive groups. Moreover, the combination of response amplitudes showed no predictive value for cognitive decline. These results challenge the proposed hypothesis and emphasize the need for further research to identify reliable auditory markers for the early detection of cognitive decline.
Affiliation(s)
- Elena Bolt
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich 8050, Switzerland
- International Max Planck Research School on the Life Course (IMPRS LIFE), University of Zurich, Zurich 8050, Switzerland
- Nathalie Giroud
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich 8050, Switzerland
- International Max Planck Research School on the Life Course (IMPRS LIFE), University of Zurich, Zurich 8050, Switzerland
- Language & Medicine Centre Zurich, Competence Centre of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich 8050, Switzerland
8
Bidelman G, Sisson A, Rizzi R, MacLean J, Baer K. Myogenic artifacts masquerade as neuroplasticity in the auditory frequency-following response (FFR). bioRxiv 2024:2023.10.27.564446. Preprint. PMID: 37961324. PMCID: PMC10634913. DOI: 10.1101/2023.10.27.564446.
Abstract
The frequency-following response (FFR) is an evoked potential that provides a "neural fingerprint" of complex sound encoding in the brain. FFRs have been widely used to characterize speech and music processing, experience-dependent neuroplasticity (e.g., learning, musicianship), and biomarkers for hearing and language-based disorders that distort receptive communication abilities. It is widely assumed that FFRs stem from a mixture of phase-locked neurogenic activity from brainstem and cortical structures along the hearing neuraxis. Here, we challenge this prevailing view by demonstrating that upwards of ~50% of the FFR can originate from a non-neural source: contamination from the postauricular muscle (PAM) vestigial startle reflex. We first establish that PAM artifact is present in all ears, varies with electrode proximity to the muscle, and can be experimentally manipulated by directing listeners' eye gaze toward the ear of sound stimulation. We then show this muscular noise easily confounds auditory FFRs, spuriously amplifying responses 3- to 4-fold with tandem PAM contraction and even explaining putative FFR enhancements observed in highly skilled musicians. Our findings expose a new and unrecognized myogenic source of the FFR that drives its large inter-subject variability and cast doubt on whether changes in the response typically attributed to neuroplasticity/pathology are solely of brain origin.
9
Jacxsens L, Biot L, Escera C, Gilles A, Cardon E, Van Rompaey V, De Hertogh W, Lammers MJW. Frequency-Following Responses in Sensorineural Hearing Loss: A Systematic Review. J Assoc Res Otolaryngol 2024;25:131-147. PMID: 38334887. PMCID: PMC11018579. DOI: 10.1007/s10162-024-00932-7.
Abstract
PURPOSE: This systematic review aims to assess the impact of sensorineural hearing loss (SNHL) on various frequency-following response (FFR) parameters.
METHODS: Following PRISMA guidelines, a systematic review was conducted using the PubMed, Web of Science, and Scopus databases up to January 2023. Studies evaluating FFRs in patients with SNHL and normal-hearing controls were included.
RESULTS: Sixteen case-control studies were included, revealing variability in acquisition parameters. In the time domain, patients with SNHL exhibited prolonged latencies. The specific waves that were prolonged differed across studies. There was no consensus regarding wave amplitude in the time domain. In the frequency domain, focusing on studies that elicited FFRs with stimuli of 170 ms or longer, participants with SNHL displayed a significantly smaller fundamental frequency (F0). Results regarding changes in the temporal fine structure (TFS) were inconsistent.
CONCLUSION: Patients with SNHL may require more time for processing (speech) stimuli, reflected in prolonged latencies. However, the exact timing of this delay remains unclear. Additionally, when presenting longer stimuli (≥ 170 ms), patients with SNHL show difficulties tracking the F0 of (speech) stimuli. No definite conclusions could be drawn on changes in wave amplitude in the time domain and the TFS in the frequency domain. Patient characteristics, acquisition parameters, and FFR outcome parameters differed greatly across studies. Future studies should be performed in larger and carefully matched subject groups, using longer stimuli presented at the same intensity in dB HL for both groups, or at a carefully determined maximum comfortable loudness level.
Affiliation(s)
- Laura Jacxsens
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Drie Eikenstraat 655, 2650, Edegem, Belgium
- Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Lana Biot
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Drie Eikenstraat 655, 2650, Edegem, Belgium
- Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Santa Rosa 39-57, 08950, Esplugues de Llobregat, Catalonia, Spain
- Annick Gilles
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Drie Eikenstraat 655, 2650, Edegem, Belgium
- Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Department of Education, Health and Social Work, University College Ghent, Ghent, Belgium
- Emilie Cardon
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Drie Eikenstraat 655, 2650, Edegem, Belgium
- Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Vincent Van Rompaey
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Drie Eikenstraat 655, 2650, Edegem, Belgium
- Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Willem De Hertogh
- Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Marc J W Lammers
- Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Drie Eikenstraat 655, 2650, Edegem, Belgium
- Resonant Labs Antwerp, Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
10
Yu ACL, McAllister R, Mularoni N, To CKS. Brief Report: Atypical Temporal Sensitivity in Coarticulation in Autism: Evidence from Sibilant-Vowel Interaction in Cantonese. J Autism Dev Disord 2024. PMID: 38431693. DOI: 10.1007/s10803-024-06258-w.
Abstract
PURPOSE: Atypicalities in the prosodic aspects of speech are commonly considered in clinical assessments of autism. While an increasing number of studies use objective measures to assess prosodic deficits, such studies have primarily focused on the intonational and rhythmic aspects of prosody. Little is known about prosodic deficits that are reflected at the segmental level, despite the strong connection between prosody and segmental realization. This study examines the nature of sibilant-vowel coarticulation among male adult native speakers of Cantonese with autism and those without.
METHODS: Fifteen Cantonese-speaking autistic (ASD) adults (mean age = 25 years) and 23 neurotypical (NT) adults (mean age = 20 years) participated. Each participant read aloud 42 syllables with a sibilant onset in a carrier phrase. Spectral mean, variance, skewness, and kurtosis were measured and regressed on vocalic rounding (rounded vs. unrounded), cohort (ASD vs. NT), sibilant duration, and articulation rate.
RESULTS: While neurotypical participants exhibit sibilant-vowel coarticulation that is sensitive to variation in sibilant duration, autistic participants show no sensitivity to segmental temporal changes.
CONCLUSION: These findings point to the potential for atypicalities in prosody-segment interaction as an important characteristic of autistic speech.
Affiliation(s)
- Carol K S To
- The University of Hong Kong, Hong Kong SAR, China
11
Schüller A, Schilling A, Krauss P, Reichenbach T. The Early Subcortical Response at the Fundamental Frequency of Speech Is Temporally Separated from Later Cortical Contributions. J Cogn Neurosci 2024;36:475-491. PMID: 38165737. DOI: 10.1162/jocn_a_02103.
Abstract
Most parts of speech are voiced, exhibiting a degree of periodicity with a fundamental frequency and many higher harmonics. Some neural populations respond to this temporal fine structure, in particular at the fundamental frequency. This frequency-following response (FFR) to speech consists of both subcortical and cortical contributions and can be measured through EEG as well as through magnetoencephalography (MEG), although the two differ in the aspects of neural activity that they capture: EEG is sensitive to radial, tangential, and deep sources, whereas MEG is largely restricted to measuring tangential and superficial neural activity. EEG responses to continuous speech have shown an early subcortical contribution, at a latency of around 9 msec, in agreement with MEG measurements in response to short speech tokens, whereas MEG responses to continuous speech have not yet revealed such an early component. Here, we analyze MEG responses to long segments of continuous speech. We find an early subcortical response at latencies of 4-11 msec, followed by later right-lateralized cortical activities at delays of 20-58 msec as well as potential subcortical activities. Our results show that the early subcortical component of the FFR to continuous speech can be measured with MEG in populations of participants and that its latency agrees with that measured with EEG. They furthermore show that the early subcortical component is temporally well separated from later cortical contributions, enabling an independent assessment of both components toward further aspects of speech processing.
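Latencies like the ~9 ms subcortical component above are, at their core, the delay at which the measured response best matches the stimulus fundamental waveform. A deliberately simplified cross-correlation sketch with synthetic signals (not the authors' actual regression-based method; all names here are illustrative):

```python
import numpy as np

def response_latency(stimulus, response, fs, max_lag_ms=60):
    """Latency (ms) at which the response best correlates with the stimulus.

    Scans positive lags only (response trails stimulus), up to max_lag_ms.
    """
    max_lag = int(fs * max_lag_ms / 1000)
    best_lag, best_r = 0, -np.inf
    for lag in range(max_lag + 1):
        r = np.corrcoef(stimulus[: len(stimulus) - lag], response[lag:])[0, 1]
        if r > best_r:
            best_r, best_lag = r, lag
    return 1000 * best_lag / fs

# Synthetic check: a "response" that is the stimulus delayed by 9 ms.
rng = np.random.default_rng(0)
fs = 1000
stim = rng.standard_normal(fs)                     # 1 s of broadband signal
delay = 9                                          # samples = 9 ms at fs = 1000
resp = np.concatenate([np.zeros(delay), stim[:-delay]])
lat = response_latency(stim, resp, fs)
```

In practice, separating the early (subcortical) and later (cortical) components requires response functions with multiple peaks rather than a single best lag, which is where the temporal separation reported above matters.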
Affiliation(s)
- Patrick Krauss
- Friedrich-Alexander-Universität Erlangen-Nürnberg
- Universitätsklinikum Erlangen
12
Parida S, Yurasits K, Cancel VE, Zink ME, Mitchell C, Ziliak MC, Harrison AV, Bartlett EL, Parthasarathy A. Rapid and objective assessment of auditory temporal processing using dynamic amplitude-modulated stimuli. bioRxiv [Preprint] 2024: 2024.01.28.577641. PMID: 38352339. PMCID: PMC10862703. DOI: 10.1101/2024.01.28.577641.
Abstract
Auditory neural coding of speech-relevant temporal cues can be noninvasively probed using envelope following responses (EFRs), neural ensemble responses phase-locked to the stimulus amplitude envelope. EFRs emphasize different neural generators, such as the auditory brainstem or auditory cortex, by altering the temporal modulation rate of the stimulus. EFRs can be an important diagnostic tool to assess auditory neural coding deficits that go beyond traditional audiometric estimations. Existing approaches to measure EFRs use discrete amplitude modulated (AM) tones of varying modulation frequencies, which is time-consuming and inefficient, impeding clinical translation. Here we present a faster and more efficient framework to measure EFRs across a range of AM frequencies using stimuli that dynamically vary in modulation rates, combined with spectrally specific analyses that offer optimal spectrotemporal resolution. EFRs obtained from several species (humans, Mongolian gerbils, Fischer-344 rats, and CBA/CaJ mice) showed robust, high-SNR tracking of dynamic AM trajectories (up to 800 Hz in humans, and 1.4 kHz in rodents), with a fivefold decrease in recording time and thirtyfold increase in spectrotemporal resolution. EFR amplitudes between dynamic AM stimuli and traditional discrete AM tokens within the same subjects were highly correlated (94% variance explained) across species. Hence, we establish a time-efficient and spectrally specific approach to measure EFRs. These results could yield novel clinical diagnostics for precision audiology approaches by enabling rapid, objective assessment of temporal processing along the entire auditory neuraxis.
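The general idea of a dynamically varying AM rate with a spectrally specific readout can be illustrated with a toy sketch: sweep the modulation rate, keep the modulator's known phase trajectory, and demodulate the response against that trajectory in short windows. This is a generic illustration of the concept, not the authors' actual stimulus or analysis; all values are hypothetical.

```python
import numpy as np

fs = 10000
dur = 2.0
t = np.arange(0, dur, 1 / fs)

# AM rate sweeps linearly from 20 Hz to 200 Hz; the modulator's
# instantaneous phase is the running integral of the instantaneous rate.
fm = np.linspace(20, 200, t.size)
phase = 2 * np.pi * np.cumsum(fm) / fs
carrier = np.sin(2 * np.pi * 1000 * t)           # 1 kHz carrier tone
stimulus = (1 + np.sin(phase)) / 2 * carrier     # 100%-depth dynamic AM

# A toy "neural response" that follows the modulator with added noise.
rng = np.random.default_rng(1)
response = 0.5 * np.sin(phase) + rng.normal(0, 1, t.size)

# Spectrally specific readout: demodulate the response against the known
# modulator phase in 200 ms windows to track following strength over time.
win = int(0.2 * fs)
tracking = [
    np.abs(np.mean(response[i:i + win] * np.exp(-1j * phase[i:i + win])))
    for i in range(0, t.size - win, win)
]
print(np.round(tracking, 3))
```

Because the demodulator matches the swept phase exactly, the readout stays locked to the moving modulation rate instead of smearing energy across a broad spectrum, which is what buys the spectrotemporal resolution described above.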
Affiliation(s)
- Satyabrata Parida
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA
- Kimberly Yurasits
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Victoria E. Cancel
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Maggie E. Zink
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Claire Mitchell
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Meredith C. Ziliak
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA
- Audrey V. Harrison
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA
- Edward L. Bartlett
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA
- Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Purdue Institute for Integrative Neuroscience, Purdue University, West Lafayette, IN, USA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Department of BioEngineering, University of Pittsburgh, Pittsburgh, PA, USA
- Department of Otolaryngology, University of Pittsburgh, Pittsburgh, PA, USA
13
Mao X, Zhang Z, Yang Y, Chen Y, Wang Y, Wang W. Characteristics of different Mandarin pronunciation element perception: evidence based on a multifeature paradigm for recording MMN and P3a components of phonemic changes in speech sounds. Front Neurosci 2024; 17:1277129. PMID: 38264493. PMCID: PMC10804857. DOI: 10.3389/fnins.2023.1277129.
Abstract
Background As a tonal language, Mandarin Chinese has the following pronunciation elements for each syllable: the vowel, consonant, tone, duration, and intensity. This study aimed to characterize the auditory-related cortical processing of these different pronunciation elements. Methods A Mandarin pronunciation multifeature paradigm was designed, during which a standard stimulus and five different phonemic deviant stimuli were presented. The electroencephalogram (EEG) data were recorded with 256-electrode high-density EEG equipment. Time-domain and source localization analyses were conducted to demonstrate waveform characteristics and locate the sources of the cortical processing of mismatch negativity (MMN) and P3a components following different stimuli. Results Vowel and consonant differences elicited distinct MMN and P3a components, but tone and duration differences did not. Intensity differences elicited distinct MMN components but not P3a components. For MMN and P3a components, the activated cortical areas were mainly in the frontal-temporal lobe. However, the regions and intensities of the cortical activation differed significantly among the components for the various deviant stimuli. The cortical areas activated by the MMN and P3a components elicited by vowels and consonants appeared larger and more intensely activated. Conclusion The auditory processing centers use different auditory-related cognitive resources when processing different Mandarin pronunciation elements. Vowels and consonants carry more information for speech comprehension; moreover, more neurons in the cortex may be involved in the recognition and cognitive processing of these elements.
Affiliation(s)
- Xiang Mao
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Institute of Otolaryngology of Tianjin, Tianjin, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin, China
- Otolaryngology Clinical Quality Control Centre, Tianjin, China
- Ziyue Zhang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Institute of Otolaryngology of Tianjin, Tianjin, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin, China
- Otolaryngology Clinical Quality Control Centre, Tianjin, China
- Yijing Yang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Institute of Otolaryngology of Tianjin, Tianjin, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin, China
- Otolaryngology Clinical Quality Control Centre, Tianjin, China
- Yu Chen
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Institute of Otolaryngology of Tianjin, Tianjin, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin, China
- Otolaryngology Clinical Quality Control Centre, Tianjin, China
- Yue Wang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Institute of Otolaryngology of Tianjin, Tianjin, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin, China
- Otolaryngology Clinical Quality Control Centre, Tianjin, China
- Wei Wang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Institute of Otolaryngology of Tianjin, Tianjin, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin, China
- Otolaryngology Clinical Quality Control Centre, Tianjin, China
14
Bachmann FL, Kulasingham JP, Eskelund K, Enqvist M, Alickovic E, Innes-Brown H. Extending Subcortical EEG Responses to Continuous Speech to the Sound-Field. Trends Hear 2024; 28:23312165241246596. PMID: 38738341. DOI: 10.1177/23312165241246596.
Abstract
The auditory brainstem response (ABR) is a valuable clinical tool for objective hearing assessment, which is conventionally detected by averaging neural responses to thousands of short stimuli. Progressing beyond these unnatural stimuli, brainstem responses to continuous speech presented via earphones have been recently detected using linear temporal response functions (TRFs). Here, we extend earlier studies by measuring subcortical responses to continuous speech presented in the sound-field, and assess the amount of data needed to estimate brainstem TRFs. Electroencephalography (EEG) was recorded from 24 normal hearing participants while they listened to clicks and stories presented via earphones and loudspeakers. Subcortical TRFs were computed after accounting for non-linear processing in the auditory periphery by either stimulus rectification or an auditory nerve model. Our results demonstrated that subcortical responses to continuous speech could be reliably measured in the sound-field. TRFs estimated using auditory nerve models outperformed simple rectification, and 16 minutes of data was sufficient for the TRFs of all participants to show clear wave V peaks for both earphones and sound-field stimuli. Subcortical TRFs to continuous speech were highly consistent in both earphone and sound-field conditions, and with click ABRs. However, sound-field TRFs required slightly more data (16 minutes) to achieve clear wave V peaks compared to earphone TRFs (12 minutes), possibly due to effects of room acoustics. By investigating subcortical responses to sound-field speech stimuli, this study lays the groundwork for bringing objective hearing assessment closer to real-life conditions, which may lead to improved hearing evaluations and smart hearing technologies.
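The TRF approach described above (regressing the ongoing response on a non-linearly transformed stimulus) can be sketched with ridge-regularized regression on a rectified stimulus. This is a generic illustration on synthetic data, not the authors' exact pipeline or their auditory-nerve-model variant; the 8 ms latency and all parameters are hypothetical.

```python
import numpy as np

def estimate_trf(stimulus, response, fs, lags_ms=(0, 30), reg=1e-2):
    """Ridge-regularized TRF: solve response ≈ lagged-stimulus @ trf."""
    lags = np.arange(int(lags_ms[0] * fs / 1000), int(lags_ms[1] * fs / 1000))
    X = np.stack([np.roll(stimulus, lag) for lag in lags], axis=1)
    XtX = X.T @ X + reg * len(stimulus) * np.eye(len(lags))
    trf = np.linalg.solve(XtX, X.T @ response)
    return lags / fs * 1000, trf    # lag axis in ms, TRF weights

# Toy demo: a rectified "speech" signal drives a response with a short
# latency, loosely mimicking an early subcortical component.
fs = 4000
rng = np.random.default_rng(2)
speech = rng.normal(0, 1, 8 * fs)
envelope = np.maximum(speech, 0)        # simple stimulus rectification
true_lag = int(0.008 * fs)              # 8 ms latency
response = 0.8 * np.roll(envelope, true_lag) + rng.normal(0, 1, envelope.size)

lag_ms, trf = estimate_trf(envelope, response, fs)
print(lag_ms[np.argmax(trf)])   # peak latency near 8 ms
```

The ridge penalty keeps the lag-by-lag weights stable when adjacent lagged copies of the stimulus are correlated, which matters far more for real speech envelopes than for this white-noise toy.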
Affiliation(s)
- Joshua P Kulasingham
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Martin Enqvist
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Emina Alickovic
- Eriksholm Research Centre, Snekkersten, Denmark
- Automatic Control, Department of Electrical Engineering, Linköping University, Linköping, Sweden
- Hamish Innes-Brown
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
15
Matsuba ESM, Prieve BA, Cary E, Pacheco D, Madrid A, McKernan E, Kaplan-Kahn E, Russo N. A Preliminary Study Characterizing Subcortical and Cortical Auditory Processing and Their Relation to Autistic Traits and Sensory Features. J Autism Dev Disord 2024; 54:75-92. PMID: 36227444. PMCID: PMC9559145. DOI: 10.1007/s10803-022-05773-y.
Abstract
This study characterizes the subcortical auditory brainstem response (speech-ABR) and cortical auditory processing (P1 and Mismatch Negativity; MMN) to speech sounds and their relationship to autistic traits and sensory features within the same group of autistic children (n = 10) matched on age and non-verbal IQ to their typically developing (TD) peers (n = 21). No speech-ABR differences were noted, but autistic individuals had larger P1 and faster MMN responses. Correlations revealed that larger P1 amplitudes and MMN responses were associated with greater autistic traits and more sensory features. These findings highlight the complexity of the auditory system and its relationships to behaviours in autism, while also emphasizing the importance of measurement and developmental matching.
Affiliation(s)
- Erin S. M. Matsuba
- Department of Psychology, Syracuse University, 430 Huntington Hall, Syracuse, NY 13079 USA
- Beth A. Prieve
- Department of Communication Sciences and Disorders, Syracuse University, 1200 Skytop Road, Syracuse, NY 13079 USA
- Emily Cary
- Department of Psychology, Syracuse University, 430 Huntington Hall, Syracuse, NY 13079 USA
- Devon Pacheco
- Department of Communication Sciences and Disorders, Syracuse University, 1200 Skytop Road, Syracuse, NY 13079 USA
- Angela Madrid
- Department of Communication Sciences and Disorders, Syracuse University, 1200 Skytop Road, Syracuse, NY 13079 USA
- Elizabeth McKernan
- Department of Psychology, Syracuse University, 430 Huntington Hall, Syracuse, NY 13079 USA
- Elizabeth Kaplan-Kahn
- Department of Psychology, Syracuse University, 430 Huntington Hall, Syracuse, NY 13079 USA
- Natalie Russo
- Department of Psychology, Syracuse University, 430 Huntington Hall, Syracuse, NY 13079 USA
16
Andrade PE, Müllensiefen D, Andrade OVCA, Dunstan J, Zuk J, Gaab N. Sequence Processing in Music Predicts Reading Skills in Young Readers: A Longitudinal Study. J Learn Disabil 2024; 57:43-60. PMID: 36935627. DOI: 10.1177/00222194231157722.
Abstract
Musical abilities, both in the pitch and temporal dimension, have been shown to be positively associated with phonological awareness and reading abilities in both children and adults. There is increasing evidence that the relationship between music and language relies primarily on the temporal dimension, including both meter and rhythm. It remains unclear to what extent skill level in these temporal aspects of music may uniquely contribute to the prediction of reading outcomes. A longitudinal design was used to test a group-administered musical sequence transcription task (MSTT). This task was designed to preferentially engage sequence processing skills while controlling for fine-grained pitch discrimination and rhythm in terms of temporal grouping. Forty-five children, native speakers of Portuguese (Mage = 7.4 years), completed the MSTT and a cognitive-linguistic protocol that included visual and auditory working memory tasks, as well as phonological awareness and reading tasks in second grade. Participants then completed reading assessments in third and fifth grades. Longitudinal regression models showed that MSTT and phonological awareness had comparable power to predict reading. The MSTT showed an overall classification accuracy for identifying low-achievement readers in Grades 2, 3, and 5 that was analogous to a comprehensive model including core predictors of reading disability. In addition, MSTT was the variable with the highest loading and the most discriminatory indicator of a phonological factor. These findings carry implications for the role of temporal sequence processing in contributing to the relationship between music and language and the potential use of MSTT as a language-independent, time- and cost-effective tool for the early identification of children at risk of reading disability.
17
Mosconi MW, Stevens CJ, Unruh KE, Shafer R, Elison JT. Endophenotype trait domains for advancing gene discovery in autism spectrum disorder. J Neurodev Disord 2023; 15:41. PMID: 37993779. PMCID: PMC10664534. DOI: 10.1186/s11689-023-09511-y.
Abstract
Autism spectrum disorder (ASD) is associated with a diverse range of etiological processes, including both genetic and non-genetic causes. For a plurality of individuals with ASD, it is likely that the primary causes involve multiple common inherited variants that individually account for only small levels of variation in phenotypic outcomes. This genetic landscape creates a major challenge for detecting small but important pathogenic effects associated with ASD. To address similar challenges, separate fields of medicine have identified endophenotypes, or discrete, quantitative traits that reflect genetic likelihood for a particular clinical condition and leveraged the study of these traits to map polygenic mechanisms and advance more personalized therapeutic strategies for complex diseases. Endophenotypes represent a distinct class of biomarkers useful for understanding genetic contributions to psychiatric and developmental disorders because they are embedded within the causal chain between genotype and clinical phenotype, and they are more proximal to the action of the gene(s) than behavioral traits. Despite their demonstrated power for guiding new understanding of complex genetic structures of clinical conditions, few endophenotypes associated with ASD have been identified and integrated into family genetic studies. In this review, we argue that advancing knowledge of the complex pathogenic processes that contribute to ASD can be accelerated by refocusing attention toward identifying endophenotypic traits reflective of inherited mechanisms. This pivot requires renewed emphasis on study designs with measurement of familial co-variation including infant sibling studies, family trio and quad designs, and analysis of monozygotic and dizygotic twin concordance for select trait dimensions. 
We also emphasize that clarification of endophenotypic traits necessarily will involve integration of transdiagnostic approaches as candidate traits likely reflect liability for multiple clinical conditions and often are agnostic to diagnostic boundaries. Multiple candidate endophenotypes associated with ASD likelihood are described, and we propose a new focus on the analysis of "endophenotype trait domains" (ETDs), or traits measured across multiple levels (e.g., molecular, cellular, neural system, neuropsychological) along the causal pathway from genes to behavior. To inform our central argument for research efforts toward ETD discovery, we first provide a brief review of the concept of endophenotypes and their application to psychiatry. Next, we highlight key criteria for determining the value of candidate endophenotypes, including unique considerations for the study of ASD. Descriptions of different study designs for assessing endophenotypes in ASD research then are offered, including analysis of how select patterns of results may help prioritize candidate traits in future research. We also present multiple candidate ETDs that collectively cover a breadth of clinical phenomena associated with ASD, including social, language/communication, cognitive control, and sensorimotor processes. These ETDs are described because they represent promising targets for gene discovery related to clinical autistic traits, and they serve as models for analysis of separate candidate domains that may inform understanding of inherited etiological processes associated with ASD as well as overlapping neurodevelopmental disorders.
Affiliation(s)
- Matthew W Mosconi
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA
- Clinical Child Psychology Program, University of Kansas, Lawrence, KS, USA
- Cassandra J Stevens
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA
- Clinical Child Psychology Program, University of Kansas, Lawrence, KS, USA
- Kathryn E Unruh
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA
- Robin Shafer
- Schiefelbusch Institute for Life Span Studies and Kansas Center for Autism Research and Training (K-CART), University of Kansas, Lawrence, KS, USA
- Jed T Elison
- Institute of Child Development, University of Minnesota, Minneapolis, MN, USA
- Department of Pediatrics, University of Minnesota, Minneapolis, MN, USA
18
Belo J, Clerc M, Schön D. The effect of familiarity on neural tracking of music stimuli is modulated by mind wandering. AIMS Neurosci 2023; 10:319-331. PMID: 38188009. PMCID: PMC10767062. DOI: 10.3934/neuroscience.2023025.
Abstract
One way to investigate the cortical tracking of continuous auditory stimuli is to use the stimulus reconstruction approach. However, the cognitive and behavioral factors impacting this cortical representation remain largely overlooked. Two possible candidates are familiarity with the stimulus and the ability to resist internal distractions. To explore the possible impacts of these two factors on the cortical representation of natural music stimuli, forty-one participants listened to monodic natural music stimuli while we recorded their neural activity. Using the stimulus reconstruction approach and linear mixed models, we found that familiarity positively impacted the reconstruction accuracy of music stimuli and that this effect of familiarity was modulated by mind wandering.
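The stimulus reconstruction approach mentioned above is typically a backward model: a regularized linear map from multichannel neural data to the stimulus envelope, scored by the correlation between reconstructed and actual envelopes. The sketch below shows that idea on toy data; channel counts, gains, and noise levels are hypothetical, and no train/test split is performed as it would be in a real analysis.

```python
import numpy as np

def reconstruction_accuracy(eeg, envelope, reg=1.0):
    """Backward model: ridge-map multichannel EEG to the stimulus envelope,
    then score with the Pearson correlation between the reconstructed and
    actual envelopes (the usual reconstruction-accuracy metric)."""
    X = eeg - eeg.mean(axis=0)
    y = envelope - envelope.mean()
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ y)
    y_hat = X @ w
    return np.corrcoef(y_hat, y)[0, 1]

# Toy demo: 16 "channels", each a noisy, differently weighted copy of a
# common envelope signal.
rng = np.random.default_rng(3)
n, n_ch = 5000, 16
envelope = rng.normal(0, 1, n)
eeg = envelope[:, None] * rng.uniform(0.2, 1.0, n_ch) + rng.normal(0, 2, (n, n_ch))
print(reconstruction_accuracy(eeg, envelope))
```

Factors like familiarity or mind wandering would then be tested as modulators of this per-trial accuracy score, e.g. in a linear mixed model as the abstract describes.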
Affiliation(s)
- Joan Belo
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Maureen Clerc
- Athena Project Team, INRIA, Université Côte d'Azur, Nice, France
- Daniele Schön
- Aix Marseille University, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Institute for Language, Communication, and the Brain, Aix-en-Provence, France
19
Schüller A, Schilling A, Krauss P, Rampp S, Reichenbach T. Attentional Modulation of the Cortical Contribution to the Frequency-Following Response Evoked by Continuous Speech. J Neurosci 2023; 43:7429-7440. PMID: 37793908. PMCID: PMC10621774. DOI: 10.1523/jneurosci.1247-23.2023.
Abstract
Selective attention to one of several competing speakers is required for comprehending a target speaker among other voices and for successful communication with them. It moreover has been found to involve the neural tracking of low-frequency speech rhythms in the auditory cortex. Effects of selective attention have also been found in subcortical neural activities, in particular regarding the frequency-following response related to the fundamental frequency of speech (speech-FFR). Recent investigations have, however, shown that the speech-FFR contains cortical contributions as well. It remains unclear whether these are also modulated by selective attention. Here we used magnetoencephalography to assess the attentional modulation of the cortical contributions to the speech-FFR. We presented both male and female participants with two competing speech signals and analyzed the cortical responses during attentional switching between the two speakers. Our findings revealed robust attentional modulation of the cortical contribution to the speech-FFR: the neural responses were higher when the speaker was attended than when they were ignored. We also found that, regardless of attention, a voice with a lower fundamental frequency elicited a larger cortical contribution to the speech-FFR than a voice with a higher fundamental frequency. Our results show that the attentional modulation of the speech-FFR does not only occur subcortically but extends to the auditory cortex as well. SIGNIFICANCE STATEMENT: Understanding speech in noise requires attention to a target speaker. One of the speech features that a listener can use to identify a target voice among others and attend to it is the fundamental frequency, together with its higher harmonics. The fundamental frequency arises from the opening and closing of the vocal folds and is tracked by high-frequency neural activity in the auditory brainstem and in the cortex.
Previous investigations showed that the subcortical neural tracking is modulated by selective attention. Here we show that attention affects the cortical tracking of the fundamental frequency as well: it is stronger when a particular voice is attended than when it is ignored.
Affiliation(s)
- Alina Schüller
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
- Achim Schilling
- Neuroscience Laboratory, University Hospital Erlangen, 91058 Erlangen, Germany
- Patrick Krauss
- Neuroscience Laboratory, University Hospital Erlangen, 91058 Erlangen, Germany
- Pattern Recognition Lab, Department Computer Science, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
- Stefan Rampp
- Department of Neurosurgery, University Hospital Erlangen, 91058 Erlangen, Germany
- Department of Neurosurgery, University Hospital Halle (Saale), 06120 Halle (Saale), Germany
- Department of Neuroradiology, University Hospital Erlangen, 91058 Erlangen, Germany
- Tobias Reichenbach
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
Collapse
|
20
|
McHaney JR, Hancock KE, Polley DB, Parthasarathy A. Sensory representations and pupil-indexed listening effort provide complementary contributions to multi-talker speech intelligibility. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2023:2023.08.13.553131. [PMID: 37645975 PMCID: PMC10462058 DOI: 10.1101/2023.08.13.553131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]
Abstract
Optimal speech perception in noise requires successful separation of the target speech stream from multiple competing background speech streams. The ability to segregate these competing speech streams depends on the fidelity of bottom-up neural representations of sensory information in the auditory system and top-down influences of effortful listening. Here, we use objective neurophysiological measures of bottom-up temporal processing using envelope-following responses (EFRs) to amplitude modulated tones and investigate their interactions with pupil-indexed listening effort, as it relates to performance on the Quick Speech-in-Noise (QuickSIN) test in young adult listeners with clinically normal hearing thresholds. We developed an approach using ear-canal electrodes and adjusting electrode montages for modulation rate ranges, which extended the range of reliable EFR measurements as high as 1024 Hz. Pupillary responses revealed changes in listening effort at the two most difficult signal-to-noise ratios (SNR), but behavioral deficits at the hardest SNR only. Neither pupil-indexed listening effort nor the slope of the EFR decay function independently related to QuickSIN performance. However, a linear model using the combination of EFRs and pupil metrics significantly explained variance in QuickSIN performance. These results suggest a synergistic interaction between bottom-up sensory coding and top-down measures of listening effort as it relates to speech perception in noise. These findings can inform the development of next-generation tests for hearing deficits in listeners with normal-hearing thresholds that incorporate a multi-dimensional approach to understanding speech intelligibility deficits.
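The statistical logic of combining complementary predictors, where neither metric alone relates to the outcome as strongly as both together, can be sketched with a toy ordinary-least-squares comparison. This is purely illustrative synthetic data: the variable names (`efr_slope`, `pupil`, `quicksin`), effect sizes, and sample size are all hypothetical, not the study's data or model.

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least squares R^2 with an intercept column."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

# Toy cohort: two independent predictors each carry part of the outcome,
# so the combined model explains more variance than either one alone.
rng = np.random.default_rng(4)
n = 200
efr_slope = rng.normal(0, 1, n)   # hypothetical EFR decay-slope metric
pupil = rng.normal(0, 1, n)       # hypothetical pupil-indexed effort metric
quicksin = 0.5 * efr_slope + 0.5 * pupil + rng.normal(0, 1, n)

r2_combined = r_squared(np.column_stack([efr_slope, pupil]), quicksin)
print(r2_combined, r_squared(efr_slope[:, None], quicksin))
```

In-sample, adding a predictor can never lower R²; the interesting claim in the abstract is the stronger one, that the combination is significant where each predictor alone is not.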
Affiliation(s)
- Jacie R. McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Kenneth E. Hancock
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Daniel B. Polley
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA
21
Patel SP, Winston M, Guilfoyle J, Nicol T, Martin GE, Nayar K, Kraus N, Losh M. Neural Processing of Speech Sounds in ASD and First-Degree Relatives. J Autism Dev Disord 2023; 53:3257-3271. PMID: 35672616. PMCID: PMC10019095. DOI: 10.1007/s10803-022-05562-7.
Abstract
Efficient neural encoding of sound plays a critical role in speech and language, and when impaired, may have reverberating effects on communication skills. This study investigated disruptions to neural processing of temporal and spectral properties of speech in individuals with ASD and their parents and found evidence of inefficient temporal encoding of speech sounds in both groups. The ASD group further demonstrated less robust neural representation of spectral properties of speech sounds. Associations between neural processing of speech sounds and language-related abilities were evident in both groups. Parent-child associations were also detected in neural pitch processing. Together, results suggest that atypical neural processing of speech sounds is a heritable ingredient contributing to the ASD language phenotype.
Collapse
Affiliation(s)
- Shivani P Patel
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL, 60208, USA
- Molly Winston
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL, 60208, USA
- Janna Guilfoyle
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL, 60208, USA
- Trent Nicol
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL, 60208, USA
- Gary E Martin
- Department of Communication Sciences and Disorders, St. John's University, Staten Island, NY, USA
- Kritika Nayar
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL, 60208, USA
- Nina Kraus
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL, 60208, USA
- Molly Losh
- Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, 2240 N Campus Dr, Evanston, IL, 60208, USA.
22
Omidvar S, Mochiatti Guijo L, Duda V, Costa-Faidella J, Escera C, Koravand A. Can auditory evoked responses elicited to click and/or verbal sound identify children with or at risk of central auditory processing disorder: A scoping review. Int J Pediatr Otorhinolaryngol 2023; 171:111609. [PMID: 37393698 DOI: 10.1016/j.ijporl.2023.111609]
Abstract
BACKGROUND (Central) auditory processing disorders ((C)APDs) are clinically identified using behavioral tests. However, changes in attention and motivation may easily affect accurate identification. Although auditory electrophysiological tests, such as Auditory Brainstem Responses (ABR), are independent of most confounding cognitive factors, there is no consensus that click- and/or speech-evoked ABR can be used to identify children with or at risk of (C)APDs, owing to heterogeneity among studies. AIMS This study aimed to review the possibility of using ABR evoked by click and/or speech stimuli to identify children with or at risk of (C)APDs. METHODS The online databases PubMed, Web of Science, Medline, Embase, and CINAHL were searched using combined keywords for all English and French articles published until April 2021. Gray literature, such as conference abstracts, dissertations, and editorials in ProQuest Dissertations, was also included. MAIN CONTRIBUTION Thirteen papers met the eligibility criteria and were included in the scoping review. Fourteen papers were cross-sectional and two were interventional studies. Eleven papers used click stimuli to assess children with/at risk of (C)APDs, and speech stimuli were used in the remaining studies. Despite the diversity of the results, especially in click-ABR assessments, most studies reported increased wave latencies and/or decreased wave amplitudes of the click ABR in children with/at risk of (C)APDs. The results of speech-ABR assessments were more consistent: prolongation of the transient components of the speech ABR was observed in these children, while the sustained components remained almost unchanged. CONCLUSIONS Although both click- and speech-evoked ABRs could be used to assess children with (C)APDs, speech-evoked ABR assessments appear to yield more reliable findings. These findings, however, should be interpreted with caution given the heterogeneity among studies. Well-designed studies on children with confirmed (C)APDs, using standard diagnostic and assessment protocols, are recommended.
Affiliation(s)
- Shaghayegh Omidvar
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ontario, Canada.
- Laura Mochiatti Guijo
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ontario, Canada; School of Speech-Language Pathology and Audiology, Sao Paulo State University "Júlio de Mesquita Filho" - UNESP, Marília, SP, Brazil.
- Victoria Duda
- École d'orthophonie et d'audiologie, Université de Montréal, Québec, Canada.
- Jordi Costa-Faidella
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Catalonia, Spain.
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Catalonia, Spain.
- Amineh Koravand
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ontario, Canada.
23
Carter JA, Bidelman GM. Perceptual warping exposes categorical representations for speech in human brainstem responses. Neuroimage 2023; 269:119899. [PMID: 36720437 PMCID: PMC9992300 DOI: 10.1016/j.neuroimage.2023.119899]
Abstract
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on the surrounding stimulus context. Previous work suggests this acoustic-phonetic mapping and the perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more or less warping of listeners' perceptual categories depending on the presentation order of a speech continuum (random, forward, backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found that serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming that identical speech tokens are perceived differently depending on stimulus context. Critically, we further show that neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These findings were not observed in the stimulus acoustics nor in model FFRs generated by a computational model of cochlear and auditory nerve transduction, confirming a central origin of the effects. Our data reveal that FFRs carry category-level information and suggest top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuraxis, which may aid understanding by reducing ambiguity inherent to the speech signal.
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Division of Clinical Neuroscience, School of Medicine, Hearing Sciences - Scottish Section, University of Nottingham, Glasgow, Scotland, UK
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA.
24
Omidvar S, Duquette-Laplante F, Bursch C, Jutras B, Koravand A. Assessing Auditory Processing in Children with Listening Difficulties: A Pilot Study. J Clin Med 2023; 12:897. [PMID: 36769544 PMCID: PMC9917704 DOI: 10.3390/jcm12030897]
Abstract
BACKGROUND Auditory processing disorder (APD) may be one of the problems experienced by children with listening difficulties (LiD). Combining auditory behavioral and electrophysiological tests could provide a better understanding of the abilities and disabilities of children with LiD. The current study aimed to quantify auditory processing abilities and function in children with LiD. METHODS Twenty children participated in this study: ten with LiD (mean age = 8.46 years; SD = 1.39) and ten typically developing (TD) children (mean age = 9.45 years; SD = 1.57). All children were evaluated with auditory processing tests as well as attention and phonemic synthesis tasks. Electrophysiological measures were also conducted with click and speech auditory brainstem responses (ABR). RESULTS Children with LiD performed significantly worse than TD children on most behavioral tasks, indicating shortcomings in functional auditory processing. Moreover, the click-ABR wave I amplitude was smaller, and the speech-ABR wave D and E latencies were longer, in the LiD children compared with the TD children. No other significant differences in the neural measures were found between groups. CONCLUSIONS Combining behavioral testing with click-ABR and speech-ABR can highlight functional and neurophysiological deficiencies in children with learning and listening issues, especially at the brainstem level.
Affiliation(s)
- Shaghayegh Omidvar
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
- Fauve Duquette-Laplante
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
- School of Speech-Language Pathology and Audiology, Université de Montréal, Montreal, QC H3C 3J7, Canada
- Benoît Jutras
- School of Speech-Language Pathology and Audiology, Université de Montréal, Montreal, QC H3C 3J7, Canada
- Research Centre, CHU Sainte-Justine, Montreal, QC H3T 1C5, Canada
- Amineh Koravand
- Audiology and Speech Pathology Program, School of Rehabilitation Sciences, Faculty of Health Sciences, University of Ottawa, Ottawa, ON K1H 8L, Canada
25
Mao X, Zhang Z, Chen Y, Wang Y, Yang Y, Wei M, Liu Y, Ma Y, Lin P, Wang W. Quantifying the Influence of Factors on the Accuracy of Speech Perception in Mandarin-Speaking Cochlear Implant Patients. J Clin Med 2023; 12:821. [PMID: 36769470 PMCID: PMC9917954 DOI: 10.3390/jcm12030821]
Abstract
Rehabilitation of hearing perception in cochlear implant (CI) patients is a challenging process. We conducted a comprehensive analysis of the characteristics of hearing rehabilitation in Mandarin-speaking CI patients, measuring the aided hearing threshold (AHT) and speech perception accuracy (SPA) and collecting clinical data. A total of 49 CI patients were included. Significant linear relationships existed between the AHT and SPA: the SPA increased by about 5-7% for every 5 dB decrease in the AHT. Clear individual differences in the SPA were observed at the same AHT, and in some patients the SPA was lower than the reference value fitted by the regression model. The durations of both cochlear implant use and rehabilitation training were found to significantly improve the SPA, which increased by 2.1-3.6% per year of implant use and 0.7-1.5% per year of rehabilitation training. Each year of auditory deprivation significantly reduced the SPA by about 1.0-1.6%. The SPA remained poor in some CI patients even when the hearing compensation seemed satisfactory. Early cochlear implantation and post-operative rehabilitation are essential for recovery of the patient's SPA when the indications for cochlear implantation are met.
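The linear AHT-SPA relationship summarized above (roughly a 5-7% gain in SPA per 5 dB improvement in AHT) can be illustrated with an ordinary least-squares fit. The sketch below uses invented numbers, not the study's data; the variable names and values are assumptions for illustration only.

```python
import numpy as np

# Hypothetical illustration: fit a linear model relating aided hearing
# threshold (AHT, dB HL) to speech perception accuracy (SPA, %), mirroring
# the regression shape described in the abstract. All values are made up.
aht = np.array([25, 30, 35, 40, 45, 50], dtype=float)   # dB HL
spa = np.array([88, 82, 77, 70, 64, 58], dtype=float)   # percent correct

slope, intercept = np.polyfit(aht, spa, 1)

# Predicted change in SPA for a 5 dB improvement (decrease) in AHT.
gain_per_5db = -slope * 5
print(f"SPA ~ {slope:.2f} * AHT + {intercept:.1f}; "
      f"a 5 dB lower AHT predicts ~{gain_per_5db:.1f}% higher SPA")
```

With these illustrative numbers the fitted gain lands in the 5-7% range the abstract reports, but the regression itself is the point, not the particular values.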
Affiliation(s)
- Xiang Mao
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Ziyue Zhang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Yu Chen
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Yue Wang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Yijing Yang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Mei Wei
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Yao Liu
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Yuanxu Ma
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Peng Lin
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
- Wei Wang
- Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, No. 24 Fukang Road, Nankai District, Tianjin 300192, China
- Institute of Otolaryngology of Tianjin, Tianjin 300192, China
- Key Laboratory of Auditory Speech and Balance Medicine, Tianjin 300192, China
- Key Medical Discipline of Tianjin (Otolaryngology), Tianjin 300192, China
- Otolaryngology Clinical Quality Control Centre, Tianjin 300192, China
26
Speech auditory brainstem response in audiological practice: a systematic review. Eur Arch Otorhinolaryngol 2023; 280:2099-2118. [PMID: 36651959 DOI: 10.1007/s00405-023-07830-3]
Abstract
BACKGROUND The speech-ABR is an auditory brainstem response that evaluates the integrity of the temporal and spectral coding of speech in the upper levels of the brainstem. It reflects the acoustic properties of the stimulus used and consists of seven major waves: waves V and A represent the onset of the response; wave C the transition region; waves D, E, and F the periodic region (frequency-following response); and wave O the offset of the response. PURPOSE The aim of this study is to evaluate the clinical utility of the speech-ABR procedure through a literature review. METHODS The literature search was conducted in the PubMed, Google Scholar, Scopus, and Science Direct databases. Clinical studies from the last 15 years were included, and 60 articles were reviewed. RESULTS The articles reviewed showed that most studies on the speech ABR were conducted with children and young people and generally focused on latency measurements. The most commonly used stimulus is the /da/ syllable. CONCLUSIONS The speech ABR can objectively measure the auditory cues important for speech recognition and has many clinical applications. It can be used as a biomarker for auditory processing disorders, learning disorders, dyslexia, otitis media, hearing loss, language disorders, and phonological disorders. The s-ABR is an effective procedure that can be used in speech and language evaluations of people with hearing aids or cochlear implants, and it may also help evaluate the aging auditory system's ability to encode temporal cues.
27
Mai G, Howell P. The possible role of early-stage phase-locked neural activities in speech-in-noise perception in human adults across age and hearing loss. Hear Res 2023; 427:108647. [PMID: 36436293 DOI: 10.1016/j.heares.2022.108647]
Abstract
Ageing affects auditory neural phase-locked activities which could increase the challenges experienced during speech-in-noise (SiN) perception by older adults. However, evidence for how ageing affects SiN perception through these phase-locked activities is still lacking. It is also unclear whether influences of ageing on phase-locked activities in response to different acoustic properties have similar or different mechanisms to affect SiN perception. The present study addressed these issues by measuring early-stage phase-locked encoding of speech under quiet and noisy backgrounds (speech-shaped noise (SSN) and multi-talker babbles) in adults across a wide age range (19-75 years old). Participants passively listened to a repeated vowel whilst the frequency-following response (FFR) to fundamental frequency that has primary subcortical sources and cortical phase-locked response to slowly-fluctuating acoustic envelopes were recorded. We studied how these activities are affected by age and age-related hearing loss and how they are related to SiN performances (word recognition in sentences in noise). First, we found that the effects of age and hearing loss differ for the FFR and slow-envelope phase-locking. FFR was significantly decreased with age and high-frequency (≥ 2 kHz) hearing loss but increased with low-frequency (< 2 kHz) hearing loss, whilst the slow-envelope phase-locking was significantly increased with age and hearing loss across frequencies. Second, potential relationships between the types of phase-locked activities and SiN perception performances were also different. We found that the FFR and slow-envelope phase-locking positively corresponded to SiN performance under multi-talker babbles and SSN, respectively. Finally, we investigated how age and hearing loss affected SiN perception through phase-locked activities via mediation analyses. We showed that both types of activities significantly mediated the relation between age/hearing loss and SiN perception but in distinct manners. Specifically, FFR decreased with age and high-frequency hearing loss which in turn contributed to poorer SiN performance but increased with low-frequency hearing loss which in turn contributed to better SiN performance under multi-talker babbles. Slow-envelope phase-locking increased with age and hearing loss which in turn contributed to better SiN performance under both SSN and multi-talker babbles. Taken together, the present study provided evidence for distinct neural mechanisms of early-stage auditory phase-locked encoding of different acoustic properties through which ageing affects SiN perception.
Affiliation(s)
- Guangting Mai
- National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham NG1 5DU, UK; Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham NG7 2UH, UK; Department of Experimental Psychology, University College London, London WC1H 0AP, UK.
- Peter Howell
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
28
Sreedhar A, Sumesh J, Ravikumar M, Konadath S. Speech ABR Findings in Auto Rickshaw Drivers Exposed to Occupational Noise. Indian J Otolaryngol Head Neck Surg 2022; 74:3987-3992. [PMID: 36742627 PMCID: PMC9895701 DOI: 10.1007/s12070-021-02792-6]
Abstract
Many individuals with noise exposure have clinically normal hearing thresholds while experiencing reduced speech comprehension. The aim of this study was to assess the impact of occupational noise on the encoding of speech stimuli in the auditory systems of auto-rickshaw drivers and to compare their auditory brainstem responses (ABR) to speech stimuli with those of controls. In this experimental study, speech-evoked ABR was measured in 21 auto-rickshaw drivers who were continuously exposed to high levels of occupational noise, and their results were compared with those of 37 individuals who were not exposed to noise. Speech ABR was administered in both groups, and the absolute latencies and amplitudes of peaks V, A, C, D, E, F, and O were compared. The results revealed statistically significant differences (p < 0.05) in the latency of peak V (F(1,32) = 6.13, p < 0.05, ηp² = 0.12) and peak A (F(1,32) = 4.03, p < 0.05, ηp² = 0.08) between the control and experimental groups. Similarly, statistically significant differences were seen in the amplitude of peak D (F(1,32) = 6.38, p < 0.05, ηp² = 0.12) and peak F (F(1,32) = 7.97, p < 0.05, ηp² = 0.15). Understanding how speech signals are coded in the brainstem may aid the timely detection and intervention of hearing-related issues, even in individuals with normal hearing acuity. The results indicate damage at the level of the brainstem that may lead to poor speech understanding in those exposed to occupational noise; these indicators are present even before routine audiometry indicates a hearing loss.
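The group comparisons reported here follow the shape of a one-way ANOVA on peak measures (e.g., the F(1,32) statistic for the latency of peak V). A minimal sketch of that analysis shape, run on invented latency values rather than the study's data:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)

# Hypothetical wave-V latencies (ms) for a noise-exposed group and controls,
# mimicking the between-group comparison in the study. The means, spreads,
# and resulting degrees of freedom are invented; only the analysis shape
# (one-way ANOVA -> F statistic and p value) matches the abstract.
exposed  = rng.normal(loc=6.85, scale=0.15, size=21)  # later wave V latency
controls = rng.normal(loc=6.65, scale=0.15, size=37)

f_stat, p_val = f_oneway(exposed, controls)
print(f"F(1,{len(exposed) + len(controls) - 2}) = {f_stat:.2f}, p = {p_val:.4f}")
```

For two groups a one-way ANOVA is equivalent to an independent-samples t test (F = t²), which is why a single between-group factor yields the F(1, n-2) form seen in the abstract.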
Affiliation(s)
- Adithya Sreedhar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka 570006 India
- Jijinu Sumesh
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka 570006 India
- Mamatha Ravikumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka 570006 India
- Sreeraj Konadath
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka 570006 India
29
Hussain RO, Kumar P, Singh NK. Subcortical and Cortical Electrophysiological Measures in Children With Speech-in-Noise Deficits Associated With Auditory Processing Disorders. J Speech Lang Hear Res 2022; 65:4454-4468. [PMID: 36279585 DOI: 10.1044/2022_jslhr-22-00094]
Abstract
PURPOSE The aim of this study was to analyze the subcortical and cortical auditory evoked potentials for speech stimuli in children with speech-in-noise (SIN) deficits associated with auditory processing disorder (APD) without any reading or language deficits. METHOD The study included 20 children in the age range of 9-13 years. Ten children were recruited to the APD group; they had below-normal scores on the speech-perception-in-noise test and were diagnosed as having APD. The remaining 10 were typically developing (TD) children and were recruited to the TD group. Speech-evoked subcortical (brainstem) and cortical (auditory late latency) responses were recorded and compared across both groups. RESULTS The results showed a statistically significant reduction in the amplitudes of the subcortical potentials (both for stimulus in quiet and in noise) and the magnitudes of the spectral components (fundamental frequency and the second formant) in children with SIN deficits in the APD group compared to the TD group. In addition, the APD group displayed enhanced amplitudes of the cortical potentials compared to the TD group. CONCLUSION Children with SIN deficits associated with APD exhibited impaired coding/processing of the auditory information at the level of the brainstem and the auditory cortex. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357735.
Affiliation(s)
- Prawin Kumar
- Department of Audiology, All India Institute of Speech and Hearing, Mysore
- Niraj Kumar Singh
- Department of Audiology, All India Institute of Speech and Hearing, Mysore
30
Lu H, Mehta AH, Oxenham AJ. Methodological considerations when measuring and analyzing auditory steady-state responses with multi-channel EEG. Curr Res Neurobiol 2022; 3:100061. [PMID: 36386860 PMCID: PMC9647176 DOI: 10.1016/j.crneur.2022.100061]
Abstract
The auditory steady-state response (ASSR) has been traditionally recorded with few electrodes and is often measured as the voltage difference between mastoid and vertex electrodes (vertical montage). As high-density EEG recording systems have gained popularity, multi-channel analysis methods have been developed to integrate the ASSR signal across channels. The phases of ASSR across electrodes can be affected by factors including the stimulus modulation rate and re-referencing strategy, which will in turn affect the estimated ASSR strength. To explore the relationship between the classical vertical-montage ASSR and whole-scalp ASSR, we applied these two techniques to the same data to estimate the strength of ASSRs evoked by tones with sinusoidal amplitude modulation rates of around 40, 100, and 200 Hz. The whole-scalp methods evaluated in our study, with either linked-mastoid or common-average reference, included ones that assume equal phase across all channels, as well as ones that allow for different phase relationships. The performance of simple averaging was compared to that of more complex methods involving principal component analysis. Overall, the root-mean-square of the phase locking values (PLVs) across all channels provided the most efficient method to detect ASSR across the range of modulation rates tested here.
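As a rough illustration of the metric the abstract singles out, the root-mean-square of per-channel phase-locking values (PLVs) at the modulation rate can be computed from multi-channel trial data as below. This is a generic sketch on synthetic data, not the authors' pipeline; the sampling rate, channel and trial counts, and noise level are all made-up assumptions.

```python
import numpy as np

def plv_at_freq(trials, fs, freq):
    """Phase-locking value at one frequency: magnitude of the mean unit
    phasor across trials, with phases taken from the FFT bin nearest
    `freq`. `trials` has shape (n_trials, n_samples)."""
    n = trials.shape[1]
    bin_idx = int(round(freq * n / fs))
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))

# Synthetic multi-channel "ASSR": 8 channels x 60 trials of a 40 Hz response
# buried in noise. Channel phases deliberately differ, which is exactly the
# situation where a phase-blind summary of per-channel PLVs is convenient.
rng = np.random.default_rng(1)
fs, dur, f_mod = 1000, 1.0, 40.0
t = np.arange(int(fs * dur)) / fs
channel_plvs = []
for ch in range(8):
    phase = rng.uniform(0, 2 * np.pi)          # phase varies across the scalp
    sig = np.sin(2 * np.pi * f_mod * t + phase)
    trials = sig + rng.normal(0, 2.0, size=(60, t.size))
    channel_plvs.append(plv_at_freq(trials, fs, f_mod))

# Whole-scalp ASSR strength as the root-mean-square of per-channel PLVs.
rms_plv = np.sqrt(np.mean(np.square(channel_plvs)))
print(f"per-channel PLVs: {np.round(channel_plvs, 2)}; RMS = {rms_plv:.2f}")
```

Because each channel's PLV discards its absolute phase before the RMS is taken, this summary is insensitive to phase differences across electrodes, which is one plausible reason such a measure can remain efficient across modulation rates.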
Affiliation(s)
- Hao Lu
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Anahita H. Mehta
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
- Andrew J. Oxenham
- Department of Psychology, University of Minnesota, 75 East River Parkway, Minneapolis, MN, 55455, USA
31
Montes-Lourido P, Kar M, Pernia M, Parida S, Sadagopan S. Updates to the guinea pig animal model for in-vivo auditory neuroscience in the low-frequency hearing range. Hear Res 2022; 424:108603. [PMID: 36099806 PMCID: PMC9922531 DOI: 10.1016/j.heares.2022.108603]
Abstract
For gaining insight into general principles of auditory processing, it is critical to choose model organisms whose set of natural behaviors encompasses the processes being investigated. This reasoning has led to the development of a variety of animal models for auditory neuroscience research, such as guinea pigs, gerbils, chinchillas, rabbits, and ferrets; but in recent years, the availability of cutting-edge molecular tools and other methodologies in the mouse model have led to waning interest in these unique model species. As laboratories increasingly look to include in-vivo components in their research programs, a comprehensive description of procedures and techniques for applying some of these modern neuroscience tools to a non-mouse small animal model would enable researchers to leverage unique model species that may be best suited for testing their specific hypotheses. In this manuscript, we describe in detail the methods we have developed to apply these tools to the guinea pig animal model to answer questions regarding the neural processing of complex sounds, such as vocalizations. We describe techniques for vocalization acquisition, behavioral testing, recording of auditory brainstem responses and frequency-following responses, intracranial neural signals including local field potential and single unit activity, and the expression of transgenes allowing for optogenetic manipulation of neural activity, all in awake and head-fixed guinea pigs. We demonstrate the rich datasets at the behavioral and electrophysiological levels that can be obtained using these techniques, underscoring the guinea pig as a versatile animal model for studying complex auditory processing. More generally, the methods described here are applicable to a broad range of small mammals, enabling investigators to address specific auditory processing questions in model organisms that are best suited for answering them.
Affiliation(s)
- Pilar Montes-Lourido: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Manaswini Kar: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA
- Marianny Pernia: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Satyabrata Parida: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- Srivatsun Sadagopan: Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA

32
Krizman J, Bonacina S, Colegrove D, Otto-Meyer R, Nicol T, Kraus N. Athleticism and sex impact neural processing of sound. Sci Rep 2022; 12:15181. [PMID: 36071146 PMCID: PMC9452578 DOI: 10.1038/s41598-022-19216-2]
Abstract
Biology and experience both influence the auditory brain. Sex is one biological factor with pervasive effects on auditory processing. Females process sounds faster and more robustly than males. These differences are linked to hormone differences between the sexes. Athleticism is an experiential factor known to reduce ongoing neural noise, but whether it influences how sounds are processed by the brain is unknown. Furthermore, it is unknown whether sports participation influences auditory processing differently in males and females, given the well-documented sex differences in auditory processing seen in the general population. We hypothesized that athleticism enhances auditory processing and that these enhancements are greater in females. To test these hypotheses, we measured auditory processing in collegiate Division I male and female student-athletes and their non-athlete peers (total n = 1012) using the frequency-following response (FFR). The FFR is a neurophysiological response to sound that reflects the processing of discrete sound features. We measured across-trial consistency of the response in addition to fundamental frequency (F0) and harmonic encoding. We found that athletes had enhanced encoding of the harmonics, which was greatest in the female athletes, and that athletes had more consistent responses than non-athletes. In contrast, F0 encoding was reduced in athletes. The harmonic-encoding advantage in female athletes aligns with previous work linking harmonic encoding strength to female hormone levels and studies showing estrogen as mediating athlete sex differences in other sensory domains. Lastly, persistent deficits in auditory processing from previous concussive and repetitive subconcussive head trauma may underlie the reduced F0 encoding in athletes, as poor F0 encoding is a hallmark of concussion injury.
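The across-trial response consistency mentioned above can be illustrated with a split-half correlation of trial sub-averages. This is a common definition of FFR consistency, but the study's exact splitting scheme is an assumption here, not taken from the paper:

```python
import numpy as np

def response_consistency(trials):
    """Across-trial response consistency of an evoked response.

    Computed here as the Pearson correlation between the averages of
    the odd- and even-numbered trials (assumed split-half scheme).
    `trials` is a (n_trials, n_samples) array.
    """
    trials = np.asarray(trials, dtype=float)
    avg_a = trials[0::2].mean(axis=0)  # average of one half of the trials
    avg_b = trials[1::2].mean(axis=0)  # average of the other half
    return np.corrcoef(avg_a, avg_b)[0, 1]
```

A response that repeats reliably across trials yields a correlation near 1, while trials that are dominated by uncorrelated noise yield a correlation near 0.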
Affiliation(s)
- Jennifer Krizman: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA
- Silvia Bonacina: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA
- Danielle Colegrove: Department of Sports Medicine, Northwestern Medicine, Chicago, IL, 60611, USA
- Rembrandt Otto-Meyer: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA
- Trent Nicol: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA
- Nina Kraus: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA; Department of Neurobiology, Northwestern University, Evanston, IL, 60208, USA; Department of Otolaryngology, Northwestern University, Chicago, IL, 60611, USA

33
Jacxsens L, De Pauw J, Cardon E, van der Wal A, Jacquemin L, Gilles A, Michiels S, Van Rompaey V, Lammers MJW, De Hertogh W. Brainstem evoked auditory potentials in tinnitus: A best-evidence synthesis and meta-analysis. Front Neurol 2022; 13:941876. [PMID: 36071905 PMCID: PMC9441610 DOI: 10.3389/fneur.2022.941876]
Abstract
Introduction Accumulating evidence suggests a role of the brainstem in tinnitus generation and modulation. Several studies in chronic tinnitus patients have reported latency and amplitude changes of the different peaks of the auditory brainstem response, possibly reflecting neural changes or altered activity. The aim of this systematic review was to assess whether alterations within the brainstem of chronic tinnitus patients are reflected in short- and middle-latency auditory evoked potentials (AEPs). Methods A systematic review was performed and reported according to the PRISMA guidelines. Studies evaluating short- and middle-latency AEPs in tinnitus patients and controls were included. Two independent reviewers conducted the study selection, data extraction, and risk of bias assessment. Meta-analysis was performed using a multivariate meta-analytic model. Results Twenty-seven cross-sectional studies were included. Multivariate meta-analysis revealed that in tinnitus patients with normal hearing, significantly longer latencies of auditory brainstem response (ABR) waves I (SMD = 0.66, p < 0.001), III (SMD = 0.43, p < 0.001), and V (SMD = 0.47, p < 0.01) are present. The results regarding possible changes in middle-latency responses (MLRs) and frequency-following responses (FFRs) were inconclusive. Discussion The discovered changes in short-latency AEPs reflect alterations at the brainstem level in tinnitus patients. More specifically, the prolonged ABR latencies could possibly be explained by high-frequency sensorineural hearing loss, or by other modulating factors such as cochlear synaptopathy or somatosensory tinnitus generators. The question of whether middle-latency AEP changes, representing the subcortical level of the auditory pathway, are present in tinnitus still remains unanswered. Future studies should identify and correctly deal with confounding factors such as age, gender, and the presence of somatosensory tinnitus components.
Systematic review registration https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021243687, PROSPERO [CRD42021243687].
Affiliation(s)
- Laura Jacxsens: Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium
- Joke De Pauw: Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Emilie Cardon: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Annemarie van der Wal: Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Orofacial Pain and Dysfunction, Academic Centre for Dentistry Amsterdam (ACTA), University of Amsterdam, Amsterdam, Netherlands
- Laure Jacquemin: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Annick Gilles: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium; Department of Education, Health and Social Work, University College Ghent, Ghent, Belgium
- Sarah Michiels: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Faculty of Rehabilitation Sciences, REVAL, University of Hasselt, Hasselt, Belgium
- Vincent Van Rompaey: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Marc J W Lammers: Department of Otorhinolaryngology, Head and Neck Surgery, Antwerp University Hospital (UZA), Edegem, Belgium; Department of Translational Neurosciences, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium
- Willem De Hertogh: Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Health Sciences, University of Antwerp, Antwerp, Belgium

34
Ananthakrishnan S, Luo X. Effects of Temporal Envelope Cutoff Frequency, Number of Channels, and Carrier Type on Brainstem Neural Representation of Pitch in Vocoded Speech. J Speech Lang Hear Res 2022; 65:3146-3164. [PMID: 35944032 DOI: 10.1044/2022_jslhr-21-00576]
Abstract
PURPOSE The objective of this study was to determine if and how the subcortical neural representation of pitch cues in listeners with normal hearing is affected by systematic manipulation of vocoder parameters. METHOD This study assessed the effects of temporal envelope cutoff frequency (50 and 500 Hz), number of channels (1-32), and carrier type (sine-wave and noise-band) on brainstem neural representation of fundamental frequency (f0) in frequency-following responses (FFRs) to vocoded vowels in 15 young adult listeners with normal hearing. RESULTS Results showed that FFR f0 strength (quantified as absolute f0 magnitude divided by noise floor [NF] magnitude) significantly improved with 500-Hz vs. 50-Hz temporal envelopes for all channel numbers and both carriers except the 1-channel noise-band vocoder. FFR f0 strength with 500-Hz temporal envelopes significantly improved when the channel number increased from 1 to 2, but it either declined (sine-wave vocoders) or saturated (noise-band vocoders) when the channel number increased from 4 to 32. FFR f0 strength with 50-Hz temporal envelopes was similarly small for both carriers with all channel numbers, except for a significant improvement with the 16-channel sine-wave vocoder. With 500-Hz temporal envelopes, FFR f0 strength was significantly greater for sine-wave vocoders than for noise-band vocoders with channel numbers 1-8; no significant differences were seen with 16 and 32 channels. With 50-Hz temporal envelopes, the carrier effect was only observed with 16 channels. In contrast, there was no significant carrier effect for the absolute f0 magnitude. Compared to sine-wave vocoders, noise-band vocoders had a higher NF and thus lower relative FFR f0 strength. CONCLUSIONS It is important to normalize the f0 magnitude relative to the NF when analyzing FFRs to vocoded speech. The physiological findings reported here may result from the availability of f0-related temporal periodicity and spectral sidelobes in vocoded signals and should be considered when selecting vocoder parameters and interpreting results in future physiological studies. In general, the dependence of brainstem neural phase-locking strength to f0 on vocoder parameters may confound the comparison of pitch-related behavioral results across different vocoder designs.
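The f0-strength metric defined in this abstract (absolute f0 spectral magnitude divided by noise-floor magnitude) can be sketched as follows. The choice of flanking frequency bins used to estimate the noise floor is an assumption for illustration; the paper's exact NF window is not specified here:

```python
import numpy as np

def ffr_f0_strength(ffr, fs, f0, notch_hz=5.0, flank_hz=30.0):
    """f0 strength = spectral magnitude at f0 / noise-floor magnitude.

    Noise floor: mean magnitude of bins flanking f0, excluding a small
    notch around f0 itself (assumed scheme, not from the paper).
    """
    n = len(ffr)
    spec = np.abs(np.fft.rfft(ffr)) / n          # one-sided magnitude spectrum
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    f0_mag = spec[np.argmin(np.abs(freqs - f0))]  # magnitude at the f0 bin
    flank = (np.abs(freqs - f0) > notch_hz) & (np.abs(freqs - f0) <= flank_hz)
    return f0_mag / spec[flank].mean()
```

A strongly phase-locked response yields a ratio well above 1; a response at the noise floor yields a ratio near 1, which is why the authors argue for NF normalization.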
Affiliation(s)
- Xin Luo: Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe

35
Richardson ML, Guérit F, Gransier R, Wouters J, Carlyon RP, Middlebrooks JC. Temporal Pitch Sensitivity in an Animal Model: Psychophysics and Scalp Recordings. J Assoc Res Otolaryngol 2022; 23:491-512. [PMID: 35668206 PMCID: PMC9437162 DOI: 10.1007/s10162-022-00849-z]
Abstract
Cochlear implant (CI) users show limited sensitivity to the temporal pitch conveyed by electric stimulation, contributing to impaired perception of music and of speech in noise. Neurophysiological studies in cats suggest that this limitation is due, in part, to poor transmission of the temporal fine structure (TFS) by the brainstem pathways that are activated by electrical cochlear stimulation. It remains unknown, however, how that neural limit might influence perception in the same animal model. For that reason, we developed non-invasive psychophysical and electrophysiological measures of temporal (i.e., non-spectral) pitch processing in the cat. Normal-hearing (NH) cats were presented with acoustic pulse trains consisting of band-limited harmonic complexes that simulated CI stimulation of the basal cochlea while removing cochlear place-of-excitation cues. In the psychophysical procedure, trained cats detected changes from a base pulse rate to a higher pulse rate. In the scalp-recording procedure, the cortical-evoked acoustic change complex (ACC) and brainstem-generated frequency following response (FFR) were recorded simultaneously in sedated cats for pulse trains that alternated between the base and higher rates. The range of perceptual sensitivity to temporal pitch broadly resembled that of humans but was shifted to somewhat higher rates. The ACC largely paralleled these perceptual patterns, validating its use as an objective measure of temporal pitch sensitivity. The phase-locked FFR, in contrast, showed strong brainstem encoding for all tested pulse rates. These measures demonstrate the cat's perceptual sensitivity to pitch in the absence of cochlear-place cues and may be valuable for evaluating neural mechanisms of temporal pitch perception in the feline animal model of stimulation by a CI or novel auditory prostheses.
Affiliation(s)
- Matthew L Richardson: Department of Otolaryngology, Center for Hearing Research, University of California at Irvine, Irvine, CA, USA
- François Guérit: Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- Robin Gransier: Department of Neurosciences, ExpORL, KU Leuven, Leuven, Belgium
- Jan Wouters: Department of Neurosciences, ExpORL, KU Leuven, Leuven, Belgium
- Robert P Carlyon: Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- John C Middlebrooks: Department of Otolaryngology, Center for Hearing Research, University of California at Irvine, Irvine, CA, USA; Departments of Neurobiology & Behavior, Biomedical Engineering, Cognitive Sciences, University of California at Irvine, Irvine, CA, USA

36
Gorina-Careta N, Ribas-Prats T, Arenillas-Alcón S, Puertollano M, Gómez-Roig MD, Escera C. Neonatal Frequency-Following Responses: A Methodological Framework for Clinical Applications. Semin Hear 2022; 43:162-176. [PMID: 36313048 PMCID: PMC9605802 DOI: 10.1055/s-0042-1756162]
Abstract
The frequency-following response (FFR) to periodic complex sounds is a noninvasive scalp-recorded auditory evoked potential that reflects synchronous phase-locked neural activity to the spectrotemporal components of the acoustic signal along the ascending auditory hierarchy. The FFR has gained recent interest in the fields of audiology and auditory cognitive neuroscience, as it has great potential to answer both basic and applied questions about processes involved in sound encoding, language development, and communication. Specifically, it has become a promising tool in neonates, as its study may allow both early identification of future language disorders and the opportunity to leverage brain plasticity during the first 2 years of life, as well as enable early interventions to prevent and/or ameliorate sound and language encoding disorders. Throughout the present review, we summarize the state of the art of the neonatal FFR and, based on our own extensive experience, present methodological approaches to record it in a clinical environment. Overall, this is the first review to focus comprehensively on applications of the neonatal FFR, supporting the feasibility of recording the FFR during the first days of life and its predictive potential for detecting short- and long-term language abilities and disruptions.
Affiliation(s)
- Natàlia Gorina-Careta: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Catalonia, Spain; BCNatal - Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Barcelona, Catalonia, Spain
- Teresa Ribas-Prats: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Catalonia, Spain
- Sonia Arenillas-Alcón: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Catalonia, Spain
- Marta Puertollano: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Catalonia, Spain
- M Dolores Gómez-Roig: Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Catalonia, Spain; BCNatal - Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Barcelona, Catalonia, Spain
- Carles Escera: Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain; Institute of Neurosciences, University of Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Catalonia, Spain

37
Lee CY, Zhang C, Wang WSY, Waye MMY. Editorial: Relationship of language and music, ten years after: Neural organization, cross-domain transfer and evolutionary origins. Front Psychol 2022; 13:990857. [PMID: 35967615 PMCID: PMC9371976 DOI: 10.3389/fpsyg.2022.990857]
Affiliation(s)
- Chao-Yang Lee: Division of Communication Sciences and Disorders, Ohio University, Athens, OH, United States
- Caicai Zhang: Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- William Shi-Yuan Wang: Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Kowloon, Hong Kong SAR, China
- Mary Miu Yee Waye: The Nethersole School of Nursing, The Chinese University of Hong Kong, Shatin, Hong Kong SAR, China

38
Kegler M, Weissbart H, Reichenbach T. The neural response at the fundamental frequency of speech is modulated by word-level acoustic and linguistic information. Front Neurosci 2022; 16:915744. [PMID: 35942153 PMCID: PMC9355803 DOI: 10.3389/fnins.2022.915744]
Abstract
Spoken language comprehension requires rapid and continuous integration of information, from lower-level acoustic to higher-level linguistic features. Much of this processing occurs in the cerebral cortex. Its neural activity exhibits, for instance, correlates of predictive processing, emerging at delays of a few hundred milliseconds. However, the auditory pathways are also characterized by extensive feedback loops from higher-level cortical areas to lower-level ones as well as to subcortical structures. Early neural activity can therefore be influenced by higher-level cognitive processes, but it remains unclear whether such feedback contributes to linguistic processing. Here, we investigated early speech-evoked neural activity that emerges at the fundamental frequency. We analyzed EEG recordings obtained while subjects listened to a story read by a single speaker. We identified a response tracking the speaker's fundamental frequency that occurred at a delay of 11 ms, while another response, elicited by the high-frequency modulation of the envelope of higher harmonics, exhibited a larger magnitude and a longer latency of about 18 ms, with an additional significant component at around 40 ms. Notably, while the earlier components of the response likely originate from the subcortical structures, the latter presumably involves contributions from cortical regions. Subsequently, we determined the magnitude of these early neural responses for each individual word in the story. We then quantified the context-independent frequency of each word and used a language model to compute context-dependent word surprisal and precision. The word surprisal represented how predictable a word is, given the previous context, and the word precision reflected the confidence about predicting the next word from the past context. We found that the word-level neural responses at the fundamental frequency were predominantly influenced by the acoustic features: the average fundamental frequency and its variability.
Amongst the linguistic features, only context-independent word frequency showed a weak but significant modulation of the neural response to the high-frequency envelope modulation. Our results show that the early neural response at the fundamental frequency is already influenced by acoustic as well as linguistic information, suggesting top-down modulation of this neural response.
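The word-level predictors described above can be made concrete with a toy estimator: context-independent relative word frequency, and surprisal defined as -log2 P(word | context). The study used a full language model; the bigram estimator below is purely illustrative:

```python
from collections import Counter
import math

def word_predictors(tokens):
    """Toy word-level predictors from a token list.

    Returns:
      freq: context-independent relative frequency of each word.
      surprisal: -log2 P(w2 | w1) for each adjacent word pair,
                 estimated from bigram counts (illustrative only).
    """
    uni = Counter(tokens)
    bi = Counter(zip(tokens, tokens[1:]))
    freq = {w: c / len(tokens) for w, c in uni.items()}
    surprisal = {pair: -math.log2(c / uni[pair[0]]) for pair, c in bi.items()}
    return freq, surprisal
```

For example, in "the cat sat on the mat", the word "the" is followed by "cat" in one of its two occurrences, so P(cat | the) = 0.5 and the surprisal of that bigram is 1 bit.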
Affiliation(s)
- Mikolaj Kegler: Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
- Hugo Weissbart: Donders Centre for Cognitive Neuroimaging, Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Tobias Reichenbach: Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom; Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany

39
Teichert T, Gnanateja GN, Sadagopan S, Chandrasekaran B. A Linear Superposition Model of Envelope and Frequency Following Responses May Help Identify Generators Based on Latency. Neurobiol Lang 2022; 3:441-468. [PMID: 36909931 PMCID: PMC10003646 DOI: 10.1162/nol_a_00072]
Abstract
Envelope and frequency-following responses (FFR_ENV and FFR_TFS) are scalp-recorded electrophysiological potentials that closely follow the periodicity of complex sounds such as speech. These signals have been established as important biomarkers in speech and learning disorders. However, despite important advances, it has remained challenging to map altered FFR_ENV and FFR_TFS to altered processing in specific brain regions. Here we explore the utility of a deconvolution approach based on the assumption that FFR_ENV and FFR_TFS reflect the linear superposition of responses that are triggered by the glottal pulse in each cycle of the fundamental frequency (F0 responses). We tested the deconvolution method by applying it to FFR_ENV and FFR_TFS of rhesus monkeys to human speech and click trains with time-varying pitch patterns. Our analyses show that F0_ENV responses could be measured with high signal-to-noise ratio and featured several spectro-temporally and topographically distinct components that likely reflect the activation of brainstem (<5 ms; 200-1000 Hz), midbrain (5-15 ms; 100-250 Hz), and cortex (15-35 ms; ~90 Hz). In contrast, F0_TFS responses contained only one spectro-temporal component that likely reflected activity in the midbrain. In summary, our results support the notion that the latency of F0 components maps meaningfully onto successive processing stages. This opens the possibility that pathologically altered FFR_ENV or FFR_TFS may be linked to altered F0_ENV or F0_TFS and from there to specific processing stages and ultimately spatially targeted interventions.
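The linear-superposition assumption stated in this abstract (the recorded FFR as a sum of identical F0-response kernels, one starting at each glottal pulse) lends itself to a least-squares deconvolution. A minimal sketch, assuming known pulse sample indices and a hypothetical kernel length; the authors' actual pipeline is not reproduced here:

```python
import numpy as np

def deconvolve_f0_response(ffr, pulse_idx, kernel_len):
    """Estimate a single-pulse F0 response kernel by least squares.

    Models the FFR as the superposition of one kernel copy starting at
    each glottal-pulse sample index (the linear-superposition assumption).
    """
    n = len(ffr)
    # Design matrix: column j holds an impulse at offset j after each pulse.
    X = np.zeros((n, kernel_len))
    for p in pulse_idx:
        rows = np.arange(p, min(p + kernel_len, n))
        X[rows, rows - p] = 1.0
    kernel, *_ = np.linalg.lstsq(X, ffr, rcond=None)
    return kernel
```

With overlapping kernel copies (pulses closer together than the kernel length), the least-squares solution disentangles them as long as the pulse spacings vary enough for the design matrix to be well conditioned.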
Affiliation(s)
- Tobias Teichert: Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA; Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA
- G. Nike Gnanateja: Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Srivatsun Sadagopan: Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA; Center for Neuroscience, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA
- Bharath Chandrasekaran: Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurobiology, University of Pittsburgh, Pittsburgh, PA, USA

40
Llanos F, Nike Gnanateja G, Chandrasekaran B. Principal component decomposition of acoustic and neural representations of time-varying pitch reveals adaptive efficient coding of speech covariation patterns. Brain Lang 2022; 230:105122. [PMID: 35460953 PMCID: PMC9934908 DOI: 10.1016/j.bandl.2022.105122]
Abstract
Understanding the effects of statistical regularities on speech processing is a central issue in auditory neuroscience. To investigate the effects of distributional covariance on the neural processing of speech features, we introduce and validate a novel approach: decomposition of time-varying signals into patterns of covariation extracted with Principal Component Analysis. We used this decomposition to assay the sensory representation of pitch covariation patterns in native Chinese listeners and non-native learners of Mandarin Chinese tones. Sensory representations were examined using the frequency-following response, a far-field potential that reflects phase-locked activity from neural ensembles along the auditory pathway. We found a more efficient representation of the covariation patterns that accounted for more redundancy in the form of distributional covariance. Notably, long-term language and short-term training experiences enhanced the sensory representation of these covariation patterns.
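The decomposition introduced in this abstract, expressing time-varying pitch signals as scores on principal patterns of covariation, can be sketched with a plain SVD-based PCA. The matrix orientation (trials by time samples) is an assumption for illustration:

```python
import numpy as np

def pitch_pca(contours, n_components):
    """Decompose pitch contours (n_trials x n_samples) into principal
    patterns of covariation.

    Returns the mean contour, the top covariation patterns (components),
    and each trial's scores on those patterns.
    """
    X = np.asarray(contours, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # Rows of Vt are the orthonormal covariation patterns, ordered by
    # the amount of across-trial variance they account for.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    scores = Xc @ components.T
    return mean, components, scores
```

Each contour is then approximated as the mean plus a weighted sum of the retained patterns (mean + scores @ components), which is the sense in which a few components can efficiently code redundant covariation across trials.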
Affiliation(s)
- Fernando Llanos: Department of Linguistics, The University of Texas at Austin, Austin, TX 78712, USA
- G Nike Gnanateja: Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA 15260, USA
- Bharath Chandrasekaran: Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA 15260, USA

41
Zendel BR. The importance of the motor system in the development of music-based forms of auditory rehabilitation. Ann N Y Acad Sci 2022; 1515:10-19. [PMID: 35648040 DOI: 10.1111/nyas.14810]
Abstract
Hearing abilities decline with age, and one of the most commonly reported hearing issues in older adults is a difficulty understanding speech when there is loud background noise. Understanding speech in noise relies on numerous cognitive processes, including working memory, and is supported by numerous brain regions, including the motor and motor planning systems. Indeed, many working memory processes are supported by motor and premotor cortical regions. Interestingly, lifelong musicians and nonmusicians given music training over the course of weeks or months show an improved ability to understand speech when there is loud background noise. These benefits are associated with enhanced working memory abilities, and enhanced activity in motor and premotor cortical regions. Accordingly, it is likely that music training improves the coupling between the auditory and motor systems and promotes plasticity in these regions and regions that feed into auditory/motor areas. This leads to an enhanced ability to dynamically process incoming acoustic information, and is likely the reason that musicians and those who receive laboratory-based music training are better able to understand speech when there is background noise. Critically, these findings suggest that music-based forms of auditory rehabilitation are possible and should focus on tasks that promote auditory-motor interactions.
Affiliation(s)
- Benjamin Rich Zendel: Faculty of Medicine, Memorial University of Newfoundland, St. John's, Newfoundland and Labrador, Canada; Aging Research Centre - Newfoundland and Labrador, Grenfell Campus, Memorial University, Corner Brook, Newfoundland and Labrador, Canada

42
Lai J, Dowling M, Bartlett EL. Comparison of age-related declines in behavioral auditory responses versus electrophysiological measures of amplitude modulation. Neurobiol Aging 2022; 117:201-211. [DOI: 10.1016/j.neurobiolaging.2022.06.001]
43
Liu D, Hu J, Wang S, Fu X, Wang Y, Pugh E, Henderson Sabes J, Wang S. Aging Affects Subcortical Pitch Information Encoding Differently in Humans With Different Language Backgrounds. Front Aging Neurosci 2022; 14:816100. [PMID: 35493942 PMCID: PMC9043765 DOI: 10.3389/fnagi.2022.816100]
Abstract
Aging and language background have been shown to affect pitch information encoding at the subcortical level. To study the individual and compounded effects on subcortical pitch information encoding, Frequency Following Responses were recorded from subjects across various ages and language backgrounds. Differences were found in pitch information encoding strength and accuracy among the groups, indicating that language experience and aging affect accuracy and magnitude of pitch information encoding ability at the subcortical level. Moreover, stronger effects of aging were seen in the magnitude of phase-locking in the native language speaker groups, while language background appears to have more impact on the accuracy of pitch tracking in older adult groups.
Affiliation(s)
- Dongxin Liu
- Key Laboratory of Otolaryngology Head and Neck Surgery, Beijing Institute of Otolaryngology, Otolaryngology—Head and Neck Surgery, Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jiong Hu
- Department of Audiology, University of the Pacific, San Francisco, CA, United States
- Songjian Wang
- Key Laboratory of Otolaryngology Head and Neck Surgery, Beijing Institute of Otolaryngology, Otolaryngology—Head and Neck Surgery, Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xinxing Fu
- Key Laboratory of Otolaryngology Head and Neck Surgery, Beijing Institute of Otolaryngology, Otolaryngology—Head and Neck Surgery, Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yuan Wang
- Key Laboratory of Otolaryngology Head and Neck Surgery, Beijing Institute of Otolaryngology, Otolaryngology—Head and Neck Surgery, Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Esther Pugh
- Department of Otolaryngology, Keck School of Medicine of USC, Los Angeles, CA, United States
- Shuo Wang
- Key Laboratory of Otolaryngology Head and Neck Surgery, Beijing Institute of Otolaryngology, Otolaryngology—Head and Neck Surgery, Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China

44
Bsharat-Maalouf D, Karawani H. Bilinguals' speech perception in noise: Perceptual and neural associations. PLoS One 2022; 17:e0264282. [PMID: 35196339 PMCID: PMC8865662 DOI: 10.1371/journal.pone.0264282]
Abstract
The current study characterized subcortical speech sound processing among monolinguals and bilinguals in quiet and challenging listening conditions and examined the relation between subcortical neural processing and perceptual performance. A total of 59 normal-hearing adults, ages 19–35 years, participated in the study: 29 native Hebrew-speaking monolinguals and 30 Arabic-Hebrew-speaking bilinguals. Auditory brainstem responses to speech sounds were collected in a quiet condition and with background noise. The perception of words and sentences in quiet and background noise conditions was also examined to assess perceptual performance and to evaluate the perceptual-physiological relationship. Perceptual performance was tested among bilinguals in both languages (first language (L1-Arabic) and second language (L2-Hebrew)). The outcomes were similar between monolingual and bilingual groups in quiet. Noise, as expected, resulted in deterioration in perceptual and neural responses, which was reflected in lower accuracy in perceptual tasks compared to quiet, and in more prolonged latencies and diminished neural responses. However, a mixed picture was observed among bilinguals in perceptual and physiological outcomes in noise. In the perceptual measures, bilinguals were significantly less accurate than their monolingual counterparts. However, in neural responses, bilinguals demonstrated earlier peak latencies compared to monolinguals. Our results also showed that perceptual performance in noise was related to subcortical resilience to the disruption caused by background noise. Specifically, in noise, increased brainstem resistance (i.e., fewer changes in the fundamental frequency (F0) representations or fewer shifts in the neural timing) was related to better speech perception among bilinguals. Better perception in L1 in noise was correlated with fewer changes in F0 representations, and more accurate perception in L2 was related to minor shifts in auditory neural timing. 
This study delves into the importance of using neural brainstem responses to speech sounds to differentiate individuals with different language histories and to explain inter-subject variability in bilinguals’ perceptual abilities in daily life situations.
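The F0-representation measure described above can be illustrated with a minimal sketch: quantify the fundamental-frequency component of an FFR-like waveform as the spectral magnitude at the stimulus F0, and compare a quiet condition with a noise condition in which phase-locking is reduced. Everything below (sampling rate, F0, amplitudes, waveforms) is a synthetic stand-in, not the study's data or pipeline.

```python
# Toy illustration with synthetic waveforms (hypothetical parameters):
# "F0 representation" is computed as the single-bin DFT magnitude at the
# stimulus fundamental. Background noise is modelled as weaker phase-locking,
# i.e. a smaller F0-locked component in the response.
import math
import random

def dft_magnitude(signal, freq_hz, fs_hz):
    """Magnitude of the DFT of `signal` evaluated at one frequency bin."""
    n = len(signal)
    re = sum(s * math.cos(2 * math.pi * freq_hz * i / fs_hz) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * i / fs_hz) for i, s in enumerate(signal))
    return math.hypot(re, im) / n

fs, f0, dur = 16000, 100.0, 0.2   # sampling rate (Hz), fundamental (Hz), duration (s)
rng = random.Random(0)
t = [i / fs for i in range(int(fs * dur))]
quiet = [math.sin(2 * math.pi * f0 * x) + rng.gauss(0.0, 0.3) for x in t]
noise = [0.4 * math.sin(2 * math.pi * f0 * x) + rng.gauss(0.0, 0.3) for x in t]

f0_quiet = dft_magnitude(quiet, f0, fs)
f0_noise = dft_magnitude(noise, f0, fs)
print(f"F0 magnitude in quiet: {f0_quiet:.3f}, in noise: {f0_noise:.3f}")
```

In this framing, a "resilient" brainstem response is simply one whose F0 magnitude changes little between the two conditions.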
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel

45
Cheng FY, Xu C, Gold L, Smith S. Rapid Enhancement of Subcortical Neural Responses to Sine-Wave Speech. Front Neurosci 2022; 15:747303. [PMID: 34987356 PMCID: PMC8721138 DOI: 10.3389/fnins.2021.747303]
Abstract
The efferent auditory nervous system may be a potent force in shaping how the brain responds to behaviorally significant sounds. Previous human experiments using the frequency following response (FFR) have shown efferent-induced modulation of subcortical auditory function online and over short- and long-term time scales; however, a contemporary understanding of FFR generation presents new questions about whether previous effects were constrained solely to the auditory subcortex. The present experiment used sine-wave speech (SWS), an acoustically-sparse stimulus in which dynamic pure tones represent speech formant contours, to evoke FFRSWS. Due to the higher stimulus frequencies used in SWS, this approach biased neural responses toward brainstem generators and allowed for three stimuli (/bɔ/, /bu/, and /bo/) to be used to evoke FFRSWS before and after listeners in a training group were made aware that they were hearing a degraded speech stimulus. All SWS stimuli were rapidly perceived as speech when presented with a SWS carrier phrase, and average token identification reached ceiling performance during a perceptual training phase. Compared to a control group which remained naïve throughout the experiment, training group FFRSWS amplitudes were enhanced post-training for each stimulus. Further, linear support vector machine classification of training group FFRSWS significantly improved post-training compared to the control group, indicating that training-induced neural enhancements were sufficient to bolster machine learning classification accuracy. These results suggest that the efferent auditory system may rapidly modulate auditory brainstem representation of sounds depending on their context and perception as non-speech or speech.
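The decoding logic described above (classify which stimulus evoked each response, before vs. after training) can be sketched as follows. A nearest-centroid classifier, which is also a linear decision rule, stands in here for the study's linear support vector machine, and all "FFR" feature vectors, class structure, and effect sizes are simulated, not the study's data.

```python
# Toy illustration (synthetic data): decode which of three stimuli evoked a
# simulated "FFR" feature vector. "Post-training" responses carry a stronger
# stimulus-specific signal (hypothetical effect), so decoding accuracy rises.
# A nearest-centroid classifier stands in for the linear SVM of the study.
import random

def simulate_trials(n_per_class, n_features, signal, noise, seed):
    """Each stimulus class has a distinct mean pattern scaled by `signal`;
    every trial adds independent Gaussian noise."""
    rng = random.Random(seed)
    trials = []
    for label in range(3):
        mean = [signal if f % 3 == label else 0.0 for f in range(n_features)]
        for _ in range(n_per_class):
            trials.append((label, [m + rng.gauss(0.0, noise) for m in mean]))
    return trials

def centroid_accuracy(train, test):
    """Fit per-class centroids on `train`; return classification accuracy on `test`."""
    n_features = len(train[0][1])
    sums = {c: [0.0] * n_features for c in range(3)}
    counts = {c: 0 for c in range(3)}
    for label, x in train:
        counts[label] += 1
        for i, v in enumerate(x):
            sums[label][i] += v
    centroids = {c: [s / counts[c] for s in sums[c]] for c in range(3)}

    def predict(x):
        return min(centroids, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, centroids[c])))

    return sum(1 for label, x in test if predict(x) == label) / len(test)

pre = simulate_trials(40, 30, signal=0.3, noise=1.0, seed=1)   # weaker encoding
post = simulate_trials(40, 30, signal=0.9, noise=1.0, seed=2)  # enhanced encoding
acc_pre = centroid_accuracy(pre[::2], pre[1::2])
acc_post = centroid_accuracy(post[::2], post[1::2])
print(f"decoding accuracy pre-training: {acc_pre:.2f}, post-training: {acc_post:.2f}")
```

The point of the sketch is only that larger, more stimulus-specific neural responses make the classes more separable for any linear decoder, which is the sense in which training-induced enhancement can "bolster machine learning classification accuracy."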
Affiliation(s)
- Fan-Yin Cheng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Can Xu
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Lisa Gold
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Spencer Smith
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States

46
Jarollahi F, Valadbeigi A, Jalaei B, Maarefvand M, Motasaddi Zarandy M, Haghani H, Shirzhiyzn Z. Comparing Sound-Field Speech-Auditory Brainstem Response Components between Cochlear Implant Users with Different Speech Recognition in Noise Scores. Iran J Child Neurol 2022; 16:93-105. [PMID: 35497112 PMCID: PMC9047831 DOI: 10.22037/ijcn.v16i2.27210]
Abstract
OBJECTIVES Many studies have suggested that cochlear implant (CI) users vary in terms of speech recognition in noise, and work in this field attributes this variability partly to subcortical auditory processing. The speech-Auditory Brainstem Response (speech-ABR) provides useful information about speech processing; thus, this work was designed to compare speech-ABR components between two groups of CI users with good and poor speech recognition in noise scores. MATERIALS & METHODS The present study was conducted on two groups of CI users aged 8-10 years. The first group (CI-good) consisted of 15 children with prelingual CI who had good speech recognition in noise performance. The second group (CI-poor) was matched with the first group but had poor speech recognition in noise performance. The speech-ABR test in a sound-field presentation was performed for all participants. RESULTS The speech-ABR showed longer C, D, E, F, and O latencies in CI-poor than in CI-good users (P < 0.05), while no significant difference was observed for the initial waves V (t = -0.293, p = 0.771) and A (t = -1.051, p = 0.307). Analysis in the spectral domain showed weaker representation of the fundamental frequency, as well as the first formant and high-frequency components of the speech stimulus, in the CI users with poor auditory performance. CONCLUSIONS The results revealed that CI users with poor performance in noise had deficits in encoding the periodic portion of speech signals at the brainstem level. This study may also serve as physiological evidence for poorer pitch processing in CI users with poor speech recognition in noise.
Affiliation(s)
- Farnoush Jarollahi
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Ayub Valadbeigi
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Bahram Jalaei
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Mohammad Maarefvand
- Department of Audiology, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
- Masoud Motasaddi Zarandy
- Cochlear Implant Center and Department of Otorhinolaryngology, Amir Aalam Hospital, Tehran University of Medical Sciences, Tehran, Iran
- Hamid Haghani
- Department of Biostatistics, School of Public Health, Iran University of Medical Sciences, Tehran, Iran
- Zahra Shirzhiyzn
- Department of Medical Physics and Biomedical Engineering, School of Medicine, Tehran University of Medical Sciences, Tehran, Iran

47
Reybrouck M, Podlipniak P, Welch D. Music Listening and Homeostatic Regulation: Surviving and Flourishing in a Sonic World. Int J Environ Res Public Health 2021; 19:278. [PMID: 35010538 PMCID: PMC8751057 DOI: 10.3390/ijerph19010278]
Abstract
This paper argues for a biological conception of music listening as an evolutionary achievement that is related to a long history of cognitive and affective-emotional functions, which are grounded in basic homeostatic regulation. Starting from the three levels of description, the acoustic description of sounds, the neurological level of processing, and the psychological correlates of neural stimulation, it conceives of listeners as open systems that are in continuous interaction with the sonic world. By monitoring and altering their current state, they can try to stay within the limits of operating set points in the pursuit of a controlled state of dynamic equilibrium, which is fueled by interoceptive and exteroceptive sources of information. Listening, in this homeostatic view, can be adaptive and goal-directed with the aim of maintaining the internal physiology and directing behavior towards conditions that make it possible to thrive by seeking out stimuli that are valued as beneficial and worthy, or by attempting to avoid those that are annoying and harmful. This calls forth the mechanisms of pleasure and reward, the distinction between pleasure and enjoyment, the twin notions of valence and arousal, the affect-related consequences of music listening, the role of affective regulation and visceral reactions to the sounds, and the distinction between adaptive and maladaptive listening.
Affiliation(s)
- Mark Reybrouck
- Faculty of Arts, University of Leuven, 3000 Leuven, Belgium
- Department of Art History, Musicology and Theater Studies, IPEM Institute for Psychoacoustics and Electronic Music, 9000 Ghent, Belgium
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, 61-712 Poznan, Poland
- David Welch
- Institute Audiology Section, School of Population Health, University of Auckland, Auckland 2011, New Zealand

48
Krizman J, Tierney A, Nicol T, Kraus N. Listening in the Moment: How Bilingualism Interacts With Task Demands to Shape Active Listening. Front Neurosci 2021; 15:717572. [PMID: 34955707 PMCID: PMC8702653 DOI: 10.3389/fnins.2021.717572]
Abstract
While there is evidence for bilingual enhancements of inhibitory control and auditory processing, two processes that are fundamental to daily communication, it is not known how bilinguals utilize these cognitive and sensory enhancements during real-world listening. To test our hypothesis that bilinguals engage their enhanced cognitive and sensory processing in real-world listening situations, bilinguals and monolinguals performed a selective attention task involving competing talkers, a common demand of everyday listening, and then later passively listened to the same competing sentences. During the active and passive listening periods, evoked responses to the competing talkers were collected to understand how online auditory processing facilitates active listening and if this processing differs between bilinguals and monolinguals. Additionally, participants were tested on a separate measure of inhibitory control to see whether inhibitory control abilities were related to performance on the selective attention task. We found that although monolinguals and bilinguals performed similarly on the selective attention task, the groups differed in the neural and cognitive processes engaged to perform this task, compared to when they were passively listening to the talkers. Specifically, during active listening monolinguals had enhanced cortical phase consistency, while bilinguals demonstrated enhanced subcortical phase consistency in the response to the pitch contours of the sentences, particularly during passive listening. Moreover, bilinguals' performance on the inhibitory control test was related to performance on the selective attention test, a relationship that was not seen for monolinguals. These results are consistent with the hypothesis that bilinguals utilize inhibitory control and enhanced subcortical auditory processing in everyday listening situations to engage with sound in ways that are different than monolinguals.
Affiliation(s)
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Adam Tierney
- The ALPHALAB, Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Departments of Neurobiology and Otolaryngology, Northwestern University, Evanston, IL, United States

49
Krizman J, Rotondo EK, Nicol T, Kraus N, Bieszczad KM. Sex differences in auditory processing vary across estrous cycle. Sci Rep 2021; 11:22898. [PMID: 34819558 PMCID: PMC8613396 DOI: 10.1038/s41598-021-02272-5]
Abstract
In humans, females process a sound's harmonics more robustly than males. As estrogen regulates auditory plasticity in a sex-specific manner in seasonally breeding animals, estrogen signaling is one hypothesized mechanism for this difference in humans. To investigate whether sex differences in harmonic encoding vary similarly across the reproductive cycle of mammals, we recorded frequency-following responses (FFRs) to a complex sound in male and female rats. Female FFRs were collected during both low and high levels of circulating estrogen during the estrous cycle. Overall, female rodents showed more robust harmonic encoding than male rodents, and harmonic encoding was stronger during periods of greater estrogen production in the females. These results argue that hormonal differences, specifically estrogen, underlie sex differences in harmonic encoding in rodents and suggest that a similar mechanism may underlie the differences seen in humans.
Affiliation(s)
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, 60208, USA
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA
- Elena K Rotondo
- Department of Psychology-Behavioral and Systems Neuroscience, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA
- Trent Nicol
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, 60208, USA
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA
- Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL, 60208, USA
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA
- Department of Neurobiology, Northwestern University, Evanston, IL, 60208, USA
- Department of Otolaryngology, Northwestern University, Chicago, IL, 60611, USA
- Kasia M Bieszczad
- Department of Psychology-Behavioral and Systems Neuroscience, Rutgers, The State University of New Jersey, Piscataway, NJ, 08854, USA

50
Perugia E, BinKhamis G, Schlittenlacher J, Kluk K. On prediction of aided behavioural measures using speech auditory brainstem responses and decision trees. PLoS One 2021; 16:e0260090. [PMID: 34784399 PMCID: PMC8594837 DOI: 10.1371/journal.pone.0260090]
Abstract
Current clinical strategies to assess benefits from hearing aids (HAs) are based on self-reported questionnaires and speech-in-noise (SIN) tests, which require behavioural cooperation. Instead, objective measures based on Auditory Brainstem Responses (ABRs) to speech stimuli would not require the individuals' cooperation. Here, we re-analysed an existing dataset to predict behavioural measures with speech-ABRs using regression trees. Ninety-two HA users completed a self-reported questionnaire (SSQ-Speech) and performed two aided SIN tests: sentences in noise (BKB-SIN) and vowel-consonant-vowels (VCV) in noise. Speech-ABRs were evoked by a 40 ms [da] and recorded in 2x2 conditions: aided vs. unaided and quiet vs. background noise. For each recording condition, two sets of features were extracted: 1) amplitudes and latencies of speech-ABR peaks, and 2) amplitudes and latencies of speech-ABR F0 encoding. Two regression trees were fitted for each of the three behavioural measures, with either feature set plus age, digit-span forward and backward, and pure tone average (PTA) as possible predictors. The PTA was the only predictor in the SSQ-Speech trees. In the BKB-SIN trees, performance was predicted by the aided latency of peak F in quiet for participants with PTAs between 43 and 61 dB HL. In the VCV trees, performance was predicted by the aided F0 encoding latency and the aided amplitude of peak VA in quiet for participants with PTAs ≤ 47 dB HL. These findings indicate that PTA was more informative than any speech-ABR measure, as the latter were relevant only for a subset of the participants. Therefore, speech-ABRs evoked by a 40 ms [da] are not a clinical predictor of behavioural measures in HA users.
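As a rough illustration of how a regression tree arrives at PTA-threshold rules like those reported above, the sketch below fits a one-split tree (a "stump") to synthetic data in which an outcome score drops once PTA exceeds a cutoff. The predictor values, scores, and the 50 dB HL breakpoint are hypothetical, not the study's dataset or its full tree-fitting procedure.

```python
# Toy illustration (synthetic data, hypothetical values): the single-split
# rule a regression tree learns when one predictor (here, PTA) dominates.
# A one-level tree ("stump") stands in for the study's regression trees.
import random

def fit_stump(xs, ys):
    """Find the threshold on x that minimises the squared error of a
    two-leaf tree; return (threshold, left_mean, right_mean)."""
    best = None
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        if not left or not right:
            continue
        ml, mr = sum(left) / len(left), sum(right) / len(right)
        sse = sum((y - ml) ** 2 for y in left) + sum((y - mr) ** 2 for y in right)
        if best is None or sse < best[0]:
            best = (sse, t, ml, mr)
    _, t, ml, mr = best
    return t, ml, mr

def predict(stump, x):
    t, ml, mr = stump
    return ml if x <= t else mr

# Simulated cohort: a questionnaire-style score declines once PTA exceeds
# ~50 dB HL (an invented breakpoint for the demonstration).
rng = random.Random(0)
pta = [rng.uniform(20, 80) for _ in range(200)]
score = [7.5 + rng.gauss(0, 0.5) if p <= 50 else 4.5 + rng.gauss(0, 0.5) for p in pta]

stump = fit_stump(pta, score)
print(f"learned split: PTA <= {stump[0]:.1f} dB HL -> {stump[1]:.1f}, else {stump[2]:.1f}")
```

A full regression tree simply applies this split search recursively over all candidate predictors, which is why a dominant predictor such as PTA can end up as the only variable in the fitted tree.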
Affiliation(s)
- Emanuele Perugia
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Ghada BinKhamis
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Department of Communication and Swallowing Disorders, Rehabilitation Hospital, King Fahad Medical City, Riyadh, Saudi Arabia
- Josef Schlittenlacher
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Karolina Kluk
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, United Kingdom