1
Croom K, Rumschlag JA, Molinaro G, Erickson MA, Binder DK, Huber KM, Razak KA. Developmental trajectory and sex differences in auditory processing in a PTEN-deletion model of autism spectrum disorders. Neurobiol Dis 2024; 200:106628. PMID: 39111703. DOI: 10.1016/j.nbd.2024.106628. Received 04/01/2024; revised 07/31/2024; accepted 08/02/2024. Open Access.
Abstract
Autism Spectrum Disorders (ASD) encompass a wide array of debilitating symptoms, including severe sensory deficits and abnormal language development. Sensory deficits early in development may lead to broader symptomatology in adolescents and adults. The mechanistic links between ASD risk genes, sensory processing and language impairment are unclear. There is also a sex bias in ASD diagnosis and symptomatology. The current study aims to identify the developmental trajectory and genotype- and sex-dependent differences in auditory sensitivity and temporal processing in a Pten-deletion (phosphatase and tensin homolog missing on chromosome 10) mouse model of ASD. Auditory temporal processing is crucial for speech recognition and language development, and deficits can produce language impairments. However, very little is known about the development of temporal processing in ASD animal models, or whether there are sex differences. To address this major gap, we recorded epidural electroencephalography (EEG) signals from the frontal (FC) and auditory (AC) cortex in developing and adult Nse-cre PTEN mice, in which Pten is deleted in specific cortical layers (layers III-V; PTEN conditional knockout, cKO). We quantified resting EEG spectral power distribution, auditory event-related potentials (ERP) and temporal processing in awake, freely moving male and female mice. Temporal processing was measured using a gap-in-noise ASSR (auditory steady state response) stimulus paradigm, in which experimental manipulation of gap duration and modulation depth allows measurement of cortical entrainment to rapid gaps in sounds. Temporal processing was quantified using inter-trial phase clustering (ITPC) values, which capture phase consistency across trials. The results show genotype differences in resting power distribution in PTEN cKO mice throughout development: male and female cKO mice have significantly increased beta power but decreased high-frequency oscillations in the AC and FC.
Both male and female PTEN cKO mice show diminished ITPC in their gap-ASSR responses in the AC and FC compared to control mice. Overall, deficits become more prominent in adult (p60) mice, with cKO mice having significantly increased sound-evoked power and decreased ITPC compared to controls. While both male and female cKO mice demonstrated severe temporal processing deficits across development, female cKO mice showed increased hypersensitivity compared to males, reflected as increased N1 and P2 amplitudes. These data identify a number of novel sensory processing deficits in a PTEN-ASD mouse model that are present from an early age. Abnormal temporal processing and hypersensitive responses may contribute to abnormal development of language function in ASD.
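The ITPC measure used above has a standard definition: the magnitude of the mean of unit-length phase vectors across trials. As an illustration only, not the authors' analysis pipeline, a minimal NumPy/SciPy sketch:

```python
import numpy as np
from scipy.signal import hilbert

def itpc(trials):
    """Inter-trial phase clustering for band-limited EEG epochs.

    trials: array (n_trials, n_samples), already filtered around the
    frequency of interest (e.g., the gap-ASSR modulation rate).
    Returns ITPC per sample in [0, 1]; 1 = perfect phase consistency.
    """
    phase = np.angle(hilbert(trials, axis=1))           # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))  # resultant vector length
```

Perfectly phase-locked responses yield ITPC near 1, while responses with random phase across trials fall toward 1/sqrt(n_trials).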
Affiliation(s)
- Katilynne Croom: Graduate Neuroscience Program, University of California, Riverside, United States of America
- Jeffrey A Rumschlag: Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, United States of America
- Gemma Molinaro: Department of Neuroscience, O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, United States of America
- Michael A Erickson: Psychology Department, University of California, Riverside, United States of America
- Devin K Binder: Graduate Neuroscience Program, University of California, Riverside, United States of America; Biomedical Sciences, School of Medicine, University of California, Riverside, United States of America
- Kimberly M Huber: Department of Neuroscience, O'Donnell Brain Institute, UT Southwestern Medical Center, Dallas, TX, United States of America
- Khaleel A Razak: Graduate Neuroscience Program, University of California, Riverside, United States of America; Psychology Department, University of California, Riverside, United States of America
2
Nemati S, Arjmandi M, Busby N, Bonilha L, Fridriksson J. The impact of age-related hearing loss on cognitive decline: The mediating role of brain age gap. Neuroscience 2024; 551:185-195. PMID: 38838977. DOI: 10.1016/j.neuroscience.2024.05.004. Received 01/12/2024; revised 05/01/2024; accepted 05/03/2024.
Abstract
In recent years, the relationship between age-related hearing loss, cognitive decline, and the risk of dementia has garnered significant attention. The significant variability in brain health and aging among individuals of the same chronological age suggests that a measure of how one's brain ages may better explain hearing-cognition links. The main aim of this study was to investigate the mediating role of Brain Age Gap (BAG) in the association between hearing impairment and cognitive function. This research included 185 participants aged 20-79 years. BAG was estimated as the difference between participants' brain age (estimated from their structural T1-weighted MRI scans) and their chronological age. Cognitive performance was assessed using the Montreal Cognitive Assessment (MoCA) test, while hearing ability was measured using pure-tone thresholds (PTT) and words-in-noise (WIN) perception. Mediation analyses were used to examine the mediating role of BAG in the relationships of age-related hearing loss and WIN perception difficulties with cognition. Participants with poorer hearing sensitivity and WIN perception showed lower MoCA scores, but this effect was indirect: participants with poorer performance on PTT and WIN tests had larger BAG (accelerated brain aging), which in turn was associated with poorer performance on the MoCA test. Mediation analyses showed that BAG partially mediated the relationship between age-related hearing loss and cognitive decline. This study enhances our understanding of the interplay among hearing loss, cognition, and BAG, emphasizing the potential value of incorporating brain age assessments into clinical evaluations to gain insights beyond chronological age and thereby advance strategies for preserving cognitive health in aging populations.
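The mediation logic in this abstract (hearing loss, then BAG, then cognition) follows the standard product-of-coefficients approach: the indirect effect is a·b, where a is the slope of the mediator on the predictor and b the slope of the outcome on the mediator controlling for the predictor. A minimal sketch with a percentile bootstrap confidence interval; this is illustrative only, the variable names are hypothetical, and the study's actual models may differ:

```python
import numpy as np

def indirect_effect(x, m, y, n_boot=2000, seed=0):
    """Product-of-coefficients mediation (X -> M -> Y) with a bootstrap CI."""
    def slopes(dep, *preds):
        # Ordinary least squares with an intercept column.
        X = np.column_stack([np.ones_like(dep), *preds])
        beta, *_ = np.linalg.lstsq(X, dep, rcond=None)
        return beta
    a = slopes(m, x)[1]      # path a: X -> M
    b = slopes(y, x, m)[2]   # path b: M -> Y, controlling for X
    rng = np.random.default_rng(seed)
    n = len(x)
    boot = np.empty(n_boot)
    for k in range(n_boot):
        i = rng.integers(0, n, n)  # resample participants with replacement
        boot[k] = slopes(m[i], x[i])[1] * slopes(y[i], x[i], m[i])[2]
    lo, hi = np.percentile(boot, [2.5, 97.5])
    return a * b, (lo, hi)   # mediation is supported when the CI excludes 0
```

In this framing, a significant a·b with a direct path that remains nonzero corresponds to the partial mediation the study reports.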
Affiliation(s)
- Samaneh Nemati: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Meisam Arjmandi: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Natalie Busby: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
- Leonardo Bonilha: Department of Neurology, University of South Carolina, Columbia, SC, USA
- Julius Fridriksson: Department of Communication Sciences and Disorders, University of South Carolina, Columbia, SC, USA
3
Razzaghipour A, Ashrafi M, Mohammadzadeh A. A Review of Auditory Attention: Neural Mechanisms, Theories, and Affective Disorders. Indian J Otolaryngol Head Neck Surg 2024; 76:2250-2256. PMID: 38883545. PMCID: PMC11169100. DOI: 10.1007/s12070-023-04373-1. Received 10/21/2023; accepted 11/17/2023. Open Access.
Abstract
Attention is a fundamental aspect of human cognitive function and is crucial for essential activities such as learning, social interaction, and routine tasks. Auditory attention, in particular, involves complex interactions and collaboration among multiple brain networks. Recognizing auditory attention impairment, understanding its underlying mechanisms, and identifying the brain regions it activates are essential for developing treatments and interventions for individuals with auditory attention deficits, which underscores the importance of investigating these questions. In the current study, we reviewed the full text of 53 articles on auditory attention, its mechanisms, and its networks, published between 2000 and 2023, retrieved from databases including Science Direct, Google Scholar, ProQuest, and PubMed using the keywords "attention," "auditory attention," "auditory attention impairment," and "theories of attention," and focused on articles that provided substantive discussion within this research domain. The reviewed studies demonstrate that auditory attention is more than an acoustic attribute: it plays a fundamental role in navigating complex acoustic environments, in information processing, and in speech comprehension. We review and summarize the proposed theories of attention and the brain networks involved in different forms of auditory attention. In conclusion, integrating auditory attention assessments, behavioral observations, and an understanding of the neural mechanisms and brain regions implicated in auditory attention is an effective approach for the diagnosis and treatment of attention-related disorders.
Affiliation(s)
- Amirreza Razzaghipour: Student Research Committee, Department of Audiology, Faculty of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Majid Ashrafi: Department of Audiology, Faculty of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- Ali Mohammadzadeh: Department of Audiology, Faculty of Rehabilitation, Shahid Beheshti University of Medical Sciences, Tehran, Iran
4
Ayala SA, Eads A, Kabakoff H, Swartz MT, Shiller DM, Hill J, Hitchcock ER, Preston JL, McAllister T. Auditory and Somatosensory Development for Speech in Later Childhood. J Speech Lang Hear Res 2023; 66:1252-1273. PMID: 36930986. PMCID: PMC10187971. DOI: 10.1044/2022_jslhr-22-00496. Received 08/25/2022; revised 11/29/2022; accepted 12/30/2022.
Abstract
PURPOSE This study collected measures of auditory-perceptual and oral somatosensory acuity in typically developing children and adolescents aged 9-15 years. We aimed to establish reference data that can be used as a point of comparison for individuals with residual speech sound disorder (RSSD), especially RSSD affecting American English rhotics. We examined concurrent validity between tasks and hypothesized that performance on at least some tasks would show a significant association with age, reflecting ongoing refinement of sensory function in later childhood. We also tested for an inverse relationship between performance on auditory and somatosensory tasks, which would support the hypothesis of a trade-off between sensory domains. METHOD Ninety-eight children completed three auditory-perceptual tasks (identification and discrimination of stimuli from a "rake"-"wake" continuum and category goodness judgment for naturally produced words containing rhotics) and three oral somatosensory tasks (bite block with auditory masking, oral stereognosis, and articulatory awareness, which involved explicit judgments of relative tongue position for different speech sounds). Pairwise associations were examined between tasks within each domain and between task performance and age. Composite measures of auditory-perceptual and somatosensory function were used to investigate the possibility of a sensory trade-off. RESULTS Statistically significant associations were observed between the identification and discrimination tasks and between the bite block and articulatory awareness tasks. In addition, significant associations with age were found for the category goodness and bite block tasks. There was no statistically significant evidence of a trade-off between the auditory-perceptual and somatosensory domains. CONCLUSIONS This study provides a multidimensional characterization of speech-related sensory function in older children and adolescents. Complete materials for administering all experimental tasks have been shared, along with measures of central tendency and dispersion for scores in two age subgroups. Ultimately, we hope to apply this information to make customized treatment recommendations for children with RSSD based on sensory profiles.
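The trade-off analysis described above amounts to correlating composite scores across sensory domains. A schematic sketch, using hypothetical data and assumed scoring (the study's actual statistics may differ): z-score each task, average within a domain, then correlate the two composites.

```python
import numpy as np

def composite(*task_scores):
    """Average of z-scored task scores -> one composite per participant."""
    z = [(s - s.mean()) / s.std() for s in task_scores]
    return np.mean(z, axis=0)

def pearson_r(a, b):
    """Pearson correlation between two score vectors."""
    return np.corrcoef(a, b)[0, 1]
```

Under this scheme a sensory trade-off would show up as a significantly negative correlation between the auditory-perceptual and somatosensory composites; the study found no such evidence.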
Affiliation(s)
- Samantha A. Ayala: Department of Communicative Sciences and Disorders, New York University, NY
- Amanda Eads: Department of Communicative Sciences and Disorders, New York University, NY
- Heather Kabakoff: Department of Neurology, New York University Grossman School of Medicine, NY
- Michelle T. Swartz: Department of Speech-Language Pathology, Thomas Jefferson University, Philadelphia, PA
- Douglas M. Shiller: École d'orthophonie et d'audiologie, Faculté de médecine, Université de Montréal, Québec, Canada
- Jennifer Hill: Center for Practice and Research at the Intersection of Information, Society, and Methodology, New York University, NY
- Elaine R. Hitchcock: Department of Communication Sciences and Disorders, Montclair State University, NJ
- Tara McAllister: Department of Communicative Sciences and Disorders, New York University, NY
5
Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. PMID: 36122686. PMCID: PMC10017375. DOI: 10.1016/j.neuroimage.2022.119627. Received 08/15/2022; accepted 09/12/2022. Open Access.
Abstract
Experimental evidence in animals demonstrates that cortical neurons innervate the subcortex bilaterally to tune brainstem auditory coding. Yet the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized that brainstem speech coding is tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α state, we found that low-α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
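The trial-sorting idea in this abstract, categorizing speech-evoked FFRs by concurrent cortical α power, can be sketched as a median split. This is illustrative only; the band edges and the split criterion are assumptions, and the paper's actual pipeline is more involved:

```python
import numpy as np

def split_ffr_by_alpha(cortical_epochs, ffr_epochs, fs):
    """Median-split trials by cortical alpha (8-12 Hz) power.

    cortical_epochs, ffr_epochs: (n_trials, n_samples) arrays from the
    same trials. Returns (mean FFR in low-alpha state, mean FFR in
    high-alpha state).
    """
    freqs = np.fft.rfftfreq(cortical_epochs.shape[1], 1 / fs)
    power = np.abs(np.fft.rfft(cortical_epochs, axis=1)) ** 2
    alpha = power[:, (freqs >= 8) & (freqs <= 12)].mean(axis=1)  # per-trial alpha power
    low = alpha <= np.median(alpha)                              # median split
    return ffr_epochs[low].mean(axis=0), ffr_epochs[~low].mean(axis=0)
```

Comparing the two averaged FFRs (amplitude, decodability, correlation with response times) is then the state-dependent contrast the study describes.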
Affiliation(s)
- Jesyin Lai: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, TN, USA
- Caitlin N Price: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA; Program in Neuroscience, Indiana University, 1101 E 10th St, Bloomington, IN 47405, USA
6
Francis AL. Adding noise is a confounded nuisance. J Acoust Soc Am 2022; 152:1375. PMID: 36182286. DOI: 10.1121/10.0013874. Received 04/05/2022; accepted 08/15/2022.
Abstract
A wide variety of research and clinical assessments involve presenting speech stimuli in the presence of some kind of noise. Here, I selectively review two theoretical perspectives and discuss ways in which these perspectives may help researchers understand the consequences for listeners of adding noise to a speech signal. I argue that adding noise changes more about the listening task than merely making the signal more difficult to perceive. To fully understand the effects of an added noise on speech perception, we must consider not just how much the noise affects task difficulty, but also how it affects all of the systems involved in understanding speech: increasing message uncertainty, modifying attentional demand, altering affective response, and changing motivation to perform the task.
Affiliation(s)
- Alexander L Francis: Department of Speech, Language, and Hearing Sciences, Purdue University, 715 Clinic Drive, West Lafayette, Indiana 47907, USA
7
Borirakarawin M, Punsawad Y. Event-Related Potential-Based Brain-Computer Interface Using the Thai Vowels' and Numerals' Auditory Stimulus Pattern. Sensors (Basel) 2022; 22:5864. PMID: 35957419. PMCID: PMC9371073. DOI: 10.3390/s22155864. Received 06/18/2022; revised 08/01/2022; accepted 08/04/2022.
Abstract
Herein, we developed an auditory stimulus pattern for an event-related potential (ERP)-based brain-computer interface (BCI) system to improve control and communication for people with quadriplegia and visual impairment. We examined auditory stimulus paradigms for multicommand electroencephalogram (EEG)-based BCIs and audio stimulus patterns. With the proposed auditory stimulation, using selected Thai vowel sounds (similar to English vowels) and Thai numeral sounds as easily recognized targets, we explored the ERP responses and classification efficiency from the suggested EEG channels. We also investigated the use of single and multiple loudspeakers for auditory stimuli. Four commands were created using the proposed paradigm, and the experimental paradigm was designed to observe ERP responses and verify the proposed auditory stimulus pattern. A conventional classification method produced four commands from the proposed auditory stimulus pattern. The results established that the proposed auditory stimulation with 20 to 30 trials of stream stimuli could produce a prominent ERP response at the Pz channel. The vowel stimuli achieved higher accuracy than the numeral stimuli at both auditory stimulus intervals (100 and 250 ms). Additionally, multi-loudspeaker patterns with vowel and numeral sound stimulation provided average accuracies greater than 85%. Thus, the proposed auditory stimulation patterns can be implemented in a real-time BCI system to aid the daily activities of quadriplegic patients with visual and tactile impairments. In future work, practical use of the auditory ERP-based BCI system will be demonstrated and verified in a real-world scenario.
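The observation that 20 to 30 stimulus trials suffice for a prominent ERP reflects the usual averaging arithmetic: averaging n trials reduces additive noise by roughly a factor of √n. A toy simulation with a hypothetical P300-like bump (not the authors' stimuli or data):

```python
import numpy as np

def simulate_erp_average(n_trials, fs=250, seed=3):
    """Average n noisy trials of a P300-like bump at 300 ms.

    Returns (trial average, noiseless template). Residual noise in the
    average shrinks roughly as 1/sqrt(n_trials).
    """
    rng = np.random.default_rng(seed)
    t = np.arange(0, 0.8, 1 / fs)
    template = np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))   # P300-like bump
    trials = template + rng.standard_normal((n_trials, t.size))  # unit-variance noise
    return trials.mean(axis=0), template
```

With only a handful of trials the bump is buried in noise; by a few dozen trials the average tracks the template closely, which is consistent with the trial counts reported above.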
Affiliation(s)
| | - Yunyong Punsawad
- School of Informatics, Walailak University, Nakhon Si Thammarat 80160, Thailand
- Informatics Innovative Center of Excellence, Walailak University, Nakhon Si Thammarat 80160, Thailand
8
Beynon AJ, Luijten BM, Mylanus EAM. Intracorporeal Cortical Telemetry as a Step to Automatic Closed-Loop EEG-Based CI Fitting: A Proof of Concept. Audiol Res 2021; 11:691-705. PMID: 34940020. PMCID: PMC8698912. DOI: 10.3390/audiolres11040062. Received 09/29/2021; revised 11/04/2021; accepted 12/09/2021. Open Access.
Abstract
Electrically evoked auditory potentials have been used to predict auditory thresholds in patients with a cochlear implant (CI). However, with the exception of electrically evoked compound action potentials (eCAP), conventional extracorporeal EEG recording devices are still needed. Until now, built-in (intracorporeal) back-telemetry options have been limited to eCAPs; intracorporeal recording of auditory responses beyond the cochlea is still lacking. This study describes the feasibility of obtaining longer-latency cortical responses by concatenating interleaved short recording time windows of the kind used for eCAP recordings. Extracochlear reference electrodes were dedicated to recording cortical responses, while intracochlear electrodes were used for stimulation, enabling intracorporeal telemetry (i.e., without an EEG device) to assess higher cortical processing in CI recipients. Simultaneous extra- and intracorporeal recordings showed that it is feasible to obtain intracorporeal slow vertex potentials with a CI similar to those obtained by conventional extracorporeal EEG recordings. Our data demonstrate a proof of concept of closed-loop intracorporeal auditory cortical response telemetry (ICT) with a cochlear implant device. This research breaks new ground for next-generation CI devices that assess higher cortical neural processing based on acute or continuous EEG telemetry to enable individualized automatic and/or adaptive CI fitting with only a CI.
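The core trick described, recovering a long-latency response from hardware that only captures short eCAP-style windows, can be pictured as stitching consecutive windows into one epoch. A schematic sketch only; the device's actual timing, inter-window gaps, and artifact handling are not modeled:

```python
import numpy as np

def concatenate_windows(windows, n_per_epoch):
    """Stitch runs of back-to-back short recording windows into long epochs.

    windows: (n_windows, n_samples) array of consecutive short recordings.
    Returns (n_epochs, n_per_epoch * n_samples); leftover windows are dropped.
    """
    n_epochs = windows.shape[0] // n_per_epoch
    return windows[:n_epochs * n_per_epoch].reshape(n_epochs, -1)
```

Averaging the stitched epochs across stimulus repetitions would then expose slow cortical potentials that outlast any single recording window.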
Affiliation(s)
- Andy J. Beynon (corresponding author): Vestibular & Auditory Evoked Potential Lab, Department Oto-Rhino-Laryngology, Head & Neck Surgery, 6525 EX Nijmegen, The Netherlands; Hearing & Implants, Department Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
- Bart M. Luijten: Hearing & Implants, Department Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center Medical Neuroscience, 6525 EX Nijmegen, The Netherlands
- Emmanuel A. M. Mylanus: Hearing & Implants, Department Oto-Rhino-Laryngology, Head & Neck Surgery, Donders Center Medical Neuroscience, 6525 EX Nijmegen, The Netherlands