1. Jin J, Zheng Q, Liu H, Feng K, Bai Y, Ni G. Musical experience enhances time discrimination: Evidence from cortical responses. Ann N Y Acad Sci 2024; 1536:167-176. [PMID: 38829709] [DOI: 10.1111/nyas.15153]
Abstract
Time discrimination, a critical aspect of auditory perception, is influenced by numerous factors. Previous research has suggested that musical experience can restructure the brain, thereby enhancing time discrimination. However, this phenomenon remains underexplored. In this study, we seek to elucidate the enhancing effect of musical experience on time discrimination, utilizing both behavioral and electroencephalogram methodologies. Additionally, we aim to explore, through brain connectivity analysis, the role of increased connectivity in brain regions associated with auditory perception as a potential contributory factor to time discrimination induced by musical experience. The results show that the music-experienced group demonstrated higher behavioral accuracy, shorter reaction time, and shorter P3 and mismatch response latencies as compared to the control group. Furthermore, the music-experienced group had higher connectivity in the left temporal lobe. In summary, our research underscores the positive impact of musical experience on time discrimination and suggests that enhanced connectivity in brain regions linked to auditory perception may be responsible for this enhancement.
Affiliation(s)
- Jiaqi Jin
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Qi Zheng
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Hongxing Liu
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Kunyun Feng
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Yanru Bai
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
  - Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, China
  - State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
- Guangjian Ni
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
  - Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, China
  - State Key Laboratory of Advanced Medical Materials and Devices, Tianjin University, Tianjin, China
2. Momtaz S, Bidelman GM. Effects of Stimulus Rate and Periodicity on Auditory Cortical Entrainment to Continuous Sounds. eNeuro 2024; 11:ENEURO.0027-23.2024. [PMID: 38253583] [PMCID: PMC10913036] [DOI: 10.1523/eneuro.0027-23.2024]
Abstract
The neural mechanisms underlying the exogenous coding and neural entrainment to repetitive auditory stimuli have seen a recent surge of interest. However, few studies have characterized how parametric changes in stimulus presentation alter entrained responses. We examined the degree to which the brain entrains to repeated speech (i.e., /ba/) and nonspeech (i.e., click) sounds using phase-locking value (PLV) analysis applied to multichannel human electroencephalogram (EEG) data. Passive cortico-acoustic tracking was investigated in N = 24 normal young adults utilizing EEG source analyses that isolated neural activity stemming from both auditory temporal cortices. We parametrically manipulated the rate and periodicity of repetitive, continuous speech and click stimuli to investigate how speed and jitter in ongoing sound streams affect oscillatory entrainment. Neuronal synchronization to speech was enhanced at 4.5 Hz (the putative universal rate of speech) and showed a differential pattern to that of clicks, particularly at higher rates. PLV to speech decreased with increasing jitter but remained superior to clicks. Surprisingly, PLV entrainment to clicks was invariant to periodicity manipulations. Our findings provide evidence that the brain's neural entrainment to complex sounds is enhanced and more sensitized when processing speech-like stimuli, even at the syllable level, relative to nonspeech sounds. The fact that this specialization is apparent even under passive listening suggests a priority of the auditory system for synchronizing to behaviorally relevant signals.
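The phase-locking value (PLV) analysis named in this abstract can be illustrated with a short sketch. The code below computes inter-trial PLV for a single channel; the band edges, stimulus rate, and array shapes are illustrative assumptions, not the authors' source-level pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def inter_trial_plv(epochs, fs, band):
    """Inter-trial phase-locking value (PLV) for one channel.

    epochs : (n_trials, n_samples) array of EEG sweeps
    fs     : sampling rate in Hz
    band   : (low, high) analysis band in Hz
    Returns a (n_samples,) PLV time course in [0, 1].
    """
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    filtered = filtfilt(b, a, epochs, axis=1)           # zero-phase band-pass
    phase = np.angle(hilbert(filtered, axis=1))          # instantaneous phase per trial
    return np.abs(np.mean(np.exp(1j * phase), axis=0))   # phase consistency across trials

# Toy example: 100 sweeps containing a 4.5 Hz "entrained" component buried in noise
fs, n_trials, dur = 500, 100, 2.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(0)
epochs = np.sin(2 * np.pi * 4.5 * t) + rng.standard_normal((n_trials, t.size))

plv = inter_trial_plv(epochs, fs, band=(3.5, 5.5))
print(f"mean PLV near 4.5 Hz: {plv.mean():.2f}")
```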
Affiliation(s)
- Sara Momtaz
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, Tennessee 38152
  - Boys Town National Research Hospital, Boys Town, Nebraska 68131
- Gavin M Bidelman
  - Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana 47408
  - Program in Neuroscience, Indiana University, Bloomington, Indiana 47405
3. Zhao R, Yue T, Xu Z, Zhang Y, Wu Y, Bai Y, Ni G, Ming D. Electroencephalogram-based objective assessment of cognitive function level associated with age-related hearing loss. GeroScience 2024; 46:431-446. [PMID: 37273160] [PMCID: PMC10828275] [DOI: 10.1007/s11357-023-00847-w]
Abstract
Age-related hearing loss (ARHL) is a common problem in aging. Numerous longitudinal cohort studies have shown that ARHL is closely related to cognitive function and carries a significant risk of cognitive decline and dementia, a risk that increases with the severity of hearing loss. We designed dual auditory oddball and cognitive task paradigms for subjects with ARHL and obtained Montreal Cognitive Assessment (MoCA) scores for all subjects. Multi-dimensional EEG characteristics were used to explore potential biomarkers of cognitive level in the ARHL group, which showed a significantly lower P300 peak amplitude coupled with prolonged latency. Visual memory, auditory memory, and logical calculation were also assessed during the cognitive task paradigm. In the ARHL group, the alpha-to-beta rhythm energy ratio during the visual and auditory memory retention periods and the wavelet packet entropy during the logical calculation period were significantly reduced. Correlation analysis between these indicators and the subjective scale results of the ARHL group indicated that the auditory P300 component can index attentional resources and information-processing speed, while the alpha-to-beta energy ratio and wavelet packet entropy are potential indicators of working memory and logical computation-related cognitive ability.
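The two EEG features highlighted above, an alpha-to-beta energy ratio and wavelet packet entropy, can be sketched as follows. Band edges, wavelet choice, and decomposition level are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np
import pywt
from scipy.signal import welch

def band_power(x, fs, lo, hi):
    """Mean Welch PSD of x within [lo, hi] Hz."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def alpha_beta_ratio(x, fs):
    """Ratio of alpha (8-13 Hz) to beta (13-30 Hz) band energy."""
    return band_power(x, fs, 8, 13) / band_power(x, fs, 13, 30)

def wavelet_packet_entropy(x, wavelet="db4", level=4):
    """Shannon entropy of the normalized energies of terminal wavelet-packet nodes."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    p = energies / energies.sum()
    return float(-np.sum(p * np.log(p + 1e-12)))

# Toy single-channel segment standing in for a memory-retention epoch
fs = 250
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

print("alpha/beta energy ratio:", round(alpha_beta_ratio(eeg, fs), 2))
print("wavelet packet entropy :", round(wavelet_packet_entropy(eeg), 2))
```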
Affiliation(s)
- Ran Zhao
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin, China
  - Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin, 300072, China
  - Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, 300392, China
- Tao Yue
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin, China
- Zihao Xu
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin, China
- Yunqi Zhang
  - School of Education, Tianjin University, Tianjin, China
- Yubo Wu
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin, China
- Yanru Bai
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin, China
  - Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin, 300072, China
  - Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, 300392, China
- Guangjian Ni
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin, China
  - Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin, 300072, China
  - Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, 300392, China
- Dong Ming
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, No. 92, Weijin Road, Nankai District, Tianjin, China
  - Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin, 300072, China
  - Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin, 300392, China
4. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term neuroplasticity interact during the perceptual learning of concurrent speech. Cereb Cortex 2024; 34:bhad543. [PMID: 38212291] [PMCID: PMC10839853] [DOI: 10.1093/cercor/bhad543]
Abstract
Plasticity from auditory experience shapes the brain's encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ~ 45 min training sessions recorded simultaneously with high-density electroencephalography (EEG). We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. Although both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings reinforce the domain-general benefits of musicianship but reveal that successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity, which first emerge at a cortical level.
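Separating cortical ERPs from brainstem FFRs recorded in the same session is, at its core, band-limited time-locked averaging of the same epochs. A minimal sketch, with illustrative filter bands and epoch counts rather than the authors' settings:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def average_response(epochs, fs, band):
    """Band-limit every sweep, then average across sweeps (time-locked averaging)."""
    b, a = butter(2, np.array(band) / (fs / 2), btype="bandpass")
    return filtfilt(b, a, epochs, axis=1).mean(axis=0)

fs = 2000                                   # FFRs require a high sampling rate
n_trials, n_samples = 1000, int(0.3 * fs)   # 1000 sweeps of 300 ms
rng = np.random.default_rng(2)
epochs = rng.standard_normal((n_trials, n_samples))   # stand-in for epoched EEG

erp = average_response(epochs, fs, band=(1, 30))    # slow cortical potentials
ffr = average_response(epochs, fs, band=(80, 900))  # phase-locked periodicity-following energy
print(erp.shape, ffr.shape)
```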
Affiliation(s)
- Jessica MacLean
  - Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
  - Program in Neuroscience, Indiana University, Bloomington, IN, USA
- Jack Stirn
  - Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Alexandria Sisson
  - Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gavin M Bidelman
  - Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
  - Program in Neuroscience, Indiana University, Bloomington, IN, USA
  - Cognitive Science Program, Indiana University, Bloomington, IN, USA
5. Mittelstadt JK, Kanold PO. Orbitofrontal cortex conveys stimulus and task information to the auditory cortex. Curr Biol 2023; 33:4160-4173.e4. [PMID: 37716349] [PMCID: PMC10602585] [DOI: 10.1016/j.cub.2023.08.059]
Abstract
Auditory cortical neurons modify their response profiles in response to numerous external factors. During task performance, changes in primary auditory cortex (A1) responses are thought to be driven by top-down inputs from the orbitofrontal cortex (OFC), which may lead to response modification on a trial-by-trial basis. While OFC neurons respond to auditory stimuli and project to A1, the function of OFC projections to A1 during auditory tasks is unknown. Here, we observed the activity of putative OFC terminals in A1 in mice by using in vivo two-photon calcium imaging of OFC terminals under passive conditions and during a tone detection task. We found that behavioral activity modulates but is not necessary to evoke OFC terminal responses in A1. OFC terminals in A1 form distinct populations that exclusively respond to either the tone, reward, or error. Using tones against a background of white noise, we found that OFC terminal activity was modulated by the signal-to-noise ratio (SNR) in both the passive and active conditions and that OFC terminal activity varied with SNR, and thus task difficulty in the active condition. Therefore, OFC projections in A1 are heterogeneous in their modulation of auditory encoding and likely contribute to auditory processing under various auditory conditions.
Affiliation(s)
- Jonah K Mittelstadt
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
  - Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA
- Patrick O Kanold
  - Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD 21205, USA
  - Solomon H. Snyder Department of Neuroscience, Johns Hopkins University, Baltimore, MD 21205, USA
  - Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD 21205, USA
6. MacLean J, Stirn J, Sisson A, Bidelman GM. Short- and long-term experience-dependent neuroplasticity interact during the perceptual learning of concurrent speech. bioRxiv [Preprint] 2023:2023.09.26.559640. [PMID: 37808665] [PMCID: PMC10557636] [DOI: 10.1101/2023.09.26.559640]
Abstract
Plasticity from auditory experiences shapes brain encoding and perception of sound. However, whether such long-term plasticity alters the trajectory of short-term plasticity during speech processing has yet to be investigated. Here, we explored the neural mechanisms and interplay between short- and long-term neuroplasticity for rapid auditory perceptual learning of concurrent speech sounds in young, normal-hearing musicians and nonmusicians. Participants learned to identify double-vowel mixtures during ∼45 minute training sessions recorded simultaneously with high-density EEG. We analyzed frequency-following responses (FFRs) and event-related potentials (ERPs) to investigate neural correlates of learning at subcortical and cortical levels, respectively. While both groups showed rapid perceptual learning, musicians showed faster behavioral decisions than nonmusicians overall. Learning-related changes were not apparent in brainstem FFRs. However, plasticity was highly evident in cortex, where ERPs revealed unique hemispheric asymmetries between groups suggestive of different neural strategies (musicians: right hemisphere bias; nonmusicians: left hemisphere). Source reconstruction and the early (150-200 ms) time course of these effects localized learning-induced cortical plasticity to auditory-sensory brain areas. Our findings confirm domain-general benefits for musicianship but reveal successful speech sound learning is driven by a critical interplay between long- and short-term mechanisms of auditory plasticity that first emerge at a cortical level.
7. Lai J, Alain C, Bidelman GM. Cortical-brainstem interplay during speech perception in older adults with and without hearing loss. Front Neurosci 2023; 17:1075368. [PMID: 36816123] [PMCID: PMC9932544] [DOI: 10.3389/fnins.2023.1075368]
Abstract
Introduction: Real-time modulation of brainstem frequency-following responses (FFRs) by online changes in cortical arousal state via the corticofugal (top-down) pathway has been demonstrated previously in young adults and is more prominent in the presence of background noise. FFRs during high cortical arousal states also have a stronger relationship with speech perception. Aging is associated with increased auditory brain responses, which might reflect degraded inhibitory processing within the peripheral and ascending pathways, or changes in attentional control regulation via descending auditory pathways. Here, we tested the hypothesis that online corticofugal interplay is impacted by age-related hearing loss.
Methods: We measured EEG in older adults with normal hearing (NH) and mild-to-moderate hearing loss (HL) while they performed speech identification tasks in different noise backgrounds. We measured α power to index online cortical arousal states during task engagement. Subsequently, we split brainstem speech-FFRs, on a trial-by-trial basis, according to fluctuations in concomitant cortical α power into low- or high-α FFRs to index cortical-brainstem modulation.
Results: We found cortical α power was smaller in the HL than in the NH group. In NH listeners, α-FFR modulation for clear speech (i.e., without noise) resembled that previously observed in younger adults for speech in noise. Cortical-brainstem modulation was further diminished in HL older adults in the clear condition and by noise in NH older adults. Machine-learning classification showed that low-α FFR frequency spectra yielded higher accuracy for classifying listeners' perceptual performance in both NH and HL participants. Moreover, low-α FFRs decreased with increased hearing thresholds at 0.5-2 kHz for clear speech, whereas noise generally reduced low-α FFRs in the HL group.
Discussion: Collectively, our study reveals that cortical arousal state actively shapes brainstem speech representations and provides a potential new mechanism for older listeners' difficulties perceiving speech in cocktail party-like listening situations, in the form of a miscoordination between cortical and subcortical levels of auditory processing.
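The trial-sorting step described in the Methods, splitting brainstem FFR sweeps by concurrent cortical α power, can be sketched as a simple median split. The alpha band and toy data below are assumptions; the study's actual preprocessing and source analysis are more involved.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power_per_trial(cortical_epochs, fs, band=(8, 12)):
    """Mean alpha-band envelope power of each trial; input is (n_trials, n_samples)."""
    b, a = butter(4, np.array(band) / (fs / 2), btype="bandpass")
    env = np.abs(hilbert(filtfilt(b, a, cortical_epochs, axis=1), axis=1))
    return (env ** 2).mean(axis=1)

def split_ffr_by_alpha(ffr_epochs, cortical_epochs, fs):
    """Average the FFR separately for low- and high-alpha trials (median split)."""
    alpha = alpha_power_per_trial(cortical_epochs, fs)
    low = alpha <= np.median(alpha)
    return ffr_epochs[low].mean(axis=0), ffr_epochs[~low].mean(axis=0)

# Toy data: the same trials seen at a "cortical" and a "brainstem" source
fs, n_trials, n_samples = 1000, 600, 500
rng = np.random.default_rng(3)
cortical = rng.standard_normal((n_trials, n_samples))
brainstem = rng.standard_normal((n_trials, n_samples))

low_alpha_ffr, high_alpha_ffr = split_ffr_by_alpha(brainstem, cortical, fs)
print(low_alpha_ffr.shape, high_alpha_ffr.shape)
```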
Affiliation(s)
- Jesyin Lai
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States
  - School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
  - Department of Diagnostic Imaging, St. Jude Children’s Research Hospital, Memphis, TN, United States
- Claude Alain
  - Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
  - Department of Psychology, University of Toronto, Toronto, ON, Canada
- Gavin M. Bidelman (corresponding author)
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States
  - School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States
  - Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States
  - Program in Neuroscience, Indiana University, Bloomington, IN, United States
8. Johne M, Helgers SOA, Alam M, Jelinek J, Hubka P, Krauss JK, Scheper V, Kral A, Schwabe K. Processing of auditory information in forebrain regions after hearing loss in adulthood: Behavioral and electrophysiological studies in a rat model. Front Neurosci 2022; 16:966568. [DOI: 10.3389/fnins.2022.966568]
Abstract
Background: Hearing loss has been proposed as a factor in the development of cognitive impairment in the elderly. These deficits cannot be explained primarily by dysfunctional neuronal networks within the central auditory system. We here tested the impact of hearing loss in adult rats on motor, social, and cognitive function. Furthermore, potential changes in neuronal activity in the medial prefrontal cortex (mPFC) and the inferior colliculus (IC) were evaluated.
Materials and methods: In adult male Sprague Dawley rats, hearing loss was induced under general anesthesia with intracochlear injection of neomycin. Sham-operated and naive rats served as controls. Postsurgical acoustically evoked auditory brainstem response (ABR) measurements verified hearing loss after intracochlear neomycin injection and intact hearing in sham-operated and naive controls. At intervals of 8 weeks and up to 12 months after surgery, rats were tested for locomotor activity (open field) and coordination (Rotarod), for social interaction and preference, and for learning and memory (4-arm baited, 8-arm radial maze test). In a final setting, electrophysiological recordings were performed in the mPFC and the IC.
Results: Locomotor activity did not differ between deaf and control rats, whereas motor coordination on the Rotarod was disturbed in deaf rats (P < 0.05). Learning the concept of the radial maze test was initially disturbed in deaf rats (P < 0.05), whereas retesting every 8 weeks did not show long-term memory deficits. Social interaction and preference were also not affected by hearing loss. Final electrophysiological recordings in anesthetized rats revealed reduced firing rates, enhanced irregular firing, and reduced oscillatory theta band activity (4–8 Hz) in the mPFC of deaf rats compared to controls (P < 0.05). In the IC, reduced oscillatory theta (4–8 Hz) and gamma (30–100 Hz) band activity was found in deaf rats (P < 0.05).
Conclusion: Minor and transient behavioral deficits do not confirm a direct impact of long-term hearing loss on cognitive function in rats. However, the altered neuronal activity in the mPFC and IC after hearing loss indicates effects on neuronal networks in and outside the central auditory system, with potential consequences for cognitive function.
9. Xiu B, Paul BT, Chen JM, Le TN, Lin VY, Dimitrijevic A. Neural responses to naturalistic audiovisual speech are related to listening demand in cochlear implant users. Front Hum Neurosci 2022; 16:1043499. [DOI: 10.3389/fnhum.2022.1043499]
Abstract
There is a weak relationship between clinical and self-reported speech perception outcomes in cochlear implant (CI) listeners. Such poor correspondence may be due to differences in clinical and “real-world” listening environments and stimuli. Speech in the real world is often accompanied by visual cues and background environmental noise, and is generally in a conversational context, all factors that could affect listening demand. Thus, our objectives were to determine if brain responses to naturalistic speech could index speech perception and listening demand in CI users. Accordingly, we recorded high-density electroencephalogram (EEG) while CI users listened to/watched a naturalistic stimulus (i.e., the television show “The Office”). We used continuous EEG to quantify “speech neural tracking” (i.e., temporal response functions, TRFs) to the show’s soundtrack and 8–12 Hz (alpha) brain rhythms commonly related to listening effort. Background noise at three different signal-to-noise ratios (SNRs), +5, +10, and +15 dB, was presented to vary the difficulty of following the television show, mimicking a natural noisy environment. The task also included an audio-only (no video) condition. After each condition, participants subjectively rated listening demand and the degree of words and conversations they felt they understood. Fifteen CI users reported progressively higher listening demand and fewer words and conversations understood with increasing background noise. Listening demand and conversation understanding in the audio-only condition were comparable to those of the highest noise condition (+5 dB). Increasing background noise affected speech neural tracking at a group level, in addition to eliciting strong individual differences. Mixed-effects modeling showed that listening demand and conversation understanding were correlated with early cortical speech tracking, such that high demand and low conversation understanding occurred with lower-amplitude TRFs. In the high-noise condition, greater listening demand was negatively correlated with parietal alpha power, where higher demand was related to lower alpha power. No significant correlations were observed between TRF/alpha measures and clinical speech perception scores. These results are similar to previous findings showing little relationship between clinical speech perception and quality of life in CI users. However, physiological responses to complex natural speech may provide an objective measure of aspects of quality of life, such as self-perceived listening demand.
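Temporal response functions of the kind used here for “speech neural tracking” are commonly estimated by time-lagged ridge regression of the EEG onto the speech envelope. A minimal single-channel sketch follows, with an illustrative lag range and regularization value; the authors' exact estimator may differ.

```python
import numpy as np

def lagged_design(stimulus, lags):
    """Stack time-shifted copies of the stimulus envelope into a design matrix."""
    n = stimulus.size
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    return X

def fit_trf(stimulus, eeg, fs, tmin=0.0, tmax=0.4, ridge=100.0):
    """Ridge-regress one EEG channel onto lagged copies of the stimulus envelope."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_design(stimulus, lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ eeg)
    return lags / fs, w                      # latencies (s) and TRF weights

# Toy example: "EEG" that follows the envelope with a ~100 ms delay
fs, n = 128, 128 * 60
rng = np.random.default_rng(4)
envelope = np.abs(rng.standard_normal(n))
eeg = np.roll(envelope, int(0.1 * fs)) + rng.standard_normal(n)

latencies, trf = fit_trf(envelope, eeg, fs)
print(f"peak TRF latency ~ {latencies[np.argmax(trf)]:.3f} s")
```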
10. Price CN, Bidelman GM. Musical experience partially counteracts temporal speech processing deficits in putative mild cognitive impairment. Ann N Y Acad Sci 2022; 1516:114-122. [PMID: 35762658] [PMCID: PMC9588638] [DOI: 10.1111/nyas.14853]
Abstract
Mild cognitive impairment (MCI) commonly results in more rapid cognitive and behavioral declines than typical aging. Individuals with MCI can exhibit impaired receptive speech abilities that may reflect neurophysiological changes in auditory-sensory processing prior to usual cognitive deficits. Benefits from current interventions targeting communication difficulties in MCI are limited. Yet, neuroplasticity associated with musical experience has been implicated in improving neural representations of speech and offsetting age-related declines in perception. Here, we asked whether these experience-dependent effects of musical experience might extend to aberrant aging and offer some degree of cognitive protection against MCI. During a vowel categorization task, we recorded single-channel electroencephalograms (EEGs) in older adults with putative MCI to evaluate speech encoding across subcortical and cortical levels of the auditory system. Critically, listeners varied in their duration of formal musical experience (0-21 years). Musical experience sharpened temporal precision in auditory cortical responses, suggesting that musical experience produces more efficient processing of acoustic features by counteracting age-related neural delays. Additionally, robustness of brainstem responses predicted the severity of cognitive decline, suggesting that early speech representations are sensitive to preclinical stages of cognitive impairment. Our results extend prior studies by demonstrating positive benefits of musical experience in older adults with emergent cognitive impairments.
Affiliation(s)
- Caitlin N. Price
  - Department of Audiology & Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Gavin M. Bidelman
  - Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana, USA
11. Bidelman GM, Chow R, Noly-Gandon A, Ryan JD, Bell KL, Rizzi R, Alain C. Transcranial Direct Current Stimulation Combined With Listening to Preferred Music Alters Cortical Speech Processing in Older Adults. Front Neurosci 2022; 16:884130. [PMID: 35873829] [PMCID: PMC9298650] [DOI: 10.3389/fnins.2022.884130]
Abstract
Emerging evidence suggests transcranial direct current stimulation (tDCS) can improve cognitive performance in older adults. Similarly, music listening may improve arousal and stimulate subsequent performance on memory-related tasks. We examined the synergistic effects of tDCS paired with music listening on auditory neurobehavioral measures to investigate causal evidence of short-term plasticity in speech processing among older adults. In a randomized sham-controlled crossover study, we measured how combined anodal tDCS over dorsolateral prefrontal cortex (DLPFC) paired with listening to autobiographically salient music alters neural speech processing in older adults compared to either music listening (sham stimulation) or tDCS alone. EEG assays included both frequency-following responses (FFRs) and auditory event-related potentials (ERPs) to trace neuromodulation-related changes at brainstem and cortical levels. Relative to music without tDCS (sham), we found tDCS alone (without music) modulates the early cortical neural encoding of speech in the time frame of ∼100-150 ms. Whereas tDCS by itself appeared to largely produce suppressive effects (i.e., reducing ERP amplitude), concurrent music with tDCS restored responses to those of the music+sham levels. However, the interpretation of this effect is somewhat ambiguous, as this neural modulation could be attributable to a true effect of tDCS or to the presence/absence of music. Still, the combined benefit of tDCS+music (above tDCS alone) was correlated with listeners' education level, suggesting the benefit of neurostimulation paired with music might depend on listener demographics. tDCS changes in speech-FFRs were not observed with DLPFC stimulation. Improvements in working memory from pre- to post-session were also associated with better speech-in-noise listening skills. Our findings provide new causal evidence that combined tDCS+music relative to tDCS alone (i) modulates the early (100-150 ms) cortical encoding of speech and (ii) improves working memory, a cognitive skill which may indirectly bolster noise-degraded speech perception in older listeners.
Affiliation(s)
- Gavin M. Bidelman (corresponding author)
  - Department of Speech, Language and Hearing Sciences, Indiana University Bloomington, Bloomington, IN, United States
  - School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
- Ricky Chow
  - Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Jennifer D. Ryan
  - Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
  - Department of Psychology, University of Toronto, Toronto, ON, Canada
  - Department of Psychiatry, University of Toronto, Toronto, ON, Canada
  - Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Karen L. Bell
  - Department of Audiology, San José State University, San Jose, CA, United States
- Rose Rizzi
  - Department of Speech, Language and Hearing Sciences, Indiana University Bloomington, Bloomington, IN, United States
  - School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
- Claude Alain
  - Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
  - Department of Psychology, University of Toronto, Toronto, ON, Canada
  - Institute of Medical Science, University of Toronto, Toronto, ON, Canada
  - Music and Health Science Research Collaboratory, University of Toronto, Toronto, ON, Canada
12. Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022; 227:203-218. [PMID: 34632538] [PMCID: PMC9044122] [DOI: 10.1007/s00429-021-02398-2]
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a + 10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the + 10 dB SNR and the more-challenging + 2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A. Eckert
  - Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Kenneth I. Vaden
  - Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Jayne B. Ahlstrom
  - Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Carolyn M. McClaskey
  - Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Judy R. Dubno
  - Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
13. Kommajosyula SP, Bartlett EL, Cai R, Ling L, Caspary DM. Corticothalamic projections deliver enhanced responses to medial geniculate body as a function of the temporal reliability of the stimulus. J Physiol 2021; 599:5465-5484. [PMID: 34783016] [PMCID: PMC10630908] [DOI: 10.1113/jp282321]
Abstract
Ageing and challenging signal-in-noise conditions are known to engage the use of cortical resources to help maintain speech understanding. Extensive corticothalamic projections are thought to provide attentional, mnemonic and cognitive-related inputs in support of sensory inferior colliculus (IC) inputs to the medial geniculate body (MGB). Here we show that a decrease in modulation depth, a temporally less distinct periodic acoustic signal, leads to a jittered ascending temporal code, changing MGB unit responses from adapting responses to responses showing repetition enhancement, posited to aid identification of important communication and environmental sounds. Young-adult male Fischer Brown Norway rats, injected with the inhibitory opsin archaerhodopsin T (ArchT) into the primary auditory cortex (A1), were subsequently studied using optetrodes to record single-units in MGB. Decreasing the modulation depth of acoustic stimuli significantly increased repetition enhancement. Repetition enhancement was blocked by optical inactivation of corticothalamic terminals in MGB. These data support a role for corticothalamic projections in repetition enhancement, implying that predictive anticipation could be used to improve neural representation of weakly modulated sounds. KEY POINTS: In response to a less temporally distinct repeating sound with low modulation depth, medial geniculate body (MGB) single units show a switch from adaptation towards repetition enhancement. Repetition enhancement was reversed by blockade of MGB inputs from the auditory cortex. Collectively, these data argue that diminished acoustic temporal cues such as weak modulation engage cortical processes to enhance coding of those cues in auditory thalamus.
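The central stimulus manipulation, lowering the modulation depth of a periodic sound so it becomes temporally less distinct, can be written down directly as a sinusoidally amplitude-modulated (SAM) tone. The carrier and modulation frequencies here are illustrative, not the stimuli used in the study.

```python
import numpy as np

def sam_tone(fc, fm, depth, dur, fs):
    """Sinusoidally amplitude-modulated (SAM) tone.

    fc    : carrier frequency (Hz)
    fm    : modulation frequency (Hz)
    depth : modulation depth, 0 (unmodulated) to 1 (fully modulated)
    """
    t = np.arange(0, dur, 1 / fs)
    envelope = 1 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t)

fs = 44100
shallow = sam_tone(fc=1000, fm=10, depth=0.25, dur=1.0, fs=fs)  # temporally less distinct
deep = sam_tone(fc=1000, fm=10, depth=1.0, dur=1.0, fs=fs)      # fully modulated
print(f"peak amplitudes: shallow {shallow.max():.2f}, deep {deep.max():.2f}")
```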
Affiliation(s)
- Srinivasa P Kommajosyula
  - Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Edward L Bartlett
  - Department of Biological Sciences and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Rui Cai
  - Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Lynne Ling
  - Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Donald M Caspary
  - Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
14. Shukla B, Bidelman GM. Enhanced brainstem phase-locking in low-level noise reveals stochastic resonance in the frequency-following response (FFR). Brain Res 2021; 1771:147643. [PMID: 34473999] [PMCID: PMC8490316] [DOI: 10.1016/j.brainres.2021.147643]
Abstract
In nonlinear systems, the inclusion of low-level noise can paradoxically improve signal detection, a phenomenon known as stochastic resonance (SR). SR has been observed in human hearing whereby sensory thresholds (e.g., signal detection and discrimination) are enhanced in the presence of noise. Here, we asked whether subcortical auditory processing (neural phase locking) shows evidence of SR. We recorded brainstem frequency-following-responses (FFRs) in young, normal-hearing listeners to near-electrophysiological-threshold (40 dB SPL) complex tones composed of 10 iso-amplitude harmonics of 150 Hz fundamental frequency (F0) presented concurrent with low-level noise (+20 to -20 dB SNRs). Though variable and weak across ears, some listeners showed improvement in auditory detection thresholds with subthreshold noise confirming SR psychophysically. At the neural level, low-level FFRs were initially eradicated by noise (expected masking effect) but were surprisingly reinvigorated at select masker levels (local maximum near ∼ 35 dB SPL). These data suggest brainstem phase-locking to near threshold periodic stimuli is enhanced in optimal levels of noise, the hallmark of SR. Our findings provide novel evidence for stochastic resonance in the human auditory brainstem and suggest that under some circumstances, noise can actually benefit both the behavioral and neural encoding of complex sounds.
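Stochastic resonance itself is easy to demonstrate with a toy hard-threshold detector: a subthreshold periodic signal produces no output without noise, the strongest signal-locked output at an intermediate noise level, and weaker output again when noise dominates. This toy model is independent of the authors' FFR recordings and is included only to illustrate the phenomenon.

```python
import numpy as np

def output_power_at_f0(signal_amp, noise_sd, threshold=1.0, fs=1000, dur=10.0, f0=5.0):
    """Power at f0 in the output of a hard-threshold detector driven by signal + noise."""
    rng = np.random.default_rng(5)
    t = np.arange(0, dur, 1 / fs)
    x = signal_amp * np.sin(2 * np.pi * f0 * t) + noise_sd * rng.standard_normal(t.size)
    y = (x > threshold).astype(float)                 # detector "fires" only above threshold
    spectrum = np.abs(np.fft.rfft(y - y.mean())) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    return spectrum[np.argmin(np.abs(freqs - f0))]

# Subthreshold 5 Hz signal (amplitude 0.5 < threshold 1.0): output power at f0 is
# non-monotonic in noise level, the hallmark of stochastic resonance.
for noise_sd in (0.0, 0.2, 0.5, 1.0, 3.0):
    print(f"noise sd {noise_sd:3.1f} -> power at 5 Hz: {output_power_at_f0(0.5, noise_sd):.0f}")
```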
Affiliation(s)
- Bhanu Shukla
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
  - University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
15. Dias JW, McClaskey CM, Harris KC. Early auditory cortical processing predicts auditory speech in noise identification and lipreading. Neuropsychologia 2021; 161:108012. [PMID: 34474065] [DOI: 10.1016/j.neuropsychologia.2021.108012]
Abstract
Individuals typically exhibit better cross-sensory perception following unisensory loss, demonstrating improved perception of information available from the remaining senses and increased cross-sensory use of neural resources. Even individuals with no sensory loss will exhibit such changes in cross-sensory processing following temporary sensory deprivation, suggesting that the brain's capacity for recruiting cross-sensory sources to compensate for degraded unisensory input is a general characteristic of the perceptual process. Many studies have investigated how auditory and visual neural structures respond to within- and cross-sensory input. However, little attention has been given to how general auditory and visual neural processing relates to within and cross-sensory perception. The current investigation examines the extent to which individual differences in general auditory neural processing accounts for variability in auditory, visual, and audiovisual speech perception in a sample of young healthy adults. Auditory neural processing was assessed using a simple click stimulus. We found that individuals with a smaller P1 peak amplitude in their auditory-evoked potential (AEP) had more difficulty identifying speech sounds in difficult listening conditions, but were better lipreaders. The results suggest that individual differences in the auditory neural processing of healthy adults can account for variability in the perception of information available from the auditory and visual modalities, similar to the cross-sensory perceptual compensation observed in individuals with sensory loss.
Affiliation(s)
- James W Dias
  - Medical University of South Carolina, United States
16. Yue T, Chen Y, Zheng Q, Xu Z, Wang W, Ni G. Screening Tools and Assessment Methods of Cognitive Decline Associated With Age-Related Hearing Loss: A Review. Front Aging Neurosci 2021; 13:677090. [PMID: 34335227] [PMCID: PMC8316923] [DOI: 10.3389/fnagi.2021.677090]
Abstract
Strong links between hearing and cognitive function have been confirmed by a growing number of cross-sectional and longitudinal studies. Seniors with age-related hearing loss (ARHL) have a significantly higher cognitive impairment incidence than those with normal hearing. The correlation mechanism between ARHL and cognitive decline is not fully elucidated to date. However, auditory intervention for patients with ARHL may reduce the risk of cognitive decline, as early cognitive screening may improve related treatment strategies. Currently, clinical audiology examinations rarely include cognitive screening tests, partly due to the lack of objective quantitative indicators with high sensitivity and specificity. Questionnaires are currently widely used as a cognitive screening tool, but the subject's performance may be negatively affected by hearing loss. Numerous electroencephalogram (EEG) and magnetic resonance imaging (MRI) studies analyzed brain structure and function changes in patients with ARHL. These objective electrophysiological tools can be employed to reveal the association mechanism between auditory and cognitive functions, which may also find biological markers to be more extensively applied in assessing the progression towards cognitive decline and observing the effects of rehabilitation training for patients with ARHL. In this study, we reviewed clinical manifestations, pathological changes, and causes of ARHL and discussed their cognitive function effects. Specifically, we focused on current cognitive screening tools and assessment methods and analyzed their limitations and potential integration.
Affiliation(s)
- Tao Yue
  - Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
  - Tianjin International Engineering Institute, Tianjin University, Tianjin, China
- Yu Chen
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
  - Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Qi Zheng
  - Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Zihao Xu
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Wei Wang
  - Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Guangjian Ni
  - Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
17. Effects of long-term unilateral cochlear implant use on large-scale network synchronization in adolescents. Hear Res 2021; 409:108308. [PMID: 34343851] [DOI: 10.1016/j.heares.2021.108308]
Abstract
Unilateral cochlear implantation (CI) limits deafness-related changes in the auditory pathways but promotes abnormal cortical preference for the stimulated ear and leaves the opposite ear with little protection from auditory deprivation. In the present study, time-frequency analyses of event-related potentials elicited by stimuli presented to each ear were used to determine effects of unilateral CI use on cortical synchrony. CI-elicited activity in 34 adolescents (15.4±1.9 years of age) who had listened with unilateral CIs for most of their lives prior to bilateral implantation was compared to responses elicited by a 500 Hz tone burst in normal-hearing peers. Phase-locking values between 4 and 60 Hz were calculated for 171 pairs of 19 cephalic recording electrodes. Ear-specific results were found in the normal-hearing group: higher synchronization in low-frequency bands (theta and alpha) from left-ear stimulation in the right hemisphere and more high-frequency activity (gamma band) from right-ear stimulation in the left hemisphere. In the CI group, increased phase synchronization in the theta and beta frequencies with bursts of gamma activity was elicited by the experienced (right) CI between frontal, temporal, and parietal cortical regions in both hemispheres, consistent with increased recruitment of cortical areas involved in attention and higher-order processes, potentially to support unilateral listening. By contrast, activity was globally desynchronized in response to initial stimulation of the naïve (left) ear, suggesting decoupling of these pathways from the cortical hearing network. These data reveal asymmetric auditory development promoted by unilateral CI use, resulting in an abnormally mature neural network.
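The inter-electrode phase-locking analysis described here reduces to the consistency of phase differences between channel pairs. A sketch assuming already band-pass-filtered epochs; the 19-channel montage (171 unique pairs) is taken from the abstract, and everything else is illustrative.

```python
import numpy as np
from scipy.signal import hilbert

def pairwise_plv(epochs):
    """Inter-electrode phase-locking matrix.

    epochs : (n_trials, n_channels, n_samples) band-limited EEG
    Returns an (n_channels, n_channels) matrix of phase-difference consistency.
    """
    phase = np.angle(hilbert(epochs, axis=-1))
    n_ch = epochs.shape[1]
    plv = np.eye(n_ch)
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            diff = phase[:, i, :] - phase[:, j, :]
            plv[i, j] = plv[j, i] = np.abs(np.exp(1j * diff).mean())
    return plv

# 19 electrodes -> 171 unique pairs, as in the abstract (data here are random noise)
rng = np.random.default_rng(6)
epochs = rng.standard_normal((200, 19, 256))
plv = pairwise_plv(epochs)
print("unique pairs:", 19 * 18 // 2, "| example PLV:", round(float(plv[0, 1]), 3))
```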
18. Momtaz S, Moncrieff D, Bidelman GM. Dichotic listening deficits in amblyaudia are characterized by aberrant neural oscillations in auditory cortex. Clin Neurophysiol 2021; 132:2152-2162. [PMID: 34284251] [DOI: 10.1016/j.clinph.2021.04.022]
Abstract
Objective: Children diagnosed with auditory processing disorder (APD) show deficits in processing complex sounds that are associated with difficulties in higher-order language, learning, cognitive, and communicative functions. Amblyaudia (AMB) is a subcategory of APD characterized by abnormally large ear asymmetries in dichotic listening tasks.
Methods: Here, we examined frequency-specific neural oscillations and functional connectivity via high-density electroencephalography (EEG) in children with and without AMB during passive listening to nonspeech stimuli.
Results: Time-frequency maps of these "brain rhythms" revealed stronger phase-locked beta-gamma (~35 Hz) oscillations in AMB participants within bilateral auditory cortex for sounds presented to the right ear, suggesting a hypersynchronization and imbalance of auditory neural activity. Brain-behavior correlations revealed that neural asymmetries in cortical responses predicted the larger-than-normal right-ear advantage seen in participants with AMB. Additionally, we found weaker functional connectivity in the AMB group from right to left auditory cortex, despite their stronger neural responses overall.
Conclusion: Our results reveal abnormally large auditory sensory encoding and an imbalance in communication between cerebral hemispheres (ipsi- to contralateral signaling) in AMB.
Significance: These neurophysiological changes might lead to the functionally poorer behavioral capacity to integrate information between the two ears in children with AMB.
Affiliation(s)
- Sara Momtaz
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Deborah Moncrieff
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman
  - School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
  - University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
19. Price CN, Bidelman GM. Attention reinforces human corticofugal system to aid speech perception in noise. Neuroimage 2021; 235:118014. [PMID: 33794356] [PMCID: PMC8274701] [DOI: 10.1016/j.neuroimage.2021.118014]
Abstract
Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
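The directed ("bottom-up" vs. "top-down") coupling described here can be approximated, in the simplest case, with a Granger-causality test between the two source waveforms. The sketch below uses statsmodels' off-the-shelf test and an assumed lag order; the authors' directed connectivity metric may well differ.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Toy sources: the "brainstem" series drives the "cortical" series two samples later
rng = np.random.default_rng(7)
n = 2000
brainstem = rng.standard_normal(n)
cortex = np.zeros(n)
for t in range(2, n):
    cortex[t] = 0.6 * brainstem[t - 2] + 0.3 * cortex[t - 1] + rng.standard_normal()

# grangercausalitytests asks whether column 2 helps predict column 1
up = grangercausalitytests(np.column_stack([cortex, brainstem]), maxlag=5, verbose=False)    # bottom-up
down = grangercausalitytests(np.column_stack([brainstem, cortex]), maxlag=5, verbose=False)  # top-down

print("bottom-up  p =", round(up[2][0]["ssr_ftest"][1], 4))     # expected: significant
print("top-down   p =", round(down[2][0]["ssr_ftest"][1], 4))   # expected: not significant
```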
Affiliation(s)
- Caitlin N Price
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
  - School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA
- Gavin M Bidelman
  - Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
  - School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA
  - Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
20. Defining the Role of Attention in Hierarchical Auditory Processing. Audiol Res 2021; 11:112-128. [PMID: 33805600] [PMCID: PMC8006147] [DOI: 10.3390/audiolres11010012]
Abstract
Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention's role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain-behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.
21. Mahmud MS, Yeasin M, Bidelman GM. Speech categorization is better described by induced rather than evoked neural activity. J Acoust Soc Am 2021; 149:1644. [PMID: 33765780] [PMCID: PMC8267855] [DOI: 10.1121/10.0003572]
Abstract
Categorical perception (CP) describes how the human brain categorizes speech despite inherent acoustic variability. We examined neural correlates of CP in both evoked and induced electroencephalogram (EEG) activity to evaluate which mode best describes the process of speech categorization. Listeners labeled sounds from a vowel gradient while we recorded their EEGs. From the source-reconstructed EEG, we used band-specific evoked and induced neural activity to build parameter-optimized support vector machine models and assessed how well listeners' speech categorization could be decoded via whole-brain and hemisphere-specific responses. We found that whole-brain evoked β-band activity decoded prototypical from ambiguous speech sounds with ∼70% accuracy. However, induced γ-band oscillations decoded speech categories better, with ∼95% accuracy. Induced high-frequency (γ-band) oscillations dominated CP decoding in the left hemisphere, whereas lower frequencies (θ-band) dominated decoding in the right hemisphere. Moreover, feature selection identified 14 brain regions carrying induced activity and 22 regions of evoked activity that were most salient in describing category-level speech representations. Among the areas and neural regimes explored, induced γ-band modulations were most strongly associated with listeners' behavioral CP. The data suggest that the category-level organization of speech is dominated by relatively high-frequency induced brain rhythms.
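As a rough illustration of the evoked/induced distinction described above, the following sketch (an assumed NumPy/SciPy workflow, not the authors' pipeline) separates phase-locked (evoked) from non-phase-locked (induced) band power in single-trial epochs; the band limits, sampling rate, and array names are hypothetical.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def evoked_induced_power(epochs, fs, band):
    """epochs: (n_trials, n_times) single-trial waveforms from one source/ROI.
    Returns evoked (phase-locked) and induced (total minus evoked) power
    envelopes in the given frequency band. Illustrative sketch only."""
    b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="bandpass")
    filt = filtfilt(b, a, epochs, axis=-1)
    # evoked: average across trials first, then take the analytic envelope
    evoked = np.abs(hilbert(filt.mean(axis=0))) ** 2
    # induced: mean of single-trial envelopes minus the evoked part
    total = (np.abs(hilbert(filt, axis=-1)) ** 2).mean(axis=0)
    return evoked, total - evoked

# example call with assumed gamma-band limits and a 500 Hz sampling rate:
# evoked_g, induced_g = evoked_induced_power(epochs, fs=500, band=(30, 60))
```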
Collapse
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, Tennessee 38152, USA
| | - Mohammed Yeasin
- Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, Tennessee 38152, USA
| | - Gavin M Bidelman
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
| |
Collapse
|
22
|
Carter JA, Bidelman GM. Auditory cortex is susceptible to lexical influence as revealed by informational vs. energetic masking of speech categorization. Brain Res 2021; 1759:147385. [PMID: 33631210 DOI: 10.1016/j.brainres.2021.147385] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 02/15/2021] [Accepted: 02/16/2021] [Indexed: 02/02/2023]
Abstract
Speech perception requires the grouping of acoustic information into meaningful phonetic units via the process of categorical perception (CP). Environmental masking influences speech perception and CP. However, it remains unclear at which stage of processing (encoding, decision, or both) masking affects listeners' categorization of speech signals. The purpose of this study was to determine whether linguistic interference influences the early acoustic-phonetic conversion process inherent to CP. To this end, we measured source-level, event-related brain potentials (ERPs) from auditory cortex (AC) and inferior frontal gyrus (IFG) as listeners rapidly categorized speech sounds along a /da/ to /ga/ continuum presented in three listening conditions: quiet, and in the presence of forward (informational masker) and time-reversed (energetic masker) 2-talker babble noise. Maskers were matched in overall SNR and spectral content and thus varied only in their degree of linguistic interference (i.e., informational masking). We hypothesized a differential effect of informational versus energetic masking on behavioral and neural categorization responses, predicting increased activation of frontal regions when disambiguating speech from noise, especially under lexical-informational masking. We found that (1) informational masking weakens behavioral speech phoneme identification above and beyond energetic masking; (2) low-level AC activity not only codes speech categories but is susceptible to higher-order lexical interference; and (3) identifying speech amidst noise recruits a cross-hemispheric circuit (left AC → right IFG) whose engagement varies according to task difficulty. These findings provide corroborating evidence for top-down influences on the early acoustic-phonetic analysis of speech through a coordinated interplay between frontotemporal brain areas.
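For readers who want to prototype the stimulus manipulation, the sketch below (Python, illustrative only) mixes a target token with forward vs. time-reversed babble at a matched SNR; time reversal preserves the masker's long-term spectrum while removing its linguistic content. Variable names such as target and babble are assumptions, not the study's actual stimuli.

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale the masker so the target-to-masker power ratio equals snr_db, then mix."""
    p_t = np.mean(target ** 2)
    p_m = np.mean(masker ** 2)
    gain = np.sqrt(p_t / (p_m * 10.0 ** (snr_db / 10.0)))
    return target + gain * masker

# target, babble: hypothetical speech-token and 2-talker babble waveforms (same fs and length)
# informational = mix_at_snr(target, babble, snr_db=0)        # forward babble
# energetic     = mix_at_snr(target, babble[::-1], snr_db=0)  # time-reversed babble
```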
Collapse
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA.
| | - Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
| |
Collapse
|
23
|
Campbell J, Sharma A. Frontal Cortical Modulation of Temporal Visual Cross-Modal Re-organization in Adults with Hearing Loss. Brain Sci 2020; 10:brainsci10080498. [PMID: 32751543 PMCID: PMC7465622 DOI: 10.3390/brainsci10080498] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 07/24/2020] [Accepted: 07/27/2020] [Indexed: 11/19/2022] Open
Abstract
Recent research has demonstrated that frontal cortical involvement co-occurs with visual re-organization, suggesting top-down modulation of cross-modal mechanisms. However, it is unclear whether top-down modulation of visual re-organization takes place in mild hearing loss or is dependent upon greater degrees of hearing loss severity. Thus, the purpose of this study was to determine whether frontal top-down modulation of visual cross-modal re-organization increased with hearing loss severity. We recorded visual evoked potentials (VEPs) in response to apparent-motion stimuli in 17 adults with mild-to-moderate hearing loss using 128-channel high-density electroencephalography (EEG). Current density reconstructions (CDRs) were generated using sLORETA to visualize VEP generators in both groups. VEP latency and amplitude in frontal regions of interest (ROIs) were compared between groups and correlated with auditory behavioral measures. Activation of frontal networks in response to visual stimulation increased from mild to moderate hearing loss, with simultaneous activation of the temporal cortex. In addition, group differences in VEP latency and amplitude correlated with auditory behavioral measures. Overall, these findings support the hypothesis that frontal top-down modulation of visual cross-modal re-organization depends upon hearing loss severity.
Collapse
Affiliation(s)
- Julia Campbell
- Central Sensory Processes Laboratory, Department of Communication Sciences and Disorders, University of Texas at Austin, 2504 Whitis Ave a1100, Austin, TX 78712, USA;
| | - Anu Sharma
- Brain and Behavior Laboratory, Institute of Cognitive Science, Department of Speech, Language and Hearing Science, University of Colorado at Boulder, 409 UCB, 2501 Kittredge Loop Drive, Boulder, CO 80309, USA
| |
Collapse
|
24
|
Mahmud MS, Ahmed F, Al-Fahad R, Moinuddin KA, Yeasin M, Alain C, Bidelman GM. Decoding Hearing-Related Changes in Older Adults' Spatiotemporal Neural Processing of Speech Using Machine Learning. Front Neurosci 2020; 14:748. [PMID: 32765215 PMCID: PMC7378401 DOI: 10.3389/fnins.2020.00748] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 06/25/2020] [Indexed: 12/25/2022] Open
Abstract
Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with or without mild hearing loss. We performed source analyses to estimate cortical surface signals from the EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) from each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials (sampled without replacement), to form feature vectors. We adopted a multivariate feature selection method, stability selection and control, to choose features that are consistent over a range of model parameters. We then used parameter-optimized support vector machines (SVMs) as classifiers to investigate the time course and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed lower accuracy of 78.12% [AUC 77.64%; F1-score 78.00%] and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. Separate analyses using left (LH) and right hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured in the RH. Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech), whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm that a larger number of areas, particularly in the RH, is engaged when processing noise-degraded speech information.
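The group-decoding step described above can be approximated with a standard scikit-learn pipeline. The sketch below trains a parameter-optimized SVM and reports cross-validated accuracy, AUC, and F1 on placeholder data; in practice the feature matrix would hold the ROI-by-time source ERPs, and all names here are hypothetical rather than the authors' code.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV, cross_validate
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 68))   # placeholder: 60 observations x 68 ROI features
y = rng.integers(0, 2, size=60)     # placeholder group labels (0 = NH, 1 = mild HL)

clf = make_pipeline(
    StandardScaler(),
    GridSearchCV(SVC(kernel="rbf"),
                 param_grid={"C": [0.1, 1, 10, 100],
                             "gamma": ["scale", 0.01, 0.001]},
                 cv=5),
)
scores = cross_validate(clf, X, y, cv=5, scoring=("accuracy", "roc_auc", "f1"))
print({k: round(float(v.mean()), 3) for k, v in scores.items() if k.startswith("test_")})
```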
Collapse
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
| | - Faruk Ahmed
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
| | - Rakib Al-Fahad
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
| | - Kazi Ashraf Moinuddin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
| | - Mohammed Yeasin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
| | - Claude Alain
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
| | - Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Science Center, Memphis, TN, United States
| |
Collapse
|
25
|
Al-Fahad R, Yeasin M, Bidelman GM. Decoding of single-trial EEG reveals unique states of functional brain connectivity that drive rapid speech categorization decisions. J Neural Eng 2020; 17:016045. [PMID: 31822643 PMCID: PMC7004853 DOI: 10.1088/1741-2552/ab6040] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
OBJECTIVE Categorical perception (CP) is an inherent property of speech perception. The response time (RT) of listeners' perceptual speech identification is highly sensitive to individual differences. While the neural correlates of CP have been well studied in terms of the regional contributions of the brain to behavior, the functional connectivity patterns that signify individual differences in listeners' speed (RT) for speech categorization are less clear. In this study, we introduce a novel approach to address these questions. APPROACH We applied several computational approaches to the EEG, including graph mining, machine learning (i.e., support vector machines), and stability selection, to investigate the unique brain states (functional neural connectivity) that predict the speed of listeners' behavioral decisions. MAIN RESULTS We infer that (i) listeners' perceptual speed is directly related to dynamic variations in their brain connectomics; (ii) global network assortativity and efficiency distinguished fast, medium, and slow RTs; (iii) the functional network underlying speeded decisions became more negatively assortative (i.e., disassortative) for slower RTs; (iv) slower categorical speech decisions caused excessive use of neural resources and more aberrant information flow within the CP circuitry; and (v) slower responders tended to utilize functional brain networks excessively (or inappropriately), whereas fast responders (with lower global efficiency) utilized the same neural pathways but with more restricted organization. SIGNIFICANCE Findings show that neural classifiers (SVMs) coupled with stability selection correctly classify behavioral RTs from functional connectivity alone with over 92% accuracy (AUC = 0.9). Our results corroborate previous studies by supporting the engagement of similar temporal (STG), parietal, motor, and prefrontal regions in CP using an entirely data-driven approach.
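For the two graph metrics highlighted in the main results (assortativity and global efficiency), a minimal NetworkX sketch is shown below. The binarizing threshold and the placeholder connectivity matrix are assumptions for illustration and do not reproduce the study's graph-mining pipeline.

```python
import numpy as np
import networkx as nx

def network_metrics(conn, threshold=0.3):
    """conn: symmetric (n_rois x n_rois) functional-connectivity matrix.
    Threshold to a binary graph and return assortativity and global efficiency."""
    adj = (np.abs(conn) > threshold).astype(int)
    np.fill_diagonal(adj, 0)              # no self-connections
    G = nx.from_numpy_array(adj)
    return {"assortativity": nx.degree_assortativity_coefficient(G),
            "global_efficiency": nx.global_efficiency(G)}

# placeholder connectivity matrix, purely for illustration
rng = np.random.default_rng(1)
conn = rng.random((68, 68))
conn = (conn + conn.T) / 2
print(network_metrics(conn))
```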
Collapse
Affiliation(s)
- Rakib Al-Fahad
- Department of Electrical and Computer Engineering, University of Memphis, Memphis, 38152 TN, USA
| | - Mohammed Yeasin
- Department of Electrical and Computer Engineering, University of Memphis, Memphis, 38152 TN, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
| | - Gavin M. Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
| |
Collapse
|