1. Matulyte G, Parciauskaite V, Bjekic J, Pipinis E, Griskova-Bulanova I. Gamma-Band Auditory Steady-State Response and Attention: A Systemic Review. Brain Sci 2024; 14:857. PMID: 39335353; PMCID: PMC11430480; DOI: 10.3390/brainsci14090857
Abstract
The auditory steady-state response (ASSR) is the result of the brain's ability to follow and entrain its oscillatory activity to the phase and frequency of periodic auditory stimulation. Gamma-band ASSR has been increasingly investigated with the intention of applying it in the diagnosis of neuropsychiatric disorders as well as in brain-computer interface technologies. However, it is still debatable whether attention can influence the ASSR, as findings on attentional effects on the ASSR are equivocal. In our study, we aimed to systematically review all known articles on the attentional modulation of gamma-band ASSRs. The initial literature search yielded 1283 papers; after removal of duplicates and ineligible articles, 49 original studies were included in the final analysis. Most of the analyzed studies demonstrated ASSR modulation under differing attention levels, but studies reporting mixed or non-significant results were also identified. A high diversity of methodological approaches, including the stimulus type and ASSR recording modality as well as the tasks employed to modulate attention, was identified and emphasized as the main cause of inconsistent results across studies. The impact of training, inter-individual variability, and time of focus was also addressed.
Affiliation(s)
- Giedre Matulyte: Life Sciences Centre, Institute of Biosciences, Vilnius University, Sauletekio ave 7, LT-10257 Vilnius, Lithuania
- Vykinta Parciauskaite: Life Sciences Centre, Institute of Biosciences, Vilnius University, Sauletekio ave 7, LT-10257 Vilnius, Lithuania
- Jovana Bjekic: Human Neuroscience Group, Institute for Medical Research, University of Belgrade, Dr Subotića 4, 11000 Belgrade, Serbia
- Evaldas Pipinis: Life Sciences Centre, Institute of Biosciences, Vilnius University, Sauletekio ave 7, LT-10257 Vilnius, Lithuania
- Inga Griskova-Bulanova: Life Sciences Centre, Institute of Biosciences, Vilnius University, Sauletekio ave 7, LT-10257 Vilnius, Lithuania
2. Skoe E, Kraus N. Neural Delays in Processing Speech in Background Noise Minimized after Short-Term Auditory Training. Biology 2024; 13:509. PMID: 39056702; PMCID: PMC11273880; DOI: 10.3390/biology13070509
Abstract
Background noise disrupts the neural processing of sound, resulting in delayed and diminished far-field auditory-evoked responses. In young adults, we previously provided evidence that cognitively based short-term auditory training can ameliorate the impact of background noise on the frequency-following response (FFR), leading to greater neural synchrony to the speech fundamental frequency (F0) in noisy listening conditions. In this same dataset (55 healthy young adults), we now examine whether training-related changes extend to the latency of the FFR, with the prediction of faster neural timing after training. FFRs were measured on two days separated by ~8 weeks. FFRs were elicited by the syllable "da" presented at a signal-to-noise ratio (SNR) of +10 dB relative to a background of multi-talker noise. Half of the participants completed 20 sessions of computerized training (Listening and Communication Enhancement Program, LACE) between test sessions, while the other half served as controls. In both groups, half of the participants were non-native speakers of English. In the control group, response latencies were unchanged at retest, whereas in the training group response latencies became earlier. The findings suggest that auditory training can improve how the adult nervous system responds in noisy listening conditions, as demonstrated by decreased response latencies.
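The +10 dB signal-to-noise ratio used in such stimulus paradigms is a standard mixing step: the noise track is scaled relative to the speech token before presentation. A minimal sketch of that scaling on synthetic signals (the signals and names below are illustrative, not the study's stimuli):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Scale `noise` so that the RMS-based signal-to-noise ratio of the mix
    equals `snr_db`, then return the mixture and the scaled noise."""
    rms_s = np.sqrt(np.mean(signal ** 2))
    rms_n = np.sqrt(np.mean(noise ** 2))
    target_rms_n = rms_s / (10 ** (snr_db / 20))  # dB -> amplitude ratio
    scaled_noise = noise * (target_rms_n / rms_n)
    return signal + scaled_noise, scaled_noise

# Synthetic example: a 100 Hz tone (stand-in for the "da" token) in
# Gaussian noise (stand-in for multi-talker babble) at +10 dB SNR.
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 100 * t)
rng = np.random.default_rng(0)
noise = rng.standard_normal(fs)
mix, scaled = mix_at_snr(speech, noise, 10.0)
snr_out = 20 * np.log10(np.sqrt(np.mean(speech ** 2))
                        / np.sqrt(np.mean(scaled ** 2)))
```

The same RMS-based scaling applies whatever the noise type (speech-shaped noise, babble); only the noise track changes.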
Affiliation(s)
- Erika Skoe: Department of Speech, Language, and Hearing Sciences, University of Connecticut, Storrs, CT 06269, USA
- Nina Kraus: Department of Communication Sciences; Cognitive Sciences, Institute for Neuroscience; Department of Neurobiology and Physiology; Department of Linguistics; Department of Otolaryngology, Northwestern University, Evanston, IL 60208, USA
3. Schüller A, Schilling A, Krauss P, Reichenbach T. The Early Subcortical Response at the Fundamental Frequency of Speech Is Temporally Separated from Later Cortical Contributions. J Cogn Neurosci 2024; 36:475-491. PMID: 38165737; DOI: 10.1162/jocn_a_02103
Abstract
Most parts of speech are voiced, exhibiting a degree of periodicity with a fundamental frequency and many higher harmonics. Some neural populations respond to this temporal fine structure, in particular at the fundamental frequency. This frequency-following response (FFR) to speech consists of both subcortical and cortical contributions and can be measured through EEG as well as through magnetoencephalography (MEG), although the two techniques differ in the aspects of neural activity that they capture: EEG is sensitive to radial, tangential, and deep sources, whereas MEG is largely restricted to tangential and superficial neural activity. EEG responses to continuous speech have shown an early subcortical contribution at a latency of around 9 msec, in agreement with MEG measurements in response to short speech tokens, whereas MEG responses to continuous speech have not yet revealed such an early component. Here, we analyze MEG responses to long segments of continuous speech. We find an early subcortical response at latencies of 4-11 msec, followed by later right-lateralized cortical activities at delays of 20-58 msec as well as potential subcortical activities. Our results show that the early subcortical component of the FFR to continuous speech can be measured from MEG in populations of participants and that its latency agrees with that measured with EEG. They furthermore show that the early subcortical component is temporally well separated from later cortical contributions, enabling an independent assessment of both components with respect to further aspects of speech processing.
Affiliation(s)
- Patrick Krauss: Friedrich-Alexander-Universität Erlangen-Nürnberg; Universitätsklinikum Erlangen
4. Commuri V, Kulasingham JP, Simon JZ. Cortical responses time-locked to continuous speech in the high-gamma band depend on selective attention. Front Neurosci 2023; 17:1264453. PMID: 38156264; PMCID: PMC10752935; DOI: 10.3389/fnins.2023.1264453
Abstract
Auditory cortical responses to speech obtained by magnetoencephalography (MEG) show robust speech tracking of the speaker's fundamental frequency in the high-gamma band (70-200 Hz), but little is currently known about whether such responses depend on the focus of selective attention. In this study, 22 human subjects listened to concurrent, fixed-rate speech from male and female speakers and were asked to selectively attend to one speaker at a time while their neural responses were recorded with MEG. The male speaker's pitch range coincided with the lower range of the high-gamma band, whereas the female speaker's higher pitch range had much less overlap, and only at the upper end of the high-gamma band. Neural responses were analyzed using the temporal response function (TRF) framework. As expected, the responses demonstrate robust speech tracking of the fundamental frequency in the high-gamma band, but only for the male's speech, with a peak latency of ~40 ms. Critically, the response magnitude depends on selective attention: the response to the male speech is significantly greater when that speech is attended than when it is not, under acoustically identical conditions. This is a clear demonstration that even very early cortical auditory responses are influenced by top-down, cognitive, neural processing mechanisms.
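The temporal response function (TRF) framework mentioned above models the neural response as a lagged linear function of a stimulus feature, typically fit with regularized (ridge) regression. A minimal single-channel sketch on synthetic data (the lag range and regularization strength are illustrative assumptions, not the authors' settings):

```python
import numpy as np

def fit_trf(stimulus, response, n_lags, alpha=1.0):
    """Estimate a TRF w such that response[t] ~= sum_k w[k] * stimulus[t - k].
    Builds a lagged design matrix and solves ridge regression in closed form."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]          # column k = stimulus delayed by k
    w = np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)
    return w

# Synthetic check: the "response" is the stimulus delayed by 5 samples plus
# a little noise, so the recovered TRF should peak at lag 5.
rng = np.random.default_rng(1)
stim = rng.standard_normal(2000)
resp = np.roll(stim, 5)
resp[:5] = 0.0
resp = resp + 0.01 * rng.standard_normal(2000)
trf = fit_trf(stim, resp, n_lags=20)
peak_lag = int(np.argmax(np.abs(trf)))
```

In practice the TRF peak latency (here, lag 5 samples) is what is read off as the response latency, e.g., the ~40 ms high-gamma peak reported above.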
Affiliation(s)
- Vrishab Commuri: Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Jonathan Z. Simon: Department of Electrical and Computer Engineering; Department of Biology; Institute for Systems Research, University of Maryland, College Park, MD, United States
5. Schüller A, Schilling A, Krauss P, Rampp S, Reichenbach T. Attentional Modulation of the Cortical Contribution to the Frequency-Following Response Evoked by Continuous Speech. J Neurosci 2023; 43:7429-7440. PMID: 37793908; PMCID: PMC10621774; DOI: 10.1523/jneurosci.1247-23.2023
Abstract
Selective attention to one of several competing speakers is required for comprehending a target speaker among other voices and for successful communication with them. It has moreover been found to involve neural tracking of low-frequency speech rhythms in the auditory cortex. Effects of selective attention have also been found in subcortical neural activities, in particular regarding the frequency-following response related to the fundamental frequency of speech (speech-FFR). Recent investigations have, however, shown that the speech-FFR contains cortical contributions as well. It remains unclear whether these are also modulated by selective attention. Here we used magnetoencephalography to assess the attentional modulation of the cortical contributions to the speech-FFR. We presented both male and female participants with two competing speech signals and analyzed the cortical responses during attentional switching between the two speakers. Our findings revealed robust attentional modulation of the cortical contribution to the speech-FFR: the neural responses were higher when the speaker was attended than when they were ignored. We also found that, regardless of attention, a voice with a lower fundamental frequency elicited a larger cortical contribution to the speech-FFR than a voice with a higher fundamental frequency. Our results show that the attentional modulation of the speech-FFR does not only occur subcortically but extends to the auditory cortex as well.

SIGNIFICANCE STATEMENT: Understanding speech in noise requires attention to a target speaker. One of the speech features that a listener can use to identify a target voice among others and attend to it is the fundamental frequency, together with its higher harmonics. The fundamental frequency arises from the opening and closing of the vocal folds and is tracked by high-frequency neural activity in the auditory brainstem and in the cortex. Previous investigations showed that this subcortical neural tracking is modulated by selective attention. Here we show that attention affects the cortical tracking of the fundamental frequency as well: it is stronger when a particular voice is attended than when it is ignored.
Affiliation(s)
- Alina Schüller: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
- Achim Schilling: Neuroscience Laboratory, University Hospital Erlangen, 91058 Erlangen, Germany
- Patrick Krauss: Neuroscience Laboratory, University Hospital Erlangen, 91058 Erlangen, Germany; Pattern Recognition Lab, Department Computer Science, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
- Stefan Rampp: Department of Neurosurgery, University Hospital Erlangen, 91058 Erlangen, Germany; Department of Neurosurgery, University Hospital Halle (Saale), 06120 Halle (Saale), Germany; Department of Neuroradiology, University Hospital Erlangen, 91058 Erlangen, Germany
- Tobias Reichenbach: Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
6. Commuri V, Kulasingham JP, Simon JZ. Cortical Responses Time-Locked to Continuous Speech in the High-Gamma Band Depend on Selective Attention. bioRxiv [Preprint] 2023:2023.07.20.549567. PMID: 37546895; PMCID: PMC10401961; DOI: 10.1101/2023.07.20.549567
7. Zhang JX, Brink MT, Yan Y, Goldstein-Piekarski A, Krause AJ, Manber R, Kreibig S, Gross JJ. Daytime affect and sleep EEG activity: A data-driven exploration. J Sleep Res 2023; 32:e13916. PMID: 37156757; PMCID: PMC10524571; DOI: 10.1111/jsr.13916
Abstract
It has long been thought that links between affect and sleep are bidirectional. However, few studies have directly assessed the relationships between (1) pre-sleep affect and sleep electroencephalogram (EEG) activity, and (2) sleep EEG activity and post-sleep affect. This study aims to systematically explore the correlations between pre-/post-sleep affect and EEG activity during sleep. In a community sample of adults (n = 51), we measured participants' positive and negative affect in the evening before sleep and in the morning after sleep. Participants slept at their residence for one night of EEG recording. Using Fourier transforms, the EEG power at each channel was estimated during rapid eye movement (REM) sleep and non-REM sleep across the full range of sleep EEG frequencies. We first present heatmaps of the raw correlations between pre-/post-sleep affect and EEG power during REM and non-REM sleep. We then thresholded the raw correlations at a medium effect size (|r| ≥ 0.3). Using a cluster-based permutation test, we identified a significant cluster indicating a negative correlation between pre-sleep positive affect and EEG power in the alpha frequency range during REM sleep. This result suggests that more positive affect during the daytime may be associated with less fragmented REM sleep that night. Overall, our exploratory results lay the foundation for confirmatory research on the relationship between daytime affect and sleep EEG activity.
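The per-channel spectral power estimation described above (Fourier-based power within a band, e.g., alpha during REM sleep) can be sketched with Welch's method; the sampling rate, band edges, and synthetic signal below are illustrative, not the study's recording parameters:

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, fmin, fmax):
    """Mean power spectral density of `x` within [fmin, fmax] Hz,
    estimated with Welch's averaged-periodogram method."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    mask = (freqs >= fmin) & (freqs <= fmax)
    return psd[mask].mean()

# Synthetic "EEG": a 10 Hz alpha rhythm embedded in noise, 30 s at 256 Hz.
fs = 256
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(2)
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
alpha = band_power(eeg, fs, 8, 12)   # alpha band: contains the oscillation
beta = band_power(eeg, fs, 18, 22)   # control band: noise only
```

Running this per channel and per sleep stage yields the power values that are then correlated with pre-/post-sleep affect scores.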
Affiliation(s)
- Yan Yan: Department of Psychology, Stanford University
- Andrea Goldstein-Piekarski: Department of Psychiatry and Behavioral Sciences, Stanford University; Sierra-Pacific Mental Illness Research, Education and Clinical Center, Palo Alto Veterans Affairs Hospital
- Adam J. Krause: Department of Psychiatry and Behavioral Sciences, Stanford University
- Rachel Manber: Department of Psychiatry and Behavioral Sciences, Stanford University
8. Carter JA, Bidelman GM. Perceptual warping exposes categorical representations for speech in human brainstem responses. Neuroimage 2023; 269:119899. PMID: 36720437; PMCID: PMC9992300; DOI: 10.1016/j.neuroimage.2023.119899
Abstract
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on the surrounding stimulus context. Previous work suggests this acoustic-phonetic mapping and the perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more or less warping of listeners' perceptual categories depending on the stimulus presentation order of a speech continuum (random, forward, or backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found that serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming that identical speech tokens are perceived differently depending on stimulus context. Critically, we further show that neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These effects were observed neither in the stimulus acoustics nor in model FFRs generated via a computational model of cochlear and auditory-nerve transduction, confirming a central origin. Our data reveal that FFRs carry category-level information and suggest that top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuraxis, which might aid understanding by reducing ambiguity inherent to the speech signal.
Affiliation(s)
- Jared A Carter: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Division of Clinical Neuroscience, School of Medicine, Hearing Sciences - Scottish Section, University of Nottingham, Glasgow, Scotland, UK
- Gavin M Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
9. Ma HL, Zeng TA, Jiang L, Zhang M, Li H, Su R, Wang ZX, Chen DM, Xu M, Xie WT, Dang P, Bu XO, Zhang T, Wang TZ. Altered resting-state network connectivity patterns for predicting attentional function in deaf individuals: An EEG study. Hear Res 2023; 429:108696. PMID: 36669260; DOI: 10.1016/j.heares.2023.108696
Abstract
Multiple aspects of brain development are influenced by early sensory loss such as deafness. Despite growing evidence of changes in attentional functions in prelingually, profoundly deaf individuals, the brain mechanisms underlying these attentional changes remain unclear. This study investigated the relationships between differences in attention and differences in the resting-state brain network of deaf individuals from the perspective of brain network connectivity. We recruited 36 deaf individuals and 34 healthy controls (HC). We recorded each participant's resting-state electroencephalogram (EEG) and event-related potential (ERP) data from the Attention Network Test (ANT). The coherence (COH) method and graph theory were used to build brain networks and analyze network connectivity. First, the ERPs in the task state were analyzed. Then, we correlated the topological properties of the network functional connectivity with the ERPs. The results revealed a significant correlation between frontal-occipital connections in the resting state and the alert N1 amplitude in the alpha band. Specifically, clustering coefficients and global and local efficiency correlated negatively with the alert N1 amplitude, whereas the characteristic path length correlated positively with it. In addition, deaf individuals exhibited weaker frontal-occipital connections compared with the HC group. In executive control, the deaf group had longer reaction times and larger P3 amplitudes. However, the orienting function did not differ significantly from that of the HC group. Finally, the alert N1 amplitude in the ANT task for deaf individuals was predicted using a multiple linear regression model based on resting-state EEG network properties. Our results suggest that deafness affects the performance of alerting and executive control, while orienting functions develop similarly to those of hearing individuals. Furthermore, weakened frontal-occipital connections in the deaf brain are a fundamental cause of altered alerting functions. These results reveal important effects of brain networks on attentional function from the perspective of brain connections and provide potential physiological biomarkers for predicting attention.
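The coherence-plus-graph-theory pipeline described above can be sketched as: compute pairwise band-limited coherence between channels, threshold it into an adjacency matrix, then derive topological properties such as the clustering coefficient. A toy example (channel count, band, and threshold are illustrative, not the study's parameters):

```python
import numpy as np
from itertools import combinations
from scipy.signal import coherence

def coherence_matrix(data, fs, fmin, fmax):
    """Pairwise mean coherence (COH) between channels within [fmin, fmax] Hz.
    `data` has shape (n_channels, n_samples)."""
    n_ch = data.shape[0]
    C = np.eye(n_ch)
    for i, j in combinations(range(n_ch), 2):
        f, coh = coherence(data[i], data[j], fs=fs, nperseg=fs)
        band = (f >= fmin) & (f <= fmax)
        C[i, j] = C[j, i] = coh[band].mean()
    return C

def clustering_coefficient(adj):
    """Mean binary clustering coefficient of adjacency matrix `adj`."""
    coeffs = []
    for i in range(adj.shape[0]):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2   # edges among neighbors
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs))

# Synthetic 4-channel "network": channels 0-2 share a 10 Hz source,
# channel 3 is independent noise, so 0-2 should form a connected triangle.
fs, n = 128, 128 * 20
rng = np.random.default_rng(3)
source = np.sin(2 * np.pi * 10 * np.arange(n) / fs)
data = np.vstack([source + 0.3 * rng.standard_normal(n) for _ in range(3)]
                 + [rng.standard_normal(n)])
C = coherence_matrix(data, fs, 9.5, 10.5)
adj = (C > 0.5).astype(int)
np.fill_diagonal(adj, 0)
cc = clustering_coefficient(adj)
```

Characteristic path length and global/local efficiency follow from the same adjacency matrix via shortest-path computations.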
Affiliation(s)
- Hai-Lin Ma: Faculty of Education, Shaanxi Normal University, No. 199 Chang'an Road, Yanta District, Xi'an, Shaanxi 710062, China; Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Tong-Ao Zeng: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Lin Jiang: School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Mei Zhang: College of Special Education, Leshan Normal University, Leshan 614000, China
- Hao Li: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Rui Su: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Zhi-Xin Wang: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China; Department of Psychology, Shandong Normal University, No. 88 East Wenhua Road, Jinan, Shandong 250014, China
- Dong-Mei Chen: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Meng Xu: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Wen-Ting Xie: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Peng Dang: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China
- Xiao-Ou Bu: Plateau Brain Science Research Center, Tibet University/South China Normal University, Lhasa 850012/Guangzhou 510631, China; Faculty of Education, East China Normal University, Shanghai 200062, China
- Tao Zhang: Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Ting-Zhao Wang: Faculty of Education, Shaanxi Normal University, No. 199 Chang'an Road, Yanta District, Xi'an, Shaanxi 710062, China
10. Mai G, Howell P. The possible role of early-stage phase-locked neural activities in speech-in-noise perception in human adults across age and hearing loss. Hear Res 2023; 427:108647. PMID: 36436293; DOI: 10.1016/j.heares.2022.108647
Abstract
Ageing affects auditory neural phase-locked activities, which could increase the challenges that older adults experience during speech-in-noise (SiN) perception. However, evidence for how ageing affects SiN perception through these phase-locked activities is still lacking. It is also unclear whether the influences of ageing on phase-locked activities in response to different acoustic properties affect SiN perception through similar or different mechanisms. The present study addressed these issues by measuring early-stage phase-locked encoding of speech under quiet and noisy backgrounds (speech-shaped noise (SSN) and multi-talker babble) in adults across a wide age range (19-75 years old). Participants passively listened to a repeated vowel whilst the frequency-following response (FFR) to the fundamental frequency, which has primarily subcortical sources, and the cortical phase-locked response to slowly fluctuating acoustic envelopes were recorded. We studied how these activities are affected by age and age-related hearing loss and how they relate to SiN performance (word recognition in sentences in noise). First, we found that the effects of age and hearing loss differ for the FFR and slow-envelope phase-locking. The FFR decreased significantly with age and high-frequency (≥ 2 kHz) hearing loss but increased with low-frequency (< 2 kHz) hearing loss, whilst slow-envelope phase-locking increased significantly with age and with hearing loss across frequencies. Second, the relationships between the two types of phase-locked activities and SiN performance also differed: the FFR and slow-envelope phase-locking corresponded positively to SiN performance under multi-talker babble and SSN, respectively. Finally, we investigated how age and hearing loss affect SiN perception through phase-locked activities via mediation analyses. We showed that both types of activities significantly mediated the relation between age/hearing loss and SiN perception, but in distinct manners. Specifically, the FFR decreased with age and high-frequency hearing loss, which in turn contributed to poorer SiN performance, but increased with low-frequency hearing loss, which in turn contributed to better SiN performance under multi-talker babble. Slow-envelope phase-locking increased with age and hearing loss, which in turn contributed to better SiN performance under both SSN and multi-talker babble. Taken together, the present study provides evidence for distinct neural mechanisms of early-stage auditory phase-locked encoding of different acoustic properties through which ageing affects SiN perception.
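The mediation analyses referred to above can be illustrated with a simple regression-based single-mediator model, where the indirect effect is the product of the predictor-to-mediator slope (a) and the mediator-to-outcome slope controlling for the predictor (b). A sketch on synthetic data (the effect sizes are invented for illustration, not the study's estimates):

```python
import numpy as np

def indirect_effect(x, m, y):
    """Single-mediator model: indirect effect = a * b, where a is the slope
    of m ~ x and b is the slope of y ~ m controlling for x."""
    a = np.polyfit(x, m, 1)[0]
    X = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    b = coefs[2]
    return a * b

# Synthetic data: "age" lowers "FFR strength" (a < 0), and stronger FFR
# aids speech-in-noise scores (b > 0), so age's indirect effect is negative.
rng = np.random.default_rng(4)
age = rng.uniform(19, 75, 500)
ffr = -0.02 * age + 0.1 * rng.standard_normal(500)                    # a ~ -0.02
sin_score = 2.0 * ffr + 0.01 * age + 0.1 * rng.standard_normal(500)   # b ~ 2.0
ab = indirect_effect(age, ffr, sin_score)
```

In practice the significance of `ab` is assessed with bootstrap confidence intervals rather than read off directly.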
Affiliation(s)
- Guangting Mai: National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham NG1 5DU, UK; Academic Unit of Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham NG7 2UH, UK; Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Peter Howell: Department of Experimental Psychology, University College London, London WC1H 0AP, UK
11. Simon JZ, Commuri V, Kulasingham JP. Time-locked auditory cortical responses in the high-gamma band: A window into primary auditory cortex. Front Neurosci 2022; 16:1075369. PMID: 36570848; PMCID: PMC9773383; DOI: 10.3389/fnins.2022.1075369
Abstract
Primary auditory cortex is a critical stage in the human auditory pathway, a gateway between subcortical and higher-level cortical areas. Receiving the output of all subcortical processing, it sends its output on to higher-level cortex. Non-invasive physiological recordings of primary auditory cortex using electroencephalography (EEG) and magnetoencephalography (MEG), however, may not have sufficient specificity to separate responses generated in primary auditory cortex from those generated in underlying subcortical areas or neighboring cortical areas. This limitation is important for investigations of effects of top-down processing (e.g., selective-attention-based) on primary auditory cortex: higher-level areas are known to be strongly influenced by top-down processes, but subcortical areas are often assumed to perform strictly bottom-up processing. Fortunately, recent advances have made it easier to isolate the neural activity of primary auditory cortex from other areas. In this perspective, we focus on time-locked responses to stimulus features in the high gamma band (70-150 Hz) and with early cortical latency (∼40 ms), intermediate between subcortical and higher-level areas. We review recent findings from physiological studies employing either repeated simple sounds or continuous speech, obtaining either a frequency following response (FFR) or temporal response function (TRF). The potential roles of top-down processing are underscored, and comparisons with invasive intracranial EEG (iEEG) and animal model recordings are made. We argue that MEG studies employing continuous speech stimuli may offer particular benefits, in that only a few minutes of speech generates robust high gamma responses from bilateral primary auditory cortex, and without measurable interference from subcortical or higher-level areas.
Affiliation(s)
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD, United States
- Department of Biology, University of Maryland, College Park, College Park, MD, United States
- Institute for Systems Research, University of Maryland, College Park, College Park, MD, United States
- Vrishab Commuri
- Department of Electrical and Computer Engineering, University of Maryland, College Park, College Park, MD, United States
12
Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. [PMID: 36122686 PMCID: PMC10017375 DOI: 10.1016/j.neuroimage.2022.119627]
Abstract
Experimental evidence in animals demonstrates that cortical neurons innervate the subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized that brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay of brainstem-cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found low α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, TN, USA.
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA; Program in Neuroscience, Indiana University, 1101 E 10th St, Bloomington, IN 47405, USA.
13
de la Chapelle A, Savard MA, Restani R, Ghaemmaghami P, Thillou N, Zardoui K, Chandrasekaran B, Coffey EBJ. Sleep affects higher-level categorization of speech sounds, but not frequency encoding. Cortex 2022; 154:27-45. [PMID: 35732089 DOI: 10.1016/j.cortex.2022.04.018]
Abstract
Sleep can increase consolidation of new knowledge and skills. It is less clear whether sleep plays a role in other aspects of experience-dependent neuroplasticity, which underlie important human capabilities such as spoken language processing. Theories of sensory learning differ in their predictions; some imply rapid learning at early sensory levels, while others propose a slow, progressive timecourse such that higher-level categorical representations guide immediate, novice learning, whereas lower-level sensory changes do not emerge until later stages. In this study, we investigated the role of sleep across both behavioural and physiological indices of auditory neuroplasticity. Forty healthy young human adults (23 female) who did not speak a tonal language participated in the study. They learned to categorize non-native Mandarin lexical tones using a sound-to-category training paradigm, and were then randomly assigned to a Nap or Wake condition. Polysomnographic data were recorded to quantify sleep during a 3 h afternoon nap opportunity, or an equivalent period of quiet wakeful activity. Measures of behavioural performance accuracy revealed a significant difference in learning on the sound-to-category training paradigm between the Nap and Wake groups. Conversely, a neural index of fine sound encoding fidelity of speech sounds known as the frequency-following response (FFR) suggested no change due to sleep, and a null model was supported, using Bayesian statistics. Together, these results support theories that propose a slow, progressive and hierarchical timecourse for sensory learning. Sleep may play its biggest role in higher-level learning, although contributions to more protracted processes of plasticity that exceed the study duration cannot be ruled out.
Affiliation(s)
- Aurélien de la Chapelle
- Lyon Neuroscience Research Centre, Lyon, France; Department of Psychology, Concordia University, Montreal, QC, Canada
- Reyan Restani
- Department of Psychology, Concordia University, Montreal, QC, Canada; Université Paris Nanterre, Paris, France
- Noam Thillou
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Khashayar Zardoui
- Department of Psychology, Concordia University, Montreal, QC, Canada
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, USA
- Emily B J Coffey
- Department of Psychology, Concordia University, Montreal, QC, Canada.
14
Etard O, Messaoud RB, Gaugain G, Reichenbach T. No Evidence of Attentional Modulation of the Neural Response to the Temporal Fine Structure of Continuous Musical Pieces. J Cogn Neurosci 2021; 34:411-424. [PMID: 35015867 DOI: 10.1162/jocn_a_01811]
Abstract
Speech and music are spectrotemporally complex acoustic signals that are highly relevant for humans. Both contain a temporal fine structure that is encoded in the neural responses of subcortical and cortical processing centers. The subcortical response to the temporal fine structure of speech has recently been shown to be modulated by selective attention to one of two competing voices. Music similarly often consists of several simultaneous melodic lines, and a listener can selectively attend to a particular one at a time. However, the neural mechanisms that enable such selective attention remain largely enigmatic, not least since most investigations to date have focused on short and simplified musical stimuli. Here, we studied the neural encoding of classical musical pieces in human volunteers, using scalp EEG recordings. We presented volunteers with continuous musical pieces composed of one or two instruments. In the latter case, the participants were asked to selectively attend to one of the two competing instruments and to perform a vibrato identification task. We used linear encoding and decoding models to relate the recorded EEG activity to the stimulus waveform. We show that we can measure neural responses to the temporal fine structure of melodic lines played by one single instrument, at the population level as well as for most individual participants. The neural response peaks at a latency of 7.6 msec and is not measurable past 15 msec. When analyzing the neural responses to the temporal fine structure elicited by competing instruments, we found no evidence of attentional modulation. We observed, however, that low-frequency neural activity exhibited a modulation consistent with the behavioral task at latencies from 100 to 160 msec, in a similar manner to the attentional modulation observed in continuous speech (N100). Our results show that, much like speech, the temporal fine structure of music is tracked by neural activity. 
In contrast to speech, however, this response appears unaffected by selective attention in the context of our experiment.
15
Krizman J, Tierney A, Nicol T, Kraus N. Listening in the Moment: How Bilingualism Interacts With Task Demands to Shape Active Listening. Front Neurosci 2021; 15:717572. [PMID: 34955707 PMCID: PMC8702653 DOI: 10.3389/fnins.2021.717572]
Abstract
While there is evidence for bilingual enhancements of inhibitory control and auditory processing, two processes that are fundamental to daily communication, it is not known how bilinguals utilize these cognitive and sensory enhancements during real-world listening. To test our hypothesis that bilinguals engage their enhanced cognitive and sensory processing in real-world listening situations, bilinguals and monolinguals performed a selective attention task involving competing talkers, a common demand of everyday listening, and then later passively listened to the same competing sentences. During the active and passive listening periods, evoked responses to the competing talkers were collected to understand how online auditory processing facilitates active listening and whether this processing differs between bilinguals and monolinguals. Additionally, participants were tested on a separate measure of inhibitory control to see if inhibitory control abilities correlated with performance on the selective attention task. We found that although monolinguals and bilinguals performed similarly on the selective attention task, the groups differed in the neural and cognitive processes engaged to perform this task, compared to when they were passively listening to the talkers. Specifically, during active listening monolinguals had enhanced cortical phase consistency, while bilinguals demonstrated enhanced subcortical phase consistency in the response to the pitch contours of the sentences, particularly during passive listening. Moreover, bilinguals' performance on the inhibitory control test correlated with performance on the selective attention test, a relationship that was not seen for monolinguals. These results are consistent with the hypothesis that bilinguals utilize inhibitory control and enhanced subcortical auditory processing in everyday listening situations to engage with sound in ways that differ from monolinguals.
Affiliation(s)
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Adam Tierney
- The ALPHALAB, Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, United States
- Departments of Neurobiology and Otolaryngology, Northwestern University, Evanston, IL, United States
16
Gnanateja GN, Rupp K, Llanos F, Remick M, Pernia M, Sadagopan S, Teichert T, Abel TJ, Chandrasekaran B. Frequency-Following Responses to Speech Sounds Are Highly Conserved across Species and Contain Cortical Contributions. eNeuro 2021; 8:ENEURO.0451-21.2021. [PMID: 34799409 PMCID: PMC8704423 DOI: 10.1523/eneuro.0451-21.2021]
Abstract
Time-varying pitch is a vital cue for human speech perception. Neural processing of time-varying pitch has been extensively assayed using scalp-recorded frequency-following responses (FFRs), an electrophysiological signal thought to reflect integrated phase-locked neural ensemble activity from subcortical auditory areas. Emerging evidence increasingly points to a putative contribution of auditory cortical ensembles to the scalp-recorded FFRs. However, the properties of cortical FFRs and precise characterization of laminar sources are still unclear. Here we used direct human intracortical recordings as well as extracranial and intracranial recordings from macaques and guinea pigs to characterize the properties of cortical sources of FFRs to time-varying pitch patterns. We found robust FFRs in the auditory cortex across all species. We leveraged representational similarity analysis as a translational bridge to characterize similarities between the human and animal models. Laminar recordings in animal models showed FFRs emerging primarily from the thalamorecipient layers of the auditory cortex. FFRs arising from these cortical sources significantly contributed to the scalp-recorded FFRs via volume conduction. Our research paves the way for a wide array of studies to investigate the role of cortical FFRs in auditory perception and plasticity.
Affiliation(s)
- G Nike Gnanateja
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Kyle Rupp
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Fernando Llanos
- Department of Linguistics, The University of Texas at Austin, Austin, Texas 78712
- Madison Remick
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Marianny Pernia
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Srivatsun Sadagopan
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Tobias Teichert
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Taylor J Abel
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
17
Mai G, Howell P. Causal Relationship between the Right Auditory Cortex and Speech-Evoked Envelope-Following Response: Evidence from Combined Transcranial Stimulation and Electroencephalography. Cereb Cortex 2021; 32:1437-1454. [PMID: 34424956 PMCID: PMC8971082 DOI: 10.1093/cercor/bhab298]
Abstract
The speech-evoked envelope-following response (EFR) reflects brain encoding of speech periodicity and serves as a biomarker for pitch and speech perception and various auditory and language disorders. Although the EFR is thought to originate from the subcortex, recent research illustrated a right-hemispheric cortical contribution to the EFR. However, it is unclear whether this contribution is causal. This study aimed to establish this causality by combining transcranial direct current stimulation (tDCS) and measurement of the EFR (pre- and post-tDCS) via scalp-recorded electroencephalography. We applied tDCS over the left and right auditory cortices in right-handed normal-hearing participants and examined whether altering cortical excitability via tDCS causes changes in the EFR during monaural listening to speech syllables. We showed significant changes in EFR magnitude when tDCS was applied over the right auditory cortex compared with sham stimulation for the listening ear contralateral to the stimulation site. No such effect was found when tDCS was applied over the left auditory cortex. Crucially, we further observed a hemispheric laterality whereby the aftereffect was significantly greater for tDCS applied over the right than the left auditory cortex in the contralateral ear condition. Our finding thus provides the first evidence that validates the causal relationship between the right auditory cortex and the EFR.
Affiliation(s)
- Guangting Mai
- Hearing Theme, National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham NG1 5DU, UK.,Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, UK.,Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Peter Howell
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
18
Lemos FA, da Silva Nunes AD, de Souza Evangelista CK, Escera C, Taveira KVM, Balen SA. Frequency-Following Response in Newborns and Infants: A Systematic Review of Acquisition Parameters. J Speech Lang Hear Res 2021; 64:2085-2102. [PMID: 34057846 DOI: 10.1044/2021_jslhr-20-00639]
Abstract
Purpose The purpose of this study is to characterize parameters used for frequency-following response (FFR) acquisition in children up to 24 months of age through a systematic review. Method The study was registered in PROSPERO and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations. The search was performed in six databases (LILACS, LIVIVO, PsycINFO, PubMed, Scopus, and Web of Science) and gray literature (Google Scholar, OpenGrey, ProQuest) as well as via manual searches in bibliographic references. Observational studies using speech stimuli to elicit the FFR in normal-hearing infants aged 0 to 24 months were included. No restrictions regarding language and year of publication were applied. Risk of bias was assessed with the Joanna Briggs Institute Critical Appraisal Checklist. Data on stimulus, presentation rate, time window for analysis, number of sweeps, artifact rejection, online filters, stimulated ear, and examination condition were extracted. Results Four hundred fifty-nine studies were identified. After removing duplicates and reading titles and abstracts, 15 articles were included. Seven studies were classified as low risk of bias, seven as moderate risk, and one as high risk. Conclusions There is a consensus in the use of some acquisition parameters of the FFR with speech stimuli, such as the vertical montage, the use of alternating polarity, a sampling rate of 20,000 Hz, and the synthesized /da/ syllable of 40 ms in duration as the preferred stimulus. Although these parameters show some consensus, the results disclosed the lack of a single established protocol for FFR acquisition with speech stimuli in infants in the investigated age range.
Affiliation(s)
- Fabiana Aparecida Lemos
- Speech, Language and Hearing Sciences Graduate Program, Health Sciences Center, Federal University of Rio Grande do Norte (UFRN), Natal, Brazil
- Laboratory of Technological Innovation in Health of the Federal University of Rio Grande do Norte (LAIS/UFRN), Natal, Brazil
- Aryelly Dayane da Silva Nunes
- Speech, Language and Hearing Sciences Graduate Program, Health Sciences Center, Federal University of Rio Grande do Norte (UFRN), Natal, Brazil
- Laboratory of Technological Innovation in Health of the Federal University of Rio Grande do Norte (LAIS/UFRN), Natal, Brazil
- Carolina Karla de Souza Evangelista
- Speech, Language and Hearing Sciences Graduate Program, Health Sciences Center, Federal University of Rio Grande do Norte (UFRN), Natal, Brazil
- Laboratory of Technological Innovation in Health of the Federal University of Rio Grande do Norte (LAIS/UFRN), Natal, Brazil
- Carles Escera
- Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Spain
- Institute of Neurosciences, University of Barcelona, Spain
- Sant Joan de Déu Research Institute, Esplugues de Llobregat Barcelona, Spain
- Sheila Andreoli Balen
- Speech, Language and Hearing Sciences Graduate Program, Health Sciences Center, Federal University of Rio Grande do Norte (UFRN), Natal, Brazil
- Laboratory of Technological Innovation in Health of the Federal University of Rio Grande do Norte (LAIS/UFRN), Natal, Brazil
19
Coffey EBJ, Arseneau-Bruneau I, Zhang X, Baillet S, Zatorre RJ. Oscillatory Entrainment of the Frequency-following Response in Auditory Cortical and Subcortical Structures. J Neurosci 2021; 41:4073-4087. [PMID: 33731448 PMCID: PMC8176755 DOI: 10.1523/jneurosci.2313-20.2021]
Abstract
There is much debate about the existence and function of neural oscillatory mechanisms in the auditory system. The frequency-following response (FFR) is an index of neural periodicity encoding that can provide a vehicle to study entrainment in frequency ranges relevant to speech and music processing. Criteria for entrainment include the presence of poststimulus oscillations and phase alignment between stimulus and endogenous activity. To test the hypothesis of entrainment, in experiment 1 we collected FFR data for a repeated syllable using magnetoencephalography (MEG) and electroencephalography in 20 male and female human adults. We observed significant oscillatory activity after stimulus offset in auditory cortex and subcortical auditory nuclei, consistent with entrainment. In these structures, the FFR fundamental frequency converged from a lower value over 100 ms to the stimulus frequency, consistent with phase alignment, and diverged to a lower value after offset, consistent with relaxation to a preferred frequency. In experiment 2, we tested how transitions between stimulus frequencies affected the MEG FFR to a train of tone pairs in 30 people. We found that the FFR was affected by the frequency of the preceding tone for up to 40 ms at subcortical levels, and even longer durations at cortical levels. Our results suggest that oscillatory entrainment may be an integral part of periodic sound representation throughout the auditory neuraxis. The functional role of this mechanism is unknown, but it could serve as a fine-scale temporal predictor for frequency information, enhancing stability and reducing susceptibility to degradation that could be useful in real-life noisy environments.
SIGNIFICANCE STATEMENT Neural oscillations are proposed to be a ubiquitous aspect of neural function, but their contribution to auditory encoding is not clear, particularly at higher frequencies associated with pitch encoding.
In a magnetoencephalography experiment, we found converging evidence that the frequency-following response has an oscillatory component according to established criteria: poststimulus resonance, progressive entrainment of the neural frequency to the stimulus frequency, and relaxation toward the original state on stimulus offset. In a second experiment, we found that the frequency and amplitude of the frequency-following response to tones are affected by preceding stimuli. These findings support the contribution of intrinsic oscillations to the encoding of sound, and raise new questions about their functional roles, possibly including stabilization and low-level predictive coding.
Affiliation(s)
- Emily B J Coffey
- Department of Psychology, Concordia University, Montreal, Quebec H4B 1R6, Canada
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H3C 3J7, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada
- Isabelle Arseneau-Bruneau
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H3C 3J7, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), McGill University, Montreal, Quebec H3A 1E3, Canada
- Xiaochen Zhang
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai 200030, People's Republic of China
- Sylvain Baillet
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada
- Robert J Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Quebec H3A 2B4, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Quebec H3C 3J7, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Quebec H3G 2A8, Canada
- Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT), McGill University, Montreal, Quebec H3A 1E3, Canada
20
Van Canneyt J, Wouters J, Francart T. Neural tracking of the fundamental frequency of the voice: The effect of voice characteristics. Eur J Neurosci 2021; 53:3640-3653. [DOI: 10.1111/ejn.15229]
Affiliation(s)
- Jan Wouters
- ExpORL Department of Neurosciences KU Leuven Leuven Belgium
- Tom Francart
- ExpORL Department of Neurosciences KU Leuven Leuven Belgium
21
Sohn J, Jung IY, Ku Y, Kim Y. Machine-Learning-Based Rehabilitation Prognosis Prediction in Patients with Ischemic Stroke Using Brainstem Auditory Evoked Potential. Diagnostics (Basel) 2021; 11:673. [PMID: 33918008 PMCID: PMC8068377 DOI: 10.3390/diagnostics11040673]
Abstract
To evaluate the feasibility of brainstem auditory evoked potential (BAEP) for rehabilitation prognosis prediction in patients with ischemic stroke, 181 patients were tested using the Korean version of the modified Barthel index (K-MBI) at admission (basal K-MBI) and discharge (follow-up K-MBI). The BAEP measurements were performed within two weeks of admission on average. The criterion between favorable and unfavorable outcomes was defined as a K-MBI score of 75 at discharge, which was the boundary between moderate and mild dependence in daily living activities. The changes in the K-MBI scores (discharge-admission) were analyzed by nonlinear regression models, including the artificial neural network (ANN) and support vector machine (SVM), with the basal K-MBI score, age, and interpeak latencies (IPLs) of the BAEP (waves I, I-III, and III-V). When including the BAEP features, the correlations of the ANN and SVM regression models increased to 0.70 and 0.64, respectively. In the outcome prediction, the ANN model with the basal K-MBI score, age, and BAEP IPLs exhibited a sensitivity of 92% and specificity of 90%. Our results suggest that the BAEP IPLs used with the basal K-MBI score and age can play an adjunctive role in the prediction of patient rehabilitation prognoses.
Affiliation(s)
- Jangjay Sohn
- Interdisciplinary Program in Bioengineering, Graduate School, Seoul National University, Seoul 03080, Korea;
- Il-Young Jung
- Department of Rehabilitation Medicine, Chungnam National University College of Medicine, Daejeon 35015, Korea;
- Yunseo Ku
- Department of Biomedical Engineering, Chungnam National University College of Medicine, Daejeon 35015, Korea
- Correspondence: (Y.K.); (Y.K.); Tel.: +82-42-280-8613 (Y.K.); +82-44-995-4760 (Y.K.)
- Yeongwook Kim
- Department of Rehabilitation Medicine, Chungnam National University College of Medicine, Daejeon 35015, Korea;
- Correspondence: (Y.K.); (Y.K.); Tel.: +82-42-280-8613 (Y.K.); +82-44-995-4760 (Y.K.)
22
Price CN, Bidelman GM. Attention reinforces human corticofugal system to aid speech perception in noise. Neuroimage 2021; 235:118014. [PMID: 33794356 PMCID: PMC8274701 DOI: 10.1016/j.neuroimage.2021.118014]
Abstract
Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
Affiliation(s)
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA.
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA.
23
Neural encoding of voice pitch and formant structure at birth as revealed by frequency-following responses. Sci Rep 2021; 11:6660. [PMID: 33758251 PMCID: PMC7987955 DOI: 10.1038/s41598-021-85799-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Accepted: 03/04/2021] [Indexed: 11/22/2022] Open
Abstract
Detailed neural encoding of voice pitch and formant structure plays a crucial role in speech perception and is of key importance for the appropriate acquisition of the phonetic repertoire from birth. However, the extent to which newborns can extract pitch and formant-structure information from the temporal envelope and the temporal fine structure of speech sounds, respectively, remains unclear. Here, we recorded the frequency-following response (FFR) elicited by a novel two-vowel, rising-pitch-ending stimulus to simultaneously characterize voice pitch and formant structure encoding accuracy in a sample of neonates and adults. Data revealed that newborns tracked changes in voice pitch reliably and no differently than adults, but exhibited weaker signatures of formant structure encoding, particularly at higher formant frequency ranges. Thus, our results indicate a well-developed encoding of voice pitch at birth, while formant structure representation matures in a frequency-dependent manner. Furthermore, we demonstrate the feasibility of assessing voice pitch and formant structure encoding within clinical evaluation times in a hospital setting, and suggest that this novel stimulus could serve as a tool for longitudinal developmental studies of the auditory system.
24
Defining the Role of Attention in Hierarchical Auditory Processing. Audiol Res 2021; 11:112-128. [PMID: 33805600 PMCID: PMC8006147 DOI: 10.3390/audiolres11010012] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 03/07/2021] [Accepted: 03/10/2021] [Indexed: 01/09/2023] Open
Abstract
Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention's role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain-behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.
25
Neural generators of the frequency-following response elicited to stimuli of low and high frequency: A magnetoencephalographic (MEG) study. Neuroimage 2021; 231:117866. [PMID: 33592244 DOI: 10.1016/j.neuroimage.2021.117866] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 02/08/2021] [Accepted: 02/09/2021] [Indexed: 01/03/2023] Open
Abstract
The frequency-following response (FFR) to periodic complex sounds has gained recent interest in auditory cognitive neuroscience, as it captures with great fidelity the tracking accuracy of periodic sound features in the ascending auditory system. Seminal studies proposed the FFR as a correlate of subcortical sound encoding, yet recent studies aiming to locate its sources have challenged this assumption, demonstrating that the FFR receives some contribution from the auditory cortex. Based on frequency-specific phase-locking capabilities along the auditory hierarchy, we hypothesized that FFRs to higher frequencies would receive less cortical contribution than those to lower frequencies, hence supporting a major subcortical involvement for these high-frequency sounds. Here, we used a magnetoencephalographic (MEG) approach to trace the neural sources of the FFR elicited in healthy adults (N = 19) by low (89 Hz) and high (333 Hz) frequency sounds. FFRs elicited by the high- and low-frequency sounds were clearly observable in the MEG and comparable to those obtained in simultaneous electroencephalographic recordings. Distributed source modeling analyses revealed midbrain, thalamic, and cortical contributions to the FFR, arranged in frequency-specific configurations. Our results showed that the main contribution to the high-frequency sound FFR originated in the inferior colliculus and the medial geniculate body of the thalamus, with no significant cortical contribution. In contrast, the low-frequency sound FFR had a major contribution located in the auditory cortices, and also received contributions originating in the midbrain and thalamic structures. These findings support the multiple-generator hypothesis of the FFR and are relevant for our understanding of the neural encoding of sounds along the auditory hierarchy, suggesting a hierarchical organization of periodicity encoding.
26
Subcortical rather than cortical sources of the frequency-following response (FFR) relate to speech-in-noise perception in normal-hearing listeners. Neurosci Lett 2021; 746:135664. [PMID: 33497718 DOI: 10.1016/j.neulet.2021.135664] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 12/22/2020] [Accepted: 01/13/2021] [Indexed: 12/27/2022]
Abstract
Scalp-recorded frequency-following responses (FFRs) reflect a mixture of phase-locked activity across the auditory pathway. FFRs have been widely used as a neural barometer of complex listening skills, especially speech-in-noise (SIN) perception. Applying individually optimized source reconstruction to speech-FFRs recorded via EEG (FFREEG), we assessed the relative contributions of subcortical [auditory nerve (AN), brainstem/midbrain (BS)] and cortical [bilateral primary auditory cortex, PAC] source generators with the aim of identifying which source(s) drive the brain-behavior relation between FFRs and SIN listening skills. We found FFR strength declined precipitously from AN to PAC, consistent with diminishing phase-locking along the ascending auditory neuroaxis. FFRs to the speech fundamental (F0) were robust to noise across sources, but were largest in subcortical sources (BS > AN > PAC). PAC FFRs were only weakly observed above the noise floor and only at the low pitch of speech (F0 ≈ 100 Hz). Brain-behavior regressions revealed that (i) AN and BS FFRs were sufficient to describe listeners' QuickSIN scores and (ii) contrary to neuromagnetic (MEG) FFRs, neither left nor right PAC FFREEG related to SIN performance. Our findings suggest subcortical sources dominate not only the electrical FFR but also the link between speech-FFRs and SIN processing in normal-hearing adults, as observed in previous EEG studies.
27
Kulasingham JP, Brodbeck C, Presacco A, Kuchinsky SE, Anderson S, Simon JZ. High gamma cortical processing of continuous speech in younger and older listeners. Neuroimage 2020; 222:117291. [PMID: 32835821 PMCID: PMC7736126 DOI: 10.1016/j.neuroimage.2020.117291] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 08/12/2020] [Accepted: 08/16/2020] [Indexed: 12/11/2022] Open
Abstract
Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-locking to the acoustic stimulus at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover case, cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed responses in the high gamma range of 70-200 Hz to continuous speech using neural source-localized reverse correlation and the corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing and their MEG responses were analyzed in the 70-200 Hz band. Consistent with the relative insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a cortical origin with ∼40 ms peak latency and a right hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: a) the 70-200 Hz carrier of the speech, and b) the 70-200 Hz temporal modulations in the spectral envelope of the speech stimulus. The response was dominantly driven by the envelope modulation, with a much weaker contribution from the carrier. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners, but, paradoxically, have stronger cortical low frequency responses. 
In contrast to both these earlier results, this study did not find clear age-related differences in high gamma cortical responses to continuous speech. Cortical responses at FFR-like frequencies shared some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.
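The TRF analysis described above is, at its core, reverse correlation between a stimulus feature and the neural response. As a rough, self-contained sketch (not the authors' source-localized MEG pipeline; the function name, default lag window, and ridge parameter are illustrative assumptions), a ridge-regularized lagged regression can be written as:

```python
import numpy as np

def estimate_trf(stimulus, response, fs, tmin=0.0, tmax=0.25, ridge=1.0):
    """Estimate a temporal response function (TRF) by ridge-regularized
    reverse correlation: the response is modeled as the stimulus feature
    convolved with an unknown kernel (the TRF)."""
    lags = np.arange(int(tmin * fs), int(tmax * fs))
    n = len(stimulus)
    # Lagged design matrix: one column per time lag of the stimulus.
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    # Ridge solution: w = (X'X + lambda*I)^(-1) X'y
    w = np.linalg.solve(X.T @ X + ridge * np.eye(len(lags)), X.T @ response)
    return lags / fs, w
```

Feeding in, say, the 70-200 Hz envelope modulation as `stimulus` and a source-localized response as `response` would yield a kernel whose peak latency can then be read off, analogous to the ~40 ms cortical latency reported above.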
Affiliation(s)
- Joshua P Kulasingham
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States.
- Christian Brodbeck
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States.
- Alessandro Presacco
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States.
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland, United States.
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States.
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States; Institute for Systems Research, University of Maryland, College Park, Maryland, United States; Department of Biology, University of Maryland, College Park, Maryland, United States.
28
Speech frequency-following response in human auditory cortex is more than a simple tracking. Neuroimage 2020; 226:117545. [PMID: 33186711 DOI: 10.1016/j.neuroimage.2020.117545] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 10/29/2020] [Accepted: 11/02/2020] [Indexed: 11/20/2022] Open
Abstract
The human auditory cortex has recently been found to contribute to the frequency-following response (FFR), and the cortical component has been shown to be more relevant to speech perception. However, it is not clear how the cortical FFR may contribute to the processing of the speech fundamental frequency (F0) and dynamic pitch. Using intracranial EEG recordings, we observed a significant FFR at the fundamental frequency (F0) for both speech and speech-like harmonic complex stimuli in the human auditory cortex, even in the missing-fundamental condition. Both the spectral amplitude and the phase coherence of the cortical FFR showed a significant harmonic preference and attenuated from the primary auditory cortex to the surrounding association auditory cortex. The phase coherence of the speech FFR was significantly higher than that of the harmonic complex stimuli, especially in the left hemisphere, showing high timing fidelity of the cortical FFR in tracking dynamic F0 in speech. Spectrally, the frequency band of the cortical FFR largely overlapped with the range of human vocal pitch. Taken together, our study parses the intrinsic properties of the cortical FFR and reveals a preference for speech-like sounds, supporting its potential role in processing speech intonation and lexical tones.
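The two FFR measures named above — spectral amplitude and inter-trial phase coherence at F0 — are standard quantities and can be illustrated compactly. The sketch below is a simplified, generic implementation (the function name and interface are placeholders, not the study's analysis code):

```python
import numpy as np

def ffr_f0_metrics(trials, fs, f0):
    """Spectral amplitude and inter-trial phase coherence (ITPC) of an
    FFR at the fundamental frequency f0.

    trials: (n_trials, n_samples) array of single-trial responses.
    ITPC is ~1 when trials are phase-locked to the stimulus and ~0
    when single-trial phases are random.
    """
    n = trials.shape[1]
    spectra = np.fft.rfft(trials, axis=1)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - f0))              # FFT bin nearest to f0
    # Mean single-trial magnitude, scaled to sinusoid amplitude units.
    amplitude = np.mean(np.abs(spectra[:, k])) * 2.0 / n
    # Average the unit phasors across trials; their resultant length is ITPC.
    unit_phasors = spectra[:, k] / np.abs(spectra[:, k])
    itpc = np.abs(np.mean(unit_phasors))
    return amplitude, itpc
```

Because ITPC discards magnitude, it isolates timing fidelity — which is why phase coherence, rather than amplitude alone, is the natural measure of how faithfully the cortical FFR tracks dynamic F0.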
29
Hartmann T, Weisz N. An Introduction to the Objective Psychophysics Toolbox. Front Psychol 2020; 11:585437. [PMID: 33224075 PMCID: PMC7667244 DOI: 10.3389/fpsyg.2020.585437] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 09/23/2020] [Indexed: 11/24/2022] Open
Abstract
The Psychophysics Toolbox (PTB) is one of the most popular toolboxes for the development of experimental paradigms. It is a very powerful library, providing low-level, platform-independent access to the devices used in an experiment, such as the graphics and the sound card. While this low-level design results in a high degree of flexibility and power, writing paradigms that interface with the PTB directly can lead to code that is hard to read, maintain, reuse, and debug. Running an experiment in different facilities or organizations further requires the paradigm to work with various setups that differ in the availability of specialized hardware for response collection, triggering, and presentation of auditory stimuli. The Objective Psychophysics Toolbox (o_ptb) provides an intuitive, unified, and clear interface, built on top of the PTB, that enables researchers to write readable, clean, and concise code. In addition to presenting the architecture of the o_ptb, the results of a timing accuracy test are presented. Exactly the same MATLAB code was run on two different systems, one of them using the VPixx system. Both systems showed sub-millisecond accuracy.
Collapse
Affiliation(s)
- Thomas Hartmann
- Centre for Cognitive Neuroscience and Department of Psychology, Paris-Lodron Universität Salzburg, Salzburg, Austria
30
López-Caballero F, Martin-Trias P, Ribas-Prats T, Gorina-Careta N, Bartrés-Faz D, Escera C. Effects of cTBS on the Frequency-Following Response and Other Auditory Evoked Potentials. Front Hum Neurosci 2020; 14:250. [PMID: 32733220 PMCID: PMC7360924 DOI: 10.3389/fnhum.2020.00250] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2019] [Accepted: 06/04/2020] [Indexed: 01/22/2023] Open
Abstract
The frequency-following response (FFR) is an auditory evoked potential (AEP) that follows the periodic characteristics of a sound. Despite being a widely studied biosignal in auditory neuroscience, the neural underpinnings of the FFR are still unclear. Traditionally, the FFR was associated with subcortical activity, but recent evidence suggests cortical contributions that may depend on the stimulus frequency. We combined electroencephalography (EEG) with an inhibitory transcranial magnetic stimulation protocol, continuous theta-burst stimulation (cTBS), to disentangle the cortical contribution to the FFR elicited by stimuli of high and low frequency. We recorded FFRs to the syllable /ba/ at two fundamental frequencies (low: 113 Hz; high: 317 Hz) in healthy participants. FFRs, cortical potentials, and auditory brainstem responses (ABRs) were recorded before and after real and sham cTBS over the right primary auditory cortex. Results showed that cTBS did not produce a significant change in the recorded FFR at either frequency. No effect was observed on the ABR or the cortical potentials, despite the latter's known contributions from the auditory cortex. Possible reasons behind the negative results include compensatory mechanisms from non-targeted areas, intraindividual variability in cTBS effectiveness, and the particular location of our target area, the primary auditory cortex.
Affiliation(s)
- Fran López-Caballero
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain
- Pablo Martin-Trias
- Medical Psychology Unit, Department of Medicine, Faculty of Medicine and Health Sciences, University of Barcelona, Barcelona, Spain
- Teresa Ribas-Prats
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Spain
- Natàlia Gorina-Careta
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Spain
- David Bartrés-Faz
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Medical Psychology Unit, Department of Medicine, Faculty of Medicine and Health Sciences, University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Carles Escera
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Spain
31
Coffey EBJ, Nicol T, White-Schwoch T, Chandrasekaran B, Krizman J, Skoe E, Zatorre RJ, Kraus N. Evolving perspectives on the sources of the frequency-following response. Nat Commun 2019; 10:5036. [PMID: 31695046 PMCID: PMC6834633 DOI: 10.1038/s41467-019-13003-w] [Citation(s) in RCA: 103] [Impact Index Per Article: 20.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2019] [Accepted: 10/14/2019] [Indexed: 11/09/2022] Open
Abstract
The auditory frequency-following response (FFR) is a non-invasive index of the fidelity of sound encoding in the brain, and is used to study the integrity, plasticity, and behavioral relevance of the neural encoding of sound. In this Perspective, we review recent evidence suggesting that, in humans, the FFR arises from multiple cortical and subcortical sources, not just subcortically as previously believed, and we illustrate how the FFR to complex sounds can enhance the wider field of auditory neuroscience. Far from being of use only to study basic auditory processes, the FFR is an uncommonly multifaceted response yielding a wealth of information, with much yet to be tapped.
Affiliation(s)
- Emily B J Coffey
- Department of Psychology, Concordia University, 1455 Boulevard de Maisonneuve Ouest, Montréal, QC, H3G 1M8, Canada.
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada.
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada.
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Bharath Chandrasekaran
- Communication Sciences and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Forbes Tower, 3600 Atwood St, Pittsburgh, PA, 15260, USA
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Erika Skoe
- Department of Speech, Language, and Hearing Sciences, The Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 2 Alethia Drive, Unit 1085, Storrs, CT, 06269, USA
- Robert J Zatorre
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Montreal Neurological Institute, McGill University, 3801 rue Université, Montréal, QC, H3A 2B4, Canada
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Department of Neurobiology, Northwestern University, 2205 Tech Dr., Evanston, IL, 60208, USA
- Department of Otolaryngology, Northwestern University, 420 E Superior St., Chicago, IL, 6011, USA