1
Commuri V, Kulasingham JP, Simon JZ. Cortical responses time-locked to continuous speech in the high-gamma band depend on selective attention. Front Neurosci 2023; 17:1264453. PMID: 38156264; PMCID: PMC10752935; DOI: 10.3389/fnins.2023.1264453.
Abstract
Auditory cortical responses to speech obtained by magnetoencephalography (MEG) show robust tracking of the speaker's fundamental frequency in the high-gamma band (70-200 Hz), but little is currently known about whether such responses depend on the focus of selective attention. In this study, 22 human subjects listened to concurrent, fixed-rate speech from male and female speakers and were asked to selectively attend to one speaker at a time while their neural responses were recorded with MEG. The male speaker's pitch range coincided with the lower range of the high-gamma band, whereas the female speaker's higher pitch range had much less overlap, and only at the upper end of the high-gamma band. Neural responses were analyzed using the temporal response function (TRF) framework. As expected, the responses demonstrate robust speech tracking of the fundamental frequency in the high-gamma band, but only to the male's speech, with a peak latency of ~40 ms. Critically, the response magnitude depends on selective attention: the response to the male speech is significantly greater when male speech is attended than when it is not attended, under acoustically identical conditions. This is a clear demonstration that even very early cortical auditory responses are influenced by top-down cognitive neural processing mechanisms.
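The TRF analysis itself is not reproduced in this listing. As a rough, illustrative sketch of the general approach (not the authors' pipeline), a linear TRF can be estimated by ridge regression of time-lagged copies of a stimulus feature onto the recorded signal; all names, the lag window, and the surrogate data below are assumptions.

```python
import numpy as np

def estimate_trf(stimulus, response, fs, tmin=-0.02, tmax=0.2, alpha=1.0):
    """Estimate a temporal response function (TRF) by ridge regression.

    stimulus, response : 1-D arrays sampled at fs (Hz), equal length.
    tmin, tmax         : lag window in seconds (e.g., -20 ms to 200 ms).
    alpha              : ridge regularization strength.
    Returns (lag_times_in_seconds, trf_weights).
    """
    lags = np.arange(int(round(tmin * fs)), int(round(tmax * fs)) + 1)
    n = len(stimulus)
    X = np.zeros((n, len(lags)))               # time-lagged copies of the stimulus
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stimulus[:n - lag]
        else:
            X[:lag, j] = stimulus[-lag:]
    XtX = X.T @ X + alpha * np.eye(len(lags))  # ridge: (X'X + aI) w = X'y
    w = np.linalg.solve(XtX, X.T @ response)
    return lags / fs, w

# Surrogate demo: a response generated by a known 40-ms-wide kernel near +60 ms.
fs = 1000
rng = np.random.default_rng(0)
stim = rng.standard_normal(10 * fs)            # stand-in for an f0-band feature
kernel = np.zeros(100)
kernel[40:80] = np.hanning(40)                 # true "TRF" peaking near 60 ms
resp = np.convolve(stim, kernel)[: len(stim)] + rng.standard_normal(len(stim))
lag_times, trf = estimate_trf(stim, resp, fs)
print("peak TRF latency ~ %.0f ms" % (1000 * lag_times[np.argmax(np.abs(trf))]))
```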
Affiliation(s)
- Vrishab Commuri
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Department of Biology, University of Maryland, College Park, MD, United States
- Institute for Systems Research, University of Maryland, College Park, MD, United States
2
Commuri V, Kulasingham JP, Simon JZ. Cortical responses time-locked to continuous speech in the high-gamma band depend on selective attention. bioRxiv [Preprint] 2023: 2023.07.20.549567. PMID: 37546895; PMCID: PMC10401961; DOI: 10.1101/2023.07.20.549567.
Abstract
Auditory cortical responses to speech obtained by magnetoencephalography (MEG) show robust tracking of the speaker's fundamental frequency in the high-gamma band (70-200 Hz), but little is currently known about whether such responses depend on the focus of selective attention. In this study, 22 human subjects listened to concurrent, fixed-rate speech from male and female speakers and were asked to selectively attend to one speaker at a time while their neural responses were recorded with MEG. The male speaker's pitch range coincided with the lower range of the high-gamma band, whereas the female speaker's higher pitch range had much less overlap, and only at the upper end of the high-gamma band. Neural responses were analyzed using the temporal response function (TRF) framework. As expected, the responses demonstrate robust speech tracking of the fundamental frequency in the high-gamma band, but only to the male's speech, with a peak latency of approximately 40 ms. Critically, the response magnitude depends on selective attention: the response to the male speech is significantly greater when male speech is attended than when it is not attended, under acoustically identical conditions. This is a clear demonstration that even very early cortical auditory responses are influenced by top-down cognitive neural processing mechanisms.
Affiliation(s)
- Vrishab Commuri
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Jonathan Z. Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Department of Biology, University of Maryland, College Park, MD, United States
- Institute for Systems Research, University of Maryland, College Park, MD, United States
3
Tichko P, Page N, Kim JC, Large EW, Loui P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022; 12:1676. PMID: 36552136; PMCID: PMC9775503; DOI: 10.3390/brainsci12121676.
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings across a sample of younger and older adults. We specifically measured neural entrainment to the level of musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to musical pulse, and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
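For readers unfamiliar with the metric, a generic inter-trial phase-locking value at a single frequency (here, an assumed musical pulse rate) can be computed as the length of the mean unit phase vector across trials; this is a simplified sketch, not necessarily the exact PLV normalization used in the study.

```python
import numpy as np

def plv_at_frequency(epochs, fs, freq):
    """Inter-trial phase-locking value (PLV) at a single frequency.

    epochs : array (n_trials, n_samples) of EEG segments.
    fs     : sampling rate in Hz.
    freq   : frequency of interest (e.g., the detected musical pulse rate).
    """
    n_trials, n_samples = epochs.shape
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    k = np.argmin(np.abs(freqs - freq))          # FFT bin closest to the pulse rate
    spectra = np.fft.rfft(epochs, axis=1)[:, k]  # complex value per trial at that bin
    phases = np.angle(spectra)
    return np.abs(np.mean(np.exp(1j * phases)))  # 0 = random phase, 1 = perfect locking

# Illustrative use: PLV at an assumed 2 Hz pulse and at its 4 Hz harmonic.
rng = np.random.default_rng(1)
fs, n_trials, dur = 250, 60, 4.0
t = np.arange(int(fs * dur)) / fs
epochs = 2.0 * np.sin(2 * np.pi * 2.0 * t) + rng.standard_normal((n_trials, len(t)))
print("PLV @ 2 Hz:", round(plv_at_frequency(epochs, fs, 2.0), 3))
print("PLV @ 4 Hz:", round(plv_at_frequency(epochs, fs, 4.0), 3))
```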
Affiliation(s)
- Parker Tichko
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Nicole Page
- Department of Music, Northeastern University, Boston, MA 02115, USA
- Ji Chul Kim
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Edward W. Large
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Psyche Loui
- Department of Music, Northeastern University, Boston, MA 02115, USA
4
Easwar V, Chung L. The influence of phoneme contexts on adaptation in vowel-evoked envelope following responses. Eur J Neurosci 2022; 56:4572-4582. PMID: 35804282; PMCID: PMC9543495; DOI: 10.1111/ejn.15768.
Abstract
Repeated stimulus presentation leads to neural adaptation and consequent amplitude reduction in vowel-evoked envelope following responses (EFRs), a response that reflects neural activity phase-locked to envelope periodicity. EFRs are elicited by vowels presented in isolation or in the context of other phonemes such as in syllables. While context phonemes could exert some forward influence on vowel-evoked EFRs, they may reduce the degree of adaptation. Here, we evaluated whether the properties of context phonemes between consecutive vowel stimuli influence adaptation. EFRs were elicited by the low-frequency first formant (resolved harmonics) and mid-to-high frequency second and higher formants (unresolved harmonics) of a male-spoken /i/ when the presence, number, and predictability of context phonemes (/s/, /a/, /∫/, /u/) between vowel repetitions varied. Monitored over four iterations of /i/, adaptation was evident only for EFRs elicited by the unresolved harmonics. EFRs elicited by the unresolved harmonics decreased in amplitude by ~16-20 nV (10-17%) after the first presentation of /i/ and remained stable thereafter. EFR adaptation was reduced by the presence of a context phoneme, but the reduction did not change with their number or predictability. The presence of a context phoneme, however, attenuated EFRs by a degree similar to that caused by adaptation (~21-23 nV). Such a trade-off in the short- and long-term influence of context phonemes suggests that the benefit of interleaving EFR-eliciting vowels with other context phonemes depends on whether the use of consonant-vowel syllables is critical to improve the validity of EFR applications.
Affiliation(s)
- Vijayalakshmi Easwar
- Department of Communication Sciences & Disorders, University of Wisconsin-Madison, Madison, USA; Waisman Center, University of Wisconsin-Madison, Madison, USA
- Lauren Chung
- Department of Communication Sciences & Disorders, University of Wisconsin-Madison, Madison, USA; Waisman Center, University of Wisconsin-Madison, Madison, USA
5
Clayson PE, Joshi YB, Thomas ML, Tarasenko M, Bismark A, Sprock J, Nungaray J, Cardoso L, Wynn JK, Swerdlow NR, Light GA. The viability of the frequency following response characteristics for use as biomarkers of cognitive therapeutics in schizophrenia. Schizophr Res 2022; 243:372-382. PMID: 34187732; DOI: 10.1016/j.schres.2021.06.022.
Abstract
Deficits in early auditory information processing contribute to cognitive and psychosocial disability; this has prompted development of interventions that target low-level auditory processing, which may alleviate these disabilities. The frequency following response (FFR) is a constellation of event-related potential and frequency characteristics that reflect the processing of acoustic stimuli at the level of the brainstem and ascending portions of the auditory pathway. While FFR is a promising candidate biomarker of response to auditory-based cognitive training interventions, the psychometric properties of FFR in schizophrenia patients have not been studied. Here we assessed the psychometric reliability and magnitude of group differences across 18 different FFR parameters to determine which of these parameters demonstrate adequate internal consistency. Electroencephalography from 40 schizophrenia patients and 40 nonpsychiatric comparison subjects was recorded during rapid presentation of an auditory speech stimulus (6000 trials). Patients showed normal response amplitudes but longer latencies for most FFR peaks and lower signal-to-noise ratios (SNRs) than healthy subjects. Analysis of amplitude and latency estimates of peaks, however, indicated a need for a substantial increase in task length to obtain internal consistency estimates above 0.80. In contrast, excellent internal consistency (>0.95) was shown for FFR sustained responses. Only SNR scores reflecting the FFR sustained response yielded significant group differences and excellent internal consistency, suggesting that this measure is a viable candidate for use in clinical treatment studies. The present study highlights the use of internal consistency estimates to select FFR characteristics for use in future intervention studies interested in individual differences among patients.
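Internal consistency of this kind is commonly estimated with a split-half approach stepped up by the Spearman-Brown formula; the sketch below is a generic, simplified version with simulated per-trial scores, not the authors' procedure or parameters.

```python
import numpy as np
from scipy.stats import pearsonr

def split_half_reliability(trial_scores):
    """Split-half internal consistency with Spearman-Brown correction.

    trial_scores : array (n_subjects, n_trials); each cell is a per-trial
                   FFR-derived score (e.g., f0 amplitude on that trial).
    """
    odd = trial_scores[:, 1::2].mean(axis=1)    # mean score over odd trials
    even = trial_scores[:, 0::2].mean(axis=1)   # mean score over even trials
    r_half, _ = pearsonr(odd, even)             # correlation across subjects
    return (2 * r_half) / (1 + r_half)          # Spearman-Brown step-up

def trials_needed(r_current, n_current, r_target=0.80):
    """Spearman-Brown prophecy: trials needed to reach a target reliability."""
    factor = (r_target * (1 - r_current)) / (r_current * (1 - r_target))
    return int(np.ceil(factor * n_current))

# Illustrative numbers only.
rng = np.random.default_rng(2)
true_score = rng.normal(size=(40, 1))
scores = true_score + 3.0 * rng.normal(size=(40, 500))   # noisy single-trial scores
rel = split_half_reliability(scores)
print(f"split-half reliability: {rel:.2f}; trials for 0.80: {trials_needed(rel, 500)}")
```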
Affiliation(s)
- Peter E Clayson
- Department of Psychology, University of South Florida, Tampa, FL, USA
- Yash B Joshi
- VISN 22 Mental Illness Research, Education, & Clinical Center (MIRECC), San Diego VA Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Michael L Thomas
- Department of Psychology, Colorado State University, Fort Collins, CO, USA
- Melissa Tarasenko
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA; VA San Diego Healthcare System, USA
- Andrew Bismark
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA; VA San Diego Healthcare System, USA
- Joyce Sprock
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- John Nungaray
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Lauren Cardoso
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Jonathan K Wynn
- Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, CA, USA; Department of Psychiatry and Biobehavioral Sciences, David Geffen School of Medicine, University of California Los Angeles, Los Angeles, CA, USA
- Neal R Swerdlow
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Gregory A Light
- VISN 22 Mental Illness Research, Education, & Clinical Center (MIRECC), San Diego VA Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego, San Diego, CA, USA
6
Cheng FY, Xu C, Gold L, Smith S. Rapid Enhancement of Subcortical Neural Responses to Sine-Wave Speech. Front Neurosci 2022; 15:747303. PMID: 34987356; PMCID: PMC8721138; DOI: 10.3389/fnins.2021.747303.
Abstract
The efferent auditory nervous system may be a potent force in shaping how the brain responds to behaviorally significant sounds. Previous human experiments using the frequency following response (FFR) have shown efferent-induced modulation of subcortical auditory function online and over short- and long-term time scales; however, a contemporary understanding of FFR generation presents new questions about whether previous effects were constrained solely to the auditory subcortex. The present experiment used sine-wave speech (SWS), an acoustically-sparse stimulus in which dynamic pure tones represent speech formant contours, to evoke FFRSWS. Due to the higher stimulus frequencies used in SWS, this approach biased neural responses toward brainstem generators and allowed for three stimuli (/bɔ/, /bu/, and /bo/) to be used to evoke FFRSWS before and after listeners in a training group were made aware that they were hearing a degraded speech stimulus. All SWS stimuli were rapidly perceived as speech when presented with a SWS carrier phrase, and average token identification reached ceiling performance during a perceptual training phase. Compared to a control group which remained naïve throughout the experiment, training group FFRSWS amplitudes were enhanced post-training for each stimulus. Further, linear support vector machine classification of training group FFRSWS significantly improved post-training compared to the control group, indicating that training-induced neural enhancements were sufficient to bolster machine learning classification accuracy. These results suggest that the efferent auditory system may rapidly modulate auditory brainstem representation of sounds depending on their context and perception as non-speech or speech.
Affiliation(s)
- Fan-Yin Cheng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Can Xu
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Lisa Gold
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Spencer Smith
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
7
Clayson PE, Molina JL, Joshi YB, Thomas ML, Sprock J, Nungaray J, Swerdlow NR, Light GA. Evaluation of the frequency following response as a predictive biomarker of response to cognitive training in schizophrenia. Psychiatry Res 2021; 305:114239. PMID: 34673326; DOI: 10.1016/j.psychres.2021.114239.
Abstract
Neurophysiological biomarkers of auditory processing show promise for predicting outcomes following auditory-based targeted cognitive training (TCT) in schizophrenia, but the viability of the frequency following response (FFR) as a biomarker has yet to be examined, despite its ecological and face validity for auditory-based interventions. FFR is an event-related potential (ERP) that reflects early auditory processing. We predicted that schizophrenia patients would show acute- and longer-term FFR malleability in the context of TCT. Patients were randomized to either TCT (n = 30) or treatment as usual (TAU; n = 22), and electroencephalography was recorded during rapid presentation of an auditory speech stimulus before treatment, after one hour of training, and after 30 h of training. Whereas patients in the TCT group did not show changes in FFR after training, amplitude reductions were observed in the TAU group. FFR was positively associated with performance on a measure of single word-in-noise perception in the TCT group, and with a measure of sentence-in-noise perception in both groups. Psychometric reliability analyses of FFR scores indicated high internal consistency but low one-hour and 12-week test-retest reliability. These findings support the dissociation between measures of speech discriminability along the hierarchy of cortical and subcortical early auditory information processing in schizophrenia.
Affiliation(s)
- Peter E Clayson
- Department of Psychology, University of South Florida, Tampa, FL, USA
- Juan L Molina
- VISN 22 Mental Illness Research, Education and Clinical Center (MIRECC), San Diego VA Healthcare System, San Diego, CA, USA
- Yash B Joshi
- VISN 22 Mental Illness Research, Education and Clinical Center (MIRECC), San Diego VA Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Michael L Thomas
- Department of Psychology, Colorado State University, Fort Collins, CO, USA
- Joyce Sprock
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- John Nungaray
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Neal R Swerdlow
- Department of Psychiatry, University of California San Diego, San Diego, CA, USA
- Gregory A Light
- VISN 22 Mental Illness Research, Education and Clinical Center (MIRECC), San Diego VA Healthcare System, San Diego, CA, USA; Department of Psychiatry, University of California San Diego, San Diego, CA, USA
8
Zhang X, Gong Q. Context-dependent Plasticity and Strength of Subcortical Encoding of Musical Sounds Independently Underlie Pitch Discrimination for Music Melodies. Neuroscience 2021; 472:68-89. PMID: 34358631; DOI: 10.1016/j.neuroscience.2021.07.032.
Abstract
Subcortical auditory nuclei contribute to pitch perception, but how subcortical sound encoding is related to pitch processing for music perception remains unclear. Conventionally, enhanced subcortical sound encoding is considered underlying superior pitch discrimination. However, associations between superior auditory perception and the context-dependent plasticity of subcortical sound encoding are also documented. Here, we explored the subcortical neural correlates to music pitch perception by analyzing frequency-following responses (FFRs) to musical sounds presented in a predictable context and a random context. We found that the FFR inter-trial phase-locking (ITPL) was negatively correlated with behavioral performances of discrimination of pitches in music melodies. It was also negatively correlated with the plasticity indices measuring the variability of FFRs to physically identical sounds between the two contexts. The plasticity indices were consistently positively correlated with pitch discrimination performances, suggesting the subcortical context-dependent plasticity underlying music pitch perception. Moreover, the raw FFR spectral strength was not significantly correlated with pitch discrimination performances. However, it was positively correlated with behavioral performances when the FFR ITPL was controlled by partial correlations, suggesting that the strength of subcortical sound encoding underlies music pitch perception. When the spectral strength was controlled by partial correlations, the negative ITPL-behavioral correlations were maintained. Furthermore, the FFR ITPL, the plasticity indices, and the FFR spectral strength were more correlated with pitch than with rhythm discrimination performances. These findings suggest that the context-dependent plasticity and the strength of subcortical encoding of musical sounds are independently and perhaps specifically associated with pitch perception for music melodies.
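The partial correlations described above (relating one FFR measure to behavior while controlling for another) can be computed by correlating residuals after regressing out the covariate; the following is a generic numpy sketch with simulated, assumed variables, not the study's data or code.

```python
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, covariate):
    """Partial correlation between x and y, controlling for one covariate.

    Each argument is a 1-D array across participants (e.g., x = FFR spectral
    strength, y = pitch-discrimination score, covariate = FFR ITPL).
    """
    def residualize(v, c):
        # Residuals of v after ordinary least-squares regression on [1, c].
        design = np.column_stack([np.ones_like(c), c])
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    rx = residualize(np.asarray(x, float), np.asarray(covariate, float))
    ry = residualize(np.asarray(y, float), np.asarray(covariate, float))
    return pearsonr(rx, ry)

# Illustrative data: strength relates to behavior once ITPL is controlled.
rng = np.random.default_rng(3)
itpl = rng.normal(size=30)
strength = 0.5 * itpl + rng.normal(size=30)
behavior = 0.8 * strength - 0.9 * itpl + rng.normal(size=30)
r, p = partial_corr(strength, behavior, itpl)
print(f"partial r = {r:.2f}, p = {p:.3f}")
```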
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; School of Medicine, Shanghai University, Shanghai, China
9
Neural generators of the frequency-following response elicited to stimuli of low and high frequency: A magnetoencephalographic (MEG) study. Neuroimage 2021; 231:117866. PMID: 33592244; DOI: 10.1016/j.neuroimage.2021.117866.
Abstract
The frequency-following response (FFR) to periodic complex sounds has gained recent interest in auditory cognitive neuroscience as it captures with great fidelity the tracking accuracy of the periodic sound features in the ascending auditory system. Seminal studies suggested the FFR as a correlate of subcortical sound encoding, yet recent studies aiming to locate its sources challenged this assumption, demonstrating that FFR receives some contribution from the auditory cortex. Based on frequency-specific phase-locking capabilities along the auditory hierarchy, we hypothesized that FFRs to higher frequencies would receive less cortical contribution than those to lower frequencies, hence supporting a major subcortical involvement for these high frequency sounds. Here, we used a magnetoencephalographic (MEG) approach to trace the neural sources of the FFR elicited in healthy adults (N = 19) to low (89 Hz) and high (333 Hz) frequency sounds. FFRs elicited to the high and low frequency sounds were clearly observable on MEG and comparable to those obtained in simultaneous electroencephalographic recordings. Distributed source modeling analyses revealed midbrain, thalamic, and cortical contributions to FFR, arranged in frequency-specific configurations. Our results showed that the main contribution to the high-frequency sound FFR originated in the inferior colliculus and the medial geniculate body of the thalamus, with no significant cortical contribution. In contrast, the low-frequency sound FFR had a major contribution located in the auditory cortices, and also received contributions originating in the midbrain and thalamic structures. These findings support the multiple generator hypothesis of the FFR and are relevant for our understanding of the neural encoding of sounds along the auditory hierarchy, suggesting a hierarchical organization of periodicity encoding.
10
Wang L, Noordanus E, van Opstal AJ. Estimating multiple latencies in the auditory system from auditory steady-state responses on a single EEG channel. Sci Rep 2021; 11:2150. PMID: 33495484; PMCID: PMC7835249; DOI: 10.1038/s41598-021-81232-5.
Abstract
The latency of the auditory steady-state response (ASSR) may provide valuable information regarding the integrity of the auditory system, as it could potentially reveal the presence of multiple intracerebral sources. To estimate multiple latencies from high-order ASSRs, we propose a novel two-stage procedure that consists of a nonparametric estimation method, called apparent latency from phase coherence (ALPC), followed by a heuristic sequential forward selection algorithm (SFS). Compared with existing methods, ALPC-SFS requires few prior assumptions, and is straightforward to implement for higher-order nonlinear responses to multi-cosine sound complexes with their initial phases set to zero. It systematically evaluates the nonlinear components of the ASSRs by estimating multiple latencies, automatically identifies involved ASSR components, and reports a latency consistency index. To verify the proposed method, we performed simulations for several scenarios: two nonlinear subsystems with different or overlapping outputs. We compared the results from our method with predictions from existing, parametric methods. We also recorded the EEG from ten normal-hearing adults by bilaterally presenting superimposed tones with four frequencies that evoke a unique set of ASSRs. From these ASSRs, two major latencies were found to be stable across subjects on repeated measurement days. The two latencies are dominated by low-frequency (LF) (near 40 Hz, at around 41-52 ms) and high-frequency (HF) (> 80 Hz, at around 21-27 ms) ASSR components. The frontal-central brain region showed longer latencies on LF components, but shorter latencies on HF components, when compared with temporal-lobe regions. In conclusion, the proposed nonparametric ALPC-SFS method, applied to zero-phase, multi-cosine sound complexes is more suitable for evaluating embedded nonlinear systems underlying ASSRs than existing methods. It may therefore be a promising objective measure for hearing performance and auditory cortex (dys)function.
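The core intuition behind apparent latency from phase coherence is that a fixed transmission delay produces a response phase that decreases linearly with frequency, so latency can be read from the slope of unwrapped phase versus frequency across ASSR components. The sketch below illustrates only that basic phase-slope estimate, not the full ALPC-SFS procedure or its component-selection step; the frequencies and phases are invented for the example.

```python
import numpy as np

def phase_slope_latency(freqs_hz, phases_rad):
    """Apparent latency from the slope of response phase vs. frequency.

    freqs_hz   : ASSR component frequencies (Hz).
    phases_rad : measured response phases at those frequencies (radians).
    A pure delay of T seconds gives phase = phase0 - 2*pi*f*T, so
    T = -slope / (2*pi) after unwrapping the phase across components.
    """
    phases = np.unwrap(np.asarray(phases_rad, float))
    slope, intercept = np.polyfit(np.asarray(freqs_hz, float), phases, 1)
    return -slope / (2 * np.pi)

# Illustrative check: components delayed by 25 ms are recovered.
true_latency = 0.025
freqs = np.array([80.0, 84.0, 92.0, 96.0])        # assumed ASSR beat frequencies
phases = 0.3 - 2 * np.pi * freqs * true_latency    # ideal, noise-free phases
print(f"estimated latency: {phase_slope_latency(freqs, phases) * 1000:.1f} ms")
```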
Affiliation(s)
- Lei Wang
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Elisabeth Noordanus
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- A John van Opstal
- Department of Biophysics, Radboud University, Nijmegen, 6525 AJ, The Netherlands
- Donders Centre for Neuroscience, Radboud University, Nijmegen, 6525 AJ, The Netherlands
11
Losorelli S, Kaneshiro B, Musacchia GA, Blevins NH, Fitzgerald MB. Factors influencing classification of frequency following responses to speech and music stimuli. Hear Res 2020; 398:108101. PMID: 33142106; DOI: 10.1016/j.heares.2020.108101.
Abstract
Successful mapping of meaningful labels to sound input requires accurate representation of that sound's acoustic variances in time and spectrum. For some individuals, such as children or those with hearing loss, having an objective measure of the integrity of this representation could be useful. Classification is a promising machine learning approach which can be used to objectively predict a stimulus label from the brain response. This approach has been previously used with auditory evoked potentials (AEP) such as the frequency following response (FFR), but a number of key issues remain unresolved before classification can be translated into clinical practice. Specifically, past efforts at FFR classification have used data from a given subject for both training and testing the classifier. It is also unclear which components of the FFR elicit optimal classification accuracy. To address these issues, we recorded FFRs from 13 adults with normal hearing in response to speech and music stimuli. We compared labeling accuracy of two cross-validation classification approaches using FFR data: (1) a more traditional method combining subject data in both the training and testing set, and (2) a "leave-one-out" approach, in which subject data is classified based on a model built exclusively from the data of other individuals. We also examined classification accuracy on decomposed and time-segmented FFRs. Our results indicate that the accuracy of leave-one-subject-out cross-validation approaches that obtained with the more conventional cross-validation classifications, while allowing a subject's results to be analyzed with respect to normative data pooled from a separate population. In addition, we demonstrate that classification accuracy is highest when the entire FFR is used to train the classifier. Taken together, these efforts contribute key steps toward translation of classification-based machine learning approaches into clinical practice.
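The leave-one-subject-out scheme described here maps naturally onto grouped cross-validation, with subject identity as the grouping variable; the scikit-learn sketch below uses random surrogate features and assumed dimensions rather than the study's FFR data or preprocessing.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Surrogate data: 13 subjects x 6 stimuli x 10 FFR averages per stimulus,
# each summarized by a 50-dimensional feature vector (illustrative only).
rng = np.random.default_rng(4)
n_subjects, n_classes, n_reps, n_features = 13, 6, 10, 50
class_templates = rng.normal(size=(n_classes, n_features))
X, y, groups = [], [], []
for subj in range(n_subjects):
    for label in range(n_classes):
        for _ in range(n_reps):
            X.append(class_templates[label] + rng.normal(scale=2.0, size=n_features))
            y.append(label)
            groups.append(subj)            # group = subject, for leave-one-subject-out
X, y, groups = np.array(X), np.array(y), np.array(groups)

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
logo = LeaveOneGroupOut()                  # each fold holds out one whole subject
scores = cross_val_score(clf, X, y, cv=logo, groups=groups)
print(f"leave-one-subject-out accuracy: {scores.mean():.2f} (chance = {1/n_classes:.2f})")
```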
Affiliation(s)
- Steven Losorelli
- Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA, USA
- Blair Kaneshiro
- Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA, USA
- Gabriella A Musacchia
- Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA, USA; Department of Audiology, University of the Pacific, San Francisco, CA, USA
- Nikolas H Blevins
- Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA, USA
- Matthew B Fitzgerald
- Department of Otolaryngology Head and Neck Surgery, Stanford University School of Medicine, Palo Alto, CA, USA
12
Musical expertise facilitates statistical learning of rhythm and the perceptive uncertainty: A cross-cultural study. Neuropsychologia 2020; 146:107553. DOI: 10.1016/j.neuropsychologia.2020.107553.
13
Tecoulesco L, Skoe E, Naigles LR. Phonetic discrimination mediates the relationship between auditory brainstem response stability and syntactic performance. Brain Lang 2020; 208:104810. PMID: 32683226; DOI: 10.1016/j.bandl.2020.104810.
Abstract
Syntactic, lexical, and phonological/phonetic knowledge are vital aspects of macro level language ability. Prior research has predominantly focused on environmental or cortical sources of individual differences in these areas; however, a growing literature suggests an auditory brainstem contribution to language performance in both typically developing (TD) populations and children with autism spectrum disorder (ASD). This study investigates whether one aspect of auditory brainstem responses (ABRs), neural response stability, which is a metric reflecting trial-by-trial consistency in the neural encoding of sound, can predict syntactic, lexical, and phonetic performance in TD and ASD school-aged children. Pooling across children with ASD and TD, results showed that higher neural stability in response to the syllable /da/ was associated with better phonetic discrimination, and with better syntactic performance on a standardized measure. Furthermore, phonetic discrimination was a successful mediator of the relationship between neural stability and syntactic performance. This study supports the growing body of literature that stable subcortical neural encoding of sound is important for successful language performance.
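In its simplest form, the mediation reported above can be tested with two regressions and a Sobel test of the indirect path; the sketch below uses simulated scores under assumed variable names and is not the authors' analysis (which may have used a different mediation procedure, e.g., bootstrapping).

```python
import numpy as np
from scipy import stats

def sobel_mediation(x, m, y):
    """Simple mediation test: x -> m -> y (Sobel z for the indirect effect a*b).

    x : predictor (e.g., ABR neural response stability)
    m : mediator  (e.g., phonetic discrimination score)
    y : outcome   (e.g., syntactic performance)
    """
    a = stats.linregress(x, m)            # path a: x predicts the mediator
    # Path b: effect of m on y controlling for x (multiple regression via lstsq).
    X = np.column_stack([np.ones_like(x), x, m])
    beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(y) - X.shape[1]
    sigma2 = res[0] / dof
    cov_b = sigma2 * np.linalg.inv(X.T @ X)
    b, se_b = beta[2], np.sqrt(cov_b[2, 2])
    sobel_z = (a.slope * b) / np.sqrt(b**2 * a.stderr**2 + a.slope**2 * se_b**2)
    p = 2 * (1 - stats.norm.cdf(abs(sobel_z)))
    return a.slope * b, sobel_z, p

# Illustrative, simulated scores only.
rng = np.random.default_rng(5)
stability = rng.normal(size=60)
phonetic = 0.6 * stability + rng.normal(scale=0.8, size=60)
syntax = 0.7 * phonetic + rng.normal(scale=0.8, size=60)
indirect, z, p = sobel_mediation(stability, phonetic, syntax)
print(f"indirect effect = {indirect:.2f}, Sobel z = {z:.2f}, p = {p:.4f}")
```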
Affiliation(s)
- Lisa Tecoulesco
- University of Connecticut Psychological Sciences, United States
- Erika Skoe
- University of Connecticut, Speech, Language and Hearing Sciences, United States
14
López-Caballero F, Martin-Trias P, Ribas-Prats T, Gorina-Careta N, Bartrés-Faz D, Escera C. Effects of cTBS on the Frequency-Following Response and Other Auditory Evoked Potentials. Front Hum Neurosci 2020; 14:250. PMID: 32733220; PMCID: PMC7360924; DOI: 10.3389/fnhum.2020.00250.
Abstract
The frequency-following response (FFR) is an auditory evoked potential (AEP) that follows the periodic characteristics of a sound. Despite being a widely studied biosignal in auditory neuroscience, the neural underpinnings of the FFR are still unclear. Traditionally, FFR was associated with subcortical activity, but recent evidence suggested cortical contributions which may be dependent on the stimulus frequency. We combined electroencephalography (EEG) with an inhibitory transcranial magnetic stimulation protocol, the continuous theta burst stimulation (cTBS), to disentangle the cortical contribution to the FFR elicited to stimuli of high and low frequency. We recorded FFR to the syllable /ba/ at two fundamental frequencies (Low: 113 Hz; High: 317 Hz) in healthy participants. FFR, cortical potentials, and auditory brainstem response (ABR) were recorded before and after real and sham cTBS in the right primary auditory cortex. Results showed that cTBS did not produce a significant change in the recorded FFR at either frequency. No effect was observed on the ABR or the cortical potentials, despite the known contributions of the auditory cortex to the latter. Possible reasons behind the negative results include compensatory mechanisms from the non-targeted areas, intraindividual variability of the cTBS effectiveness, and the particular location of our target area, the primary auditory cortex.
Affiliation(s)
- Fran López-Caballero
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain
- Pablo Martin-Trias
- Medical Psychology Unit, Department of Medicine, Faculty of Medicine and Health Sciences, University of Barcelona, Barcelona, Spain
- Teresa Ribas-Prats
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Spain
- Natàlia Gorina-Careta
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Spain
- David Bartrés-Faz
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Medical Psychology Unit, Department of Medicine, Faculty of Medicine and Health Sciences, University of Barcelona, Barcelona, Spain; Institut d'Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
- Carles Escera
- Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Barcelona, Spain; Institut de Recerca Sant Joan de Déu (IRSJD), Barcelona, Spain
15
Riecke L, Marianu IA, De Martino F. Effect of Auditory Predictability on the Human Peripheral Auditory System. Front Neurosci 2020; 14:362. PMID: 32351361; PMCID: PMC7174672; DOI: 10.3389/fnins.2020.00362.
Abstract
Auditory perception is facilitated by prior knowledge about the statistics of the acoustic environment. Predictions about upcoming auditory stimuli are processed at various stages along the human auditory pathway, including the cortex and midbrain. Whether such auditory predictions are processed also at hierarchically lower stages-in the peripheral auditory system-is unclear. To address this question, we assessed outer hair cell (OHC) activity in response to isochronous tone sequences and varied the predictability and behavioral relevance of the individual tones (by manipulating tone-to-tone probabilities and the human participants' task, respectively). We found that predictability alters the amplitude of distortion-product otoacoustic emissions (DPOAEs, a measure of OHC activity) in a manner that depends on the behavioral relevance of the tones. Simultaneously recorded cortical responses showed a significant effect of both predictability and behavioral relevance of the tones, indicating that their experimental manipulations were effective in central auditory processing stages. Our results provide evidence for a top-down effect on the processing of auditory predictability in the human peripheral auditory system, in line with previous studies showing peripheral effects of auditory attention.
Affiliation(s)
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Irina-Andreea Marianu
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, MN, United States
16
Font-Alaminos M, Cornella M, Costa-Faidella J, Hervás A, Leung S, Rueda I, Escera C. Increased subcortical neural responses to repeating auditory stimulation in children with autism spectrum disorder. Biol Psychol 2020; 149:107807. DOI: 10.1016/j.biopsycho.2019.107807.
17
Coffey EBJ, Nicol T, White-Schwoch T, Chandrasekaran B, Krizman J, Skoe E, Zatorre RJ, Kraus N. Evolving perspectives on the sources of the frequency-following response. Nat Commun 2019; 10:5036. PMID: 31695046; PMCID: PMC6834633; DOI: 10.1038/s41467-019-13003-w.
Abstract
The auditory frequency-following response (FFR) is a non-invasive index of the fidelity of sound encoding in the brain, and is used to study the integrity, plasticity, and behavioral relevance of the neural encoding of sound. In this Perspective, we review recent evidence suggesting that, in humans, the FFR arises from multiple cortical and subcortical sources, not just subcortically as previously believed, and we illustrate how the FFR to complex sounds can enhance the wider field of auditory neuroscience. Far from being of use only to study basic auditory processes, the FFR is an uncommonly multifaceted response yielding a wealth of information, with much yet to be tapped.
Affiliation(s)
- Emily B J Coffey
- Department of Psychology, Concordia University, 1455 Boulevard de Maisonneuve Ouest, Montréal, QC, H3G 1M8, Canada
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Bharath Chandrasekaran
- Communication Sciences and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Forbes Tower, 3600 Atwood St, Pittsburgh, PA, 15260, USA
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Erika Skoe
- Department of Speech, Language, and Hearing Sciences, The Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 2 Alethia Drive, Unit 1085, Storrs, CT, 06269, USA
- Robert J Zatorre
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Montreal Neurological Institute, McGill University, 3801 rue Université, Montréal, QC, H3A 2B4, Canada
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Department of Neurobiology, Northwestern University, 2205 Tech Dr., Evanston, IL, 60208, USA
- Department of Otolaryngology, Northwestern University, 420 E Superior St., Chicago, IL, 60611, USA
18
Ross B, Tremblay KL, Alain C. Simultaneous EEG and MEG recordings reveal vocal pitch elicited cortical gamma oscillations in young and older adults. Neuroimage 2019; 204:116253. PMID: 31600592; DOI: 10.1016/j.neuroimage.2019.116253.
Abstract
The frequency-following response with origin in the auditory brainstem represents the pitch contour of voice and can be recorded with electrodes from the scalp. MEG studies also revealed a cortical contribution to the high gamma oscillations at the fundamental frequency (f0) of a vowel stimulus. Therefore, studying the cortical component of the frequency-following response could provide insights into how pitch information is encoded at the cortical level. Comparing how aging affects the different responses may help to uncover the neural mechanisms underlying speech understanding deficits in older age. We simultaneously recorded EEG and MEG responses to the syllable /ba/. MEG beamformer analysis localized sources in bilateral auditory cortices and the midbrain. Time-frequency analysis showed a faithful representation of the pitch contour between 106 Hz and 138 Hz in the cortical activity. A cross-correlation revealed a latency of 20 ms. Furthermore, stimulus onsets elicited cortical 40-Hz responses. Both the 40-Hz and the f0 response amplitudes increased in older age and were larger in the right hemisphere. The effects of aging and laterality of the f0 response were evident in the MEG only, suggesting that both effects were characteristics of the cortical response. After comparing f0 and N1 responses in EEG and MEG, we estimated that approximately one-third of the scalp-recorded f0 response could be cortical in origin. We attributed the significance of the cortical f0 response to the precise timing of cortical neurons that serve as a time-sensitive code for pitch.
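The ~20 ms cortical lag was obtained by cross-correlating the stimulus pitch contour with the source waveform; a generic cross-correlation latency estimate on surrogate signals is sketched below (the sampling rate, pitch values, and delay are assumptions, not the authors' MEG pipeline).

```python
import numpy as np

def xcorr_latency(stimulus, response, fs, max_lag_s=0.05):
    """Latency (s) at which the response best correlates with the stimulus.

    Only positive lags up to max_lag_s are searched (response follows stimulus).
    """
    stimulus = (stimulus - stimulus.mean()) / stimulus.std()
    response = (response - response.mean()) / response.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(0, max_lag + 1)
    corr = [np.dot(stimulus[: len(stimulus) - lag], response[lag:]) for lag in lags]
    return lags[int(np.argmax(corr))] / fs

# Surrogate check: a 115 Hz "pitch" carrier delayed by 20 ms is recovered.
fs = 2000
t = np.arange(int(0.5 * fs)) / fs
f0 = 115 + 10 * np.sin(2 * np.pi * 3 * t)            # slowly varying pitch contour
stim = np.sin(2 * np.pi * np.cumsum(f0) / fs)
delay = int(0.020 * fs)
resp = np.roll(stim, delay)
resp[:delay] = 0.0
print(f"estimated latency: {1000 * xcorr_latency(stim, resp, fs):.1f} ms")
```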
Affiliation(s)
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Ontario, Canada
- Kelly L Tremblay
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Claude Alain
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Ontario, Canada
19
Krizman J, Kraus N. Analyzing the FFR: A tutorial for decoding the richness of auditory function. Hear Res 2019; 382:107779. PMID: 31505395; PMCID: PMC6778514; DOI: 10.1016/j.heares.2019.107779.
Abstract
The frequency-following response, or FFR, is a neurophysiological response to sound that precisely reflects the ongoing dynamics of sound. It can be used to study the integrity and malleability of neural encoding of sound across the lifespan. Sound processing in the brain can be impaired with pathology and enhanced through expertise. The FFR can index linguistic deprivation, autism, concussion, and reading impairment, and can reflect the impact of enrichment with short-term training, bilingualism, and musicianship. Because of this vast potential, interest in the FFR has grown considerably in the decade since our first tutorial. Despite its widespread adoption, there remains a gap in the current knowledge of its analytical potential. This tutorial aims to bridge this gap. Using recording methods we have employed for the last 20+ years, we have explored many analysis strategies. In this tutorial, we review what we have learned and what we think constitutes the most effective ways of capturing what the FFR can tell us. The tutorial covers FFR components (timing, fundamental frequency, harmonics) and factors that influence FFR (stimulus polarity, response averaging, and stimulus presentation/recording jitter). The spotlight is on FFR analyses, including ways to analyze FFR timing (peaks, autocorrelation, phase consistency, cross-phaseogram), magnitude (RMS, SNR, FFT), and fidelity (stimulus-response correlations, response-to-response correlations and response consistency). The wealth of information contained within an FFR recording brings us closer to understanding how the brain reconstructs our sonic world.
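As an example of the magnitude measures listed (FFT, SNR), a common recipe is to take the spectrum of the steady-state portion of the averaged FFR, read the amplitude at f0, and express SNR relative to neighboring non-signal bins; the sketch below is a generic version with assumed bandwidths and surrogate data, not the tutorial's own scripts.

```python
import numpy as np

def ffr_f0_snr(avg_ffr, fs, f0, signal_bw=5.0, noise_bw=60.0):
    """Spectral amplitude at f0 and its SNR versus neighboring bins.

    avg_ffr   : averaged FFR waveform (steady-state portion), 1-D array.
    fs        : sampling rate (Hz).
    f0        : fundamental frequency of the stimulus (Hz).
    signal_bw : half-width (Hz) of the band treated as signal around f0.
    noise_bw  : half-width (Hz) of the surrounding band used as the noise floor.
    """
    spectrum = np.abs(np.fft.rfft(avg_ffr * np.hanning(len(avg_ffr))))
    freqs = np.fft.rfftfreq(len(avg_ffr), d=1.0 / fs)
    in_signal = np.abs(freqs - f0) <= signal_bw
    in_noise = (np.abs(freqs - f0) <= noise_bw) & ~in_signal
    signal_amp = spectrum[in_signal].max()
    noise_amp = spectrum[in_noise].mean()
    return signal_amp, signal_amp / noise_amp

# Illustrative: a 100 Hz FFR-like component buried in noise.
rng = np.random.default_rng(6)
fs, dur, f0 = 8000, 0.2, 100.0
t = np.arange(int(fs * dur)) / fs
ffr = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.standard_normal(len(t))
amp, snr = ffr_f0_snr(ffr, fs, f0)
print(f"f0 amplitude = {amp:.1f} (a.u.), SNR = {snr:.1f}")
```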
Affiliation(s)
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA. https://www.brainvolts.northwestern.edu
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, 60208, USA; Department of Neurobiology, Northwestern University, Evanston, IL, 60208, USA
20
Xu N, Luo L, Wang Q, Li L. Binaural unmasking of the accuracy of envelope-signal representation in rat auditory cortex but not auditory midbrain. Hear Res 2019; 377:224-233. PMID: 30991272; DOI: 10.1016/j.heares.2019.04.003.
Abstract
Accurate neural representations of acoustic signals under noisy conditions are critical for animals' survival. Detecting signal against background noise can be improved by binaural hearing particularly when an interaural-time-difference (ITD) disparity is introduced between the signal and the noise, a phenomenon known as binaural unmasking. Previous studies have mainly focused on the binaural unmasking effect on response magnitudes, and it is not clear whether binaural unmasking affects the accuracy of central representations of target acoustic signals and the relative contributions of different central auditory structures to this accuracy. Frequency following responses (FFRs), which are sustained phase-locked neural activities, can be used for measuring the accuracy of the representation of signals. Using intracranial recordings of local field potentials, this study aimed to assess whether the binaural unmasking effects include an improvement of the accuracy of neural representations of sound-envelope signals in the rat IC and/or auditory cortex (AC). The results showed that (1) when a narrow-band noise was presented binaurally, the stimulus-response (S-R) coherence of the FFRs to the envelope (FFRenvelope) of the narrow-band noise recorded in the IC was higher than that recorded in the AC. (2) Presenting a broad-band masking noise caused a larger reduction of the S-R coherence for FFRenvelope in the IC than that in the AC. (3) Introducing an ITD disparity between the narrow-band signal noise and the broad-band masking noise did not affect the IC S-R coherence, but enhanced both the AC S-R coherence and the coherence between the IC FFRenvelope and AC FFRenvelope. Thus, although the accuracy of representing envelope signals in the AC is lower than that in the IC, it can be binaurally unmasked, indicating a binaural-unmasking mechanism that is formed during the signal transmission from the IC to the AC.
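Stimulus-response coherence of the kind used here can be computed as the magnitude-squared coherence between the sound envelope and the recorded field potential; the scipy sketch below uses surrogate signals and assumed parameters rather than the study's intracranial recordings.

```python
import numpy as np
from scipy.signal import coherence, hilbert

fs = 1000
rng = np.random.default_rng(7)
t = np.arange(10 * fs) / fs

# Surrogate "narrow-band" stimulus and its slow envelope.
carrier = np.sin(2 * np.pi * 60 * t)
env = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t + rng.uniform(0, 2 * np.pi))
stimulus = carrier * env
stim_envelope = np.abs(hilbert(stimulus))

# Surrogate neural response: envelope-following activity plus background noise.
lfp = 0.6 * stim_envelope + 1.5 * rng.standard_normal(len(t))

# Magnitude-squared coherence between stimulus envelope and response (S-R coherence).
freqs, coh = coherence(stim_envelope, lfp, fs=fs, nperseg=1024)
band = (freqs >= 2) & (freqs <= 8)
print(f"mean S-R coherence in the envelope band (2-8 Hz): {coh[band].mean():.2f}")
```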
Affiliation(s)
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, 100093, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Beijing Institute for Brain Disorders, Beijing, 100096, China
21
Carbajal GV, Malmierca MS. The Neuronal Basis of Predictive Coding Along the Auditory Pathway: From the Subcortical Roots to Cortical Deviance Detection. Trends Hear 2019; 22:2331216518784822. PMID: 30022729; PMCID: PMC6053868; DOI: 10.1177/2331216518784822.
Abstract
In this review, we attempt to integrate the empirical evidence regarding stimulus-specific adaptation (SSA) and mismatch negativity (MMN) under a predictive coding perspective (also known as Bayesian or hierarchical-inference model). We propose a renewed methodology for SSA study, which enables a further decomposition of deviance detection into repetition suppression and prediction error, thanks to the use of two controls previously introduced in MMN research: the many-standards and the cascade sequences. Focusing on data obtained with cellular recordings, we explain how deviance detection and prediction error are generated throughout hierarchical levels of processing, following two vectors of increasing computational complexity and abstraction along the auditory neuraxis: from subcortical toward cortical stations and from lemniscal toward nonlemniscal divisions. Then, we delve into the particular characteristics and contributions of subcortical and cortical structures to this generative mechanism of hierarchical inference, analyzing what is known about the role of neuromodulation and local microcircuitry in the emergence of mismatch signals. Finally, we describe how SSA and MMN occur within a similar time frame and at similar cortical locations, and how both are affected by the manipulation of N-methyl-D-aspartate receptors. We conclude that there is enough empirical evidence to consider SSA and MMN, respectively, as the microscopic and macroscopic manifestations of the same physiological mechanism of deviance detection in the auditory cortex. Hence, the development of a common theoretical framework for SSA and MMN is all the more recommendable for future studies. In this regard, we suggest a shared nomenclature based on the predictive coding interpretation of deviance detection.
Affiliation(s)
- Guillermo V Carbajal
- Auditory Neuroscience Laboratory (Lab 1), Institute of Neuroscience of Castile and León, University of Salamanca, Salamanca, Spain; Salamanca Institute for Biomedical Research, Spain
- Manuel S Malmierca
- Auditory Neuroscience Laboratory (Lab 1), Institute of Neuroscience of Castile and León, University of Salamanca, Salamanca, Spain; Salamanca Institute for Biomedical Research, Spain; Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Spain
22
Jacobi I, Sheikh Rashid M, de Laat JAPM, Dreschler WA. Age Dependence of Thresholds for Speech in Noise in Normal-Hearing Adolescents. Trends Hear 2019; 21:2331216517743641. PMID: 29212433; PMCID: PMC5724638; DOI: 10.1177/2331216517743641.
Abstract
Previously found effects of age on speech reception thresholds in noise in adolescents, as measured by an online screening survey, require further study in a well-controlled teenage sample. Speech reception thresholds (SRT) of 72 normal-hearing adolescent students were analyzed by means of the online speech-in-noise screening tool Earcheck (in Dutch: Oorcheck). Screening was performed at school and included pure-tone audiometry to ensure normal-hearing thresholds. The students' ages ranged from 12 to 17 years. A group of young adults was included as a control group. Data were controlled for effects of gender and level of education. SRT scores within the controlled teenage sample revealed an effect of age on the order of an improvement of −0.2 dB per year. Effects of level of education and gender were not significant. Hearing screening tools that are based on SRT for speech in noise should control for an effect of age when assessing adolescents. Based on the present data, a correction factor of −0.2 dB per year between the ages of 12 and 17 is proposed. The proposed age-corrected SRT cut-off scores need to be evaluated in a larger sample including hearing-impaired adolescents.
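The proposed correction is simple arithmetic: relax the adult-based cut-off by 0.2 dB for every year a listener is younger than the oldest adolescent age. A minimal sketch follows; the reference cut-off and reference age are placeholders, not values from the paper.

```python
def age_corrected_srt_cutoff(age_years, adult_cutoff_db=-5.5,
                             reference_age=17, slope_db_per_year=0.2):
    """Relax an SRT screening cut-off for younger adolescents.

    SRTs improve by about 0.2 dB per year between ages 12 and 17, so the
    cut-off is shifted by +0.2 dB for each year below the reference age.
    adult_cutoff_db and reference_age are placeholder values for illustration.
    """
    age = min(max(age_years, 12), reference_age)   # apply only within 12-17 years
    return adult_cutoff_db + slope_db_per_year * (reference_age - age)

for age in (12, 14, 17):
    print(f"age {age}: cut-off {age_corrected_srt_cutoff(age):+.1f} dB SNR")
```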
Collapse
Affiliation(s)
- Irene Jacobi
- Department of Clinical and Experimental Audiology, Academic Medical Centre, Amsterdam, The Netherlands
| | - Marya Sheikh Rashid
- Department of Clinical and Experimental Audiology, Academic Medical Centre, Amsterdam, The Netherlands
| | - Jan A P M de Laat
- Department of Audiology, Leiden University Medical Centre, Leiden, The Netherlands
| | - Wouter A Dreschler
- Department of Clinical and Experimental Audiology, Academic Medical Centre, Amsterdam, The Netherlands
| |
Collapse
|
23
|
Malmierca MS, Niño-Aguillón BE, Nieto-Diego J, Porteros Á, Pérez-González D, Escera C. Pattern-sensitive neurons reveal encoding of complex auditory regularities in the rat inferior colliculus. Neuroimage 2019; 184:889-900. [DOI: 10.1016/j.neuroimage.2018.10.012] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2018] [Revised: 09/20/2018] [Accepted: 10/04/2018] [Indexed: 10/28/2022] Open
|
24
|
Tracing the Trajectory of Sensory Plasticity across Different Stages of Speech Learning in Adulthood. Curr Biol 2018; 28:1419-1427.e4. [PMID: 29681473 DOI: 10.1016/j.cub.2018.03.026] [Citation(s) in RCA: 47] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/23/2017] [Revised: 01/17/2018] [Accepted: 03/14/2018] [Indexed: 12/11/2022]
Abstract
Although challenging, adults can learn non-native phonetic contrasts with extensive training [1, 2], indicative of perceptual learning beyond an early sensitivity period [3, 4]. Training can alter low-level sensory encoding of newly acquired speech sound patterns [5]; however, the time-course, behavioral relevance, and long-term retention of such sensory plasticity are unclear. Some theories argue that sensory plasticity underlying signal enhancement is immediate and critical to perceptual learning [6, 7]. Others, like the reverse hierarchy theory (RHT), posit a slower time-course for sensory plasticity [8]. RHT proposes that higher-level categorical representations guide immediate, novice learning, while lower-level sensory changes do not emerge until expert stages of learning [9]. We trained 20 English-speaking adults to categorize a non-native phonetic contrast (Mandarin lexical tones) using a criterion-dependent sound-to-category training paradigm. Sensory and perceptual indices were assayed across operationally defined learning phases (novice, experienced, over-trained, and 8-week retention) by measuring the frequency-following response, a neurophonic potential that reflects the fidelity of sensory encoding, and the perceptual identification of a tone continuum. Our results demonstrate that while robust changes in sensory encoding and perceptual identification of Mandarin tones emerged with training and were retained, such changes followed different timescales. Sensory changes emerged, and related to behavioral performance, only when participants were over-trained. In contrast, changes in perceptual identification reflecting improvement in the categorical percept emerged relatively earlier. Individual differences in perceptual identification, and not sensory encoding, related to faster learning. Our findings support the RHT: sensory plasticity accompanies, rather than drives, expert levels of non-native speech learning.
Collapse
|
25
|
Bidelman GM. Sonification of scalp-recorded frequency-following responses (FFRs) offers improved response detection over conventional statistical metrics. J Neurosci Methods 2018; 293:59-66. [DOI: 10.1016/j.jneumeth.2017.09.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2017] [Revised: 08/15/2017] [Accepted: 09/12/2017] [Indexed: 11/30/2022]
|
26
|
Paraskevopoulos E, Chalas N, Bamidis P. Functional connectivity of the cortical network supporting statistical learning in musicians and non-musicians: an MEG study. Sci Rep 2017; 7:16268. [PMID: 29176557 PMCID: PMC5701139 DOI: 10.1038/s41598-017-16592-y] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/03/2017] [Accepted: 11/14/2017] [Indexed: 01/18/2023] Open
Abstract
Statistical learning is a cognitive process of great importance for the detection and representation of environmental regularities. Complex cognitive processes such as statistical learning usually emerge as a result of the activation of widespread cortical areas functioning in dynamic networks. The present study investigated the cortical large-scale network supporting statistical learning of tone sequences in humans. The reorganization of this network related to musical expertise was assessed via a cross-sectional comparison of a group of musicians to a group of non-musicians. The cortical responses to a statistical learning paradigm incorporating an oddball approach were measured via magnetoencephalographic (MEG) recordings. Large-scale connectivity of the cortical activity was calculated via a statistical comparison of the estimated transfer entropy in the sources' activity. Results revealed the functional architecture of the network supporting statistical learning, highlighting the prominent role of informational processing pathways that bilaterally connect superior temporal and intraparietal sources with the left IFG. Musical expertise is related to extensive reorganization of this network, as the group of musicians showed a network comprising more widespread and distributed cortical areas as well as enhanced global efficiency and increased contribution of additional temporal and frontal sources in the information processing pathway.
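Connectivity here is estimated by statistically comparing transfer entropy between source time courses. As a rough illustration of the quantity involved (not the authors' MEG pipeline, which uses dedicated estimators and statistics), a minimal histogram-based transfer entropy estimate might look like this:

```python
import numpy as np

def transfer_entropy(x, y, bins=8):
    """Coarse histogram estimate of transfer entropy TE(x -> y), in bits:
    sum of p(y_t+1, y_t, x_t) * log2[ p(y_t+1 | y_t, x_t) / p(y_t+1 | y_t) ].
    """
    x_t, y_t, y_next = x[:-1], y[:-1], y[1:]

    def discretize(v):  # map amplitudes onto `bins` histogram bins
        edges = np.histogram_bin_edges(v, bins=bins)
        return np.digitize(v, edges[1:-1])

    xd, yd, y1d = discretize(x_t), discretize(y_t), discretize(y_next)
    joint = np.zeros((bins, bins, bins))            # p(y_t+1, y_t, x_t)
    for a, b, c in zip(y1d, yd, xd):
        joint[a, b, c] += 1
    joint /= joint.sum()
    p_yy = joint.sum(axis=2)                        # p(y_t+1, y_t)
    p_y = p_yy.sum(axis=0)                          # p(y_t)
    p_yx = joint.sum(axis=0)                        # p(y_t, x_t)
    te = 0.0
    for a in range(bins):
        for b in range(bins):
            for c in range(bins):
                p = joint[a, b, c]
                if p > 0:
                    te += p * np.log2(p * p_y[b] / (p_yy[a, b] * p_yx[b, c]))
    return te

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)   # y is driven by past x
print(transfer_entropy(x, y), transfer_entropy(y, x))  # the first should be larger
```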
Collapse
Affiliation(s)
- Evangelos Paraskevopoulos
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, P.C. 54124, Thessaloniki, Greece.
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, P.C. D-48149, Münster, Germany.
| | - Nikolas Chalas
- School of Biology, Faculty of Science, Aristotle University of Thessaloniki, P.C. 54124, Thessaloniki, Greece
| | - Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, P.C. 54124, Thessaloniki, Greece
| |
Collapse
|
27
|
Elmer S, Hausheer M, Albrecht J, Kühnis J. Human Brainstem Exhibits higher Sensitivity and Specificity than Auditory-Related Cortex to Short-Term Phonetic Discrimination Learning. Sci Rep 2017; 7:7455. [PMID: 28785043 PMCID: PMC5547112 DOI: 10.1038/s41598-017-07426-y] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2017] [Accepted: 06/28/2017] [Indexed: 01/09/2023] Open
Abstract
Phonetic discrimination learning is an active perceptual process that operates under the influence of cognitive control mechanisms by increasing the sensitivity of the auditory system to the trained stimulus attributes. It is assumed that the auditory cortex and the brainstem interact in order to refine how sounds are transcribed into neural codes. Here, we evaluated whether these two computational entities are prone to short-term functional changes, whether there is a chronological difference in malleability, and whether short-term training suffices to alter reciprocal interactions. We performed repeated cortical (i.e., mismatch negativity responses, MMN) and subcortical (i.e., frequency-following response, FFR) EEG measurements in two groups of participants who underwent one hour of phonetic discrimination training or were passively exposed to the same stimulus material. The training group showed a distinctive brainstem energy reduction in the trained frequency range (i.e., the first formant), whereas the passive group did not show any response modulation. Notably, brainstem signal change correlated with behavioral improvement during training, indicating a close relationship between behavior and underlying brainstem physiology. Since we did not reveal group differences in MMN responses, the results point to specific short-term brainstem changes that precede functional alterations in the auditory cortex.
Collapse
Affiliation(s)
- Stefan Elmer
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland.
| | - Marcela Hausheer
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
| | - Joëlle Albrecht
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
| | - Jürg Kühnis
- Auditory Research Group Zurich (ARGZ), Division Neuropsychology, Institute of Psychology, University of Zurich, Zurich, Switzerland
| |
Collapse
|
28
|
Involvement of the Serotonin Transporter Gene in Accurate Subcortical Speech Encoding. J Neurosci 2017; 36:10782-10790. [PMID: 27798133 DOI: 10.1523/jneurosci.1595-16.2016] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2016] [Accepted: 08/27/2016] [Indexed: 11/21/2022] Open
Abstract
A flourishing line of evidence has highlighted the encoding of speech sounds in the subcortical auditory system as being shaped by acoustic, linguistic, and musical experience and training. While the heritability of auditory speech as well as nonspeech processing has been suggested, the genetic determinants of subcortical speech processing have not yet been uncovered. Here, we postulated that the serotonin transporter-linked polymorphic region (5-HTTLPR), a common functional polymorphism located in the promoter region of the serotonin transporter gene (SLC6A4), is implicated in speech encoding in the human subcortical auditory pathway. Serotonin has been shown to be essential for modulating the brain response to sound both cortically and subcortically, yet the genetic factors regulating this modulation with respect to speech sounds have not been identified. We recorded the frequency following response, a biomarker of the neural tracking of speech sounds in the subcortical auditory pathway, and cortical evoked potentials in 58 participants elicited by the syllable /ba/, which was presented >2000 times. Participants with low serotonin transporter expression had higher signal-to-noise ratios as well as a higher pitch strength representation of the periodic part of the syllable than participants with medium to high expression, possibly by tuning synaptic activity to the stimulus features and hence suppressing noise more efficiently. These results implicate the 5-HTTLPR in subcortical auditory speech encoding and add an important, genetically determined layer to the factors shaping the human subcortical response to speech sounds. SIGNIFICANCE STATEMENT The accurate encoding of speech sounds in the subcortical auditory nervous system is of paramount relevance for human communication, and it has been shown to be altered in different disorders of speech and auditory processing. Importantly, this encoding is plastic and can therefore be enhanced by language and music experience. Whether genetic factors play a role in speech encoding at the subcortical level remains unresolved. Here we show that a common polymorphism in the serotonin transporter gene relates to accurate and robust neural tracking of speech stimuli in the subcortical auditory pathway. This indicates that serotonin transporter expression, eventually in combination with other polymorphisms, delimits the extent to which lifetime experience shapes the subcortical encoding of speech.
Collapse
|
29
|
Neural representations of concurrent sounds with overlapping spectra in rat inferior colliculus: Comparisons between temporal-fine structure and envelope. Hear Res 2017; 353:87-96. [PMID: 28655419 DOI: 10.1016/j.heares.2017.06.005] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/19/2017] [Revised: 05/21/2017] [Accepted: 06/12/2017] [Indexed: 11/24/2022]
Abstract
Perceptual segregation of multiple sounds, which overlap in both time and spectra, into individual auditory streams is critical for hearing in natural environments. Some cues such as interaural time disparities (ITDs) play an important role in the segregation, especially when sounds are separated in space. In this study, we investigated the neural representation of two uncorrelated narrowband noises that shared the identical spectrum in the rat inferior colliculus (IC) using frequency-following-response (FFR) recordings, when the ITD for each noise stimulus was manipulated. The results of this study showed that recorded FFRs exhibited two distinctive components: the fast-varying temporal fine structure (TFS) component (FFRTFS) and the slow-varying envelope component (FFRENV). When a single narrowband noise was presented alone, the FFRTFS, but not the FFRENV, was sensitive to ITDs. When two narrowband noises were presented simultaneously, the FFRTFS took advantage of the ITD disparity that was associated with perceived spatial separation between the two concurrent sounds, and displayed a better linear synchronization to the sound with an ipsilateral-leading ITD. However, no effects of ITDs were found on the FFRENV. These results suggest that the FFRTFS and FFRENV represent two distinct types of signal processing in the auditory brainstem and contribute differentially to sound segregation based on spatial cues: the FFRTFS is more critical to spatial release from masking.
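The abstract does not spell out how FFRENV and FFRTFS were derived; a common convention in the FFR literature is to average and subtract responses to opposite stimulus polarities. A minimal sketch under that assumption (array names are illustrative):

```python
import numpy as np

def decompose_ffr(resp_pos, resp_neg):
    """Split an averaged FFR into envelope- and fine-structure-dominated parts
    from responses to the original (resp_pos) and polarity-inverted (resp_neg)
    stimulus. Adding emphasizes envelope following (FFR_ENV); subtracting
    emphasizes temporal fine structure (FFR_TFS).
    """
    ffr_env = (np.asarray(resp_pos) + np.asarray(resp_neg)) / 2.0
    ffr_tfs = (np.asarray(resp_pos) - np.asarray(resp_neg)) / 2.0
    return ffr_env, ffr_tfs
```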
Collapse
|
30
|
Skoe E, Burakiewicz E, Figueiredo M, Hardin M. Basic neural processing of sound in adults is influenced by bilingual experience. Neuroscience 2017; 349:278-290. [DOI: 10.1016/j.neuroscience.2017.02.049] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2016] [Revised: 02/18/2017] [Accepted: 02/21/2017] [Indexed: 11/30/2022]
|
31
|
Slugocki C, Bosnyak D, Trainor LJ. Simultaneously-evoked auditory potentials (SEAP): A new method for concurrent measurement of cortical and subcortical auditory-evoked activity. Hear Res 2017; 345:30-42. [DOI: 10.1016/j.heares.2016.12.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Revised: 12/07/2016] [Accepted: 12/16/2016] [Indexed: 10/20/2022]
|
32
|
Xie Z, Reetzke R, Chandrasekaran B. Stability and plasticity in neural encoding of linguistically relevant pitch patterns. J Neurophysiol 2017; 117:1407-1422. [PMID: 28077662 DOI: 10.1152/jn.00445.2016] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2016] [Revised: 01/09/2017] [Accepted: 01/09/2017] [Indexed: 12/15/2022] Open
Abstract
While lifelong language experience modulates subcortical encoding of pitch patterns, there is emerging evidence that short-term training introduced in adulthood also shapes subcortical pitch encoding. Here we use a cross-language design to examine the stability of language experience-dependent subcortical plasticity over multiple days. We then examine the extent to which behavioral relevance induced by sound-to-category training leads to plastic changes in subcortical pitch encoding in adulthood relative to adolescence, a period of ongoing maturation of subcortical and cortical auditory processing. Frequency-following responses (FFRs), which reflect phase-locked activity from subcortical neural ensembles, were elicited while participants passively listened to pitch patterns reflective of Mandarin tones. In experiment 1, FFRs were recorded across three consecutive days from native Chinese-speaking (n = 10) and English-speaking (n = 10) adults. In experiment 2, FFRs were recorded from native English-speaking adolescents (n = 20) and adults (n = 15) before, during, and immediately after a session of sound-to-category training, as well as a day after training ceased. Experiment 1 demonstrated the stability of language experience-dependent subcortical plasticity in pitch encoding across multiple days of passive exposure to linguistic pitch patterns. In contrast, experiment 2 revealed an enhancement in subcortical pitch encoding that emerged a day after the sound-to-category training, with some developmental differences observed. Taken together, these findings suggest that behavioral relevance is a critical component for the observation of plasticity in the subcortical encoding of pitch.NEW & NOTEWORTHY We examine the timescale of experience-dependent auditory plasticity to linguistically relevant pitch patterns. We find extreme stability in lifelong experience-dependent plasticity. We further demonstrate that subcortical function in adolescents and adults is modulated by a single session of sound-to-category training. Our results suggest that behavioral relevance is a necessary ingredient for neural changes in pitch encoding to be observed throughout human development. These findings contribute to the neurophysiological understanding of long- and short-term experience-dependent modulation of pitch.
Collapse
Affiliation(s)
- Zilong Xie
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas
| | - Rachel Reetzke
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas
| | - Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, Texas; Department of Psychology, The University of Texas at Austin, Austin, Texas; Department of Linguistics, The University of Texas at Austin, Austin, Texas; Institute for Neuroscience, The University of Texas at Austin, Austin, Texas; and Institute for Mental Health Research, The University of Texas at Austin, Austin, Texas
| |
Collapse
|
33
|
The Role of the Auditory Brainstem in Regularity Encoding and Deviance Detection. THE FREQUENCY-FOLLOWING RESPONSE 2017. [DOI: 10.1007/978-3-319-47944-6_5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
|
34
|
Skoe E, Brody L, Theodore RM. Reading ability reflects individual differences in auditory brainstem function, even into adulthood. BRAIN AND LANGUAGE 2017; 164:25-31. [PMID: 27694016 DOI: 10.1016/j.bandl.2016.09.003] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/31/2016] [Revised: 08/22/2016] [Accepted: 09/03/2016] [Indexed: 06/06/2023]
Abstract
Research with developmental populations suggests that the maturational state of auditory brainstem encoding is linked to reading ability. Specifically, children with poor reading skills resemble biologically younger children with respect to their auditory brainstem responses (ABRs) to speech stimulation. Because ABR development continues into adolescence, it is possible that the link between ABRs and reading ability changes or resolves as the brainstem matures. To examine these possibilities, ABRs were recorded at varying presentation rates in adults with diverse, yet unimpaired reading levels. We found that reading ability in adulthood related to ABR Wave V latency, with more juvenile response morphology linked to less proficient reading ability, as has been observed for children. These data add to the evidence indicating that auditory brainstem responses serve as an index of the sound-based skills that underlie reading, even into adulthood.
Collapse
Affiliation(s)
- Erika Skoe
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, 850 Bolton Road, Unit 1085, Storrs, CT 06269, United States; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 337 Mansfield Road, Unit 1272, Storrs, CT 06269, United States.
| | - Lisa Brody
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, 850 Bolton Road, Unit 1085, Storrs, CT 06269, United States.
| | - Rachel M Theodore
- Department of Speech, Language, and Hearing Sciences, University of Connecticut, 850 Bolton Road, Unit 1085, Storrs, CT 06269, United States; Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 337 Mansfield Road, Unit 1272, Storrs, CT 06269, United States.
| |
Collapse
|
35
|
Musicians' edge: A comparison of auditory processing, cognitive abilities and statistical learning. Hear Res 2016; 342:112-123. [DOI: 10.1016/j.heares.2016.10.008] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Revised: 10/11/2016] [Accepted: 10/15/2016] [Indexed: 11/19/2022]
|
36
|
Lau JCY, Wong PCM, Chandrasekaran B. Context-dependent plasticity in the subcortical encoding of linguistic pitch patterns. J Neurophysiol 2016; 117:594-603. [PMID: 27832606 DOI: 10.1152/jn.00656.2016] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2016] [Accepted: 11/07/2016] [Indexed: 01/08/2023] Open
Abstract
We examined the mechanics of online experience-dependent auditory plasticity by assessing the influence of prior context on frequency-following responses (FFRs), which reflect phase-locked responses from neural ensembles within the subcortical auditory system. FFRs were elicited by a Cantonese falling lexical pitch pattern from 24 native speakers of Cantonese in a variable context, wherein the falling pitch pattern randomly occurred in the context of two other linguistic pitch patterns; in a patterned context, wherein the falling pitch pattern was presented in a predictable sequence along with two other pitch patterns; and in a repetitive context, wherein the falling pitch pattern was presented with 100% probability. We found that neural tracking of the stimulus pitch contour was most faithful and accurate when the listening context was patterned and least faithful when the listening context was variable. The patterned context elicited more robust pitch tracking relative to the repetitive context, suggesting that context-dependent plasticity is most robust when the context is predictable but not repetitive. Our study demonstrates a robust influence of prior listening context that works to enhance online neural encoding of linguistic pitch patterns. We interpret these results as indicative of an interplay between contextual processes that are responsive to predictability as well as novelty in the presentation context. NEW & NOTEWORTHY Human auditory perception in dynamic listening environments requires fine-tuning of the sensory signal based on behaviorally relevant regularities in the listening context, i.e., online experience-dependent plasticity. Our findings suggest that online experience-dependent plasticity is partly underpinned by interplaying contextual processes in the subcortical auditory system that are responsive to predictability as well as novelty in the listening context. These findings add to the literature that seeks to establish the neurophysiological bases of auditory system plasticity, a central issue in auditory neuroscience.
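For concreteness, the three listening contexts can be mocked up as token streams; the sketch below is illustrative only, with placeholder labels standing in for the Cantonese pitch patterns and without the timing details of the actual experiment.

```python
import random

def build_context(kind, n_trials=300, seed=0):
    """Illustrative token streams for the three listening contexts. 'falling'
    stands for the target Cantonese falling pitch pattern; 'other1'/'other2'
    stand in for the two additional pitch patterns.
    """
    rng = random.Random(seed)
    tokens = ["falling", "other1", "other2"]
    if kind == "repetitive":                 # target presented with 100% probability
        return ["falling"] * n_trials
    if kind == "patterned":                  # fixed, fully predictable sequence
        return [tokens[i % 3] for i in range(n_trials)]
    if kind == "variable":                   # random order of the three patterns
        return [rng.choice(tokens) for _ in range(n_trials)]
    raise ValueError(f"unknown context: {kind}")

for kind in ("repetitive", "patterned", "variable"):
    print(kind, build_context(kind)[:6])
```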
Collapse
Affiliation(s)
- Joseph C Y Lau
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, Hong Kong
| | - Patrick C M Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, Hong Kong; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, Hong Kong
| | - Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, Moody College of Communication, The University of Texas at Austin, Austin, Texas; Department of Psychology, College of Liberal Arts, The University of Texas at Austin, Austin, Texas; Department of Linguistics, College of Liberal Arts, The University of Texas at Austin, Austin, Texas; Institute of Mental Health Research, College of Liberal Arts, The University of Texas at Austin, Austin, Texas; and Institute for Neuroscience, The University of Texas at Austin, Austin, Texas
| |
Collapse
|
37
|
Longenecker RJ, Alghamdi F, Rosen MJ, Galazyuk AV. Prepulse inhibition of the acoustic startle reflex vs. auditory brainstem response for hearing assessment. Hear Res 2016; 339:80-93. [PMID: 27349914 DOI: 10.1016/j.heares.2016.06.006] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 02/17/2016] [Revised: 05/18/2016] [Accepted: 06/13/2016] [Indexed: 02/08/2023]
Abstract
The high prevalence of noise-induced and age-related hearing loss in the general population has warranted the use of animal models to study the etiology of these pathologies. Quick and accurate auditory threshold determination is a prerequisite for experimental manipulations targeting hearing loss in animal models. The standard auditory brainstem response (ABR) measurement is fairly quick and translational across species, but is limited by the need for anesthesia and a lack of perceptual assessment. The goal of this study was to develop a new method of hearing assessment utilizing prepulse inhibition (PPI) of the acoustic startle reflex, a commonly used tool that measures detection thresholds in awake animals, and can be performed on multiple animals simultaneously. We found that in control mice PPI audiometric functions are similar to both ABR and traditional operant conditioning audiograms. The hearing thresholds assessed with PPI audiometry in sound exposed mice were also similar to those detected by ABR thresholds one day after exposure. However, three months after exposure PPI threshold shifts were still evident at and near the frequency of exposure whereas ABR thresholds recovered to the pre-exposed level. In contrast, PPI audiometry and ABR wave one amplitudes detected similar losses. PPI audiometry provides a high throughput automated behavioral screening tool of hearing in awake animals. Overall, PPI audiometry and ABR assessments of the auditory system are robust techniques with distinct advantages and limitations, which when combined, can provide ample information about the functionality of the auditory system.
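The behavioral quantity underlying PPI audiometry is the percent reduction of the startle response when a prepulse precedes the startle stimulus; detection of the prepulse is inferred from that reduction. A minimal sketch of the standard metric follows (the trial structure and any thresholding criteria here are assumptions, not taken from the study):

```python
import numpy as np

def ppi_percent(startle_alone, startle_with_prepulse):
    """Percent prepulse inhibition: how much the startle amplitude shrinks when
    a prepulse precedes the startle stimulus. Larger values imply the prepulse
    was detected; plotting PPI against prepulse level and frequency yields an
    audiometric-style function.
    """
    return 100.0 * (1.0 - np.mean(startle_with_prepulse) / np.mean(startle_alone))

# Example with hypothetical startle amplitudes (arbitrary units):
alone = np.array([1.00, 0.95, 1.10, 1.05])
with_prepulse = np.array([0.55, 0.60, 0.50, 0.58])
print(f"PPI = {ppi_percent(alone, with_prepulse):.1f}%")
```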
Collapse
Affiliation(s)
- R J Longenecker
- Northeast Ohio Medical University, Department of Anatomy and Neurobiology, Rootstown, OH, USA.
| | - F Alghamdi
- Northeast Ohio Medical University, Department of Anatomy and Neurobiology, Rootstown, OH, USA
| | - M J Rosen
- Northeast Ohio Medical University, Department of Anatomy and Neurobiology, Rootstown, OH, USA
| | - A V Galazyuk
- Northeast Ohio Medical University, Department of Anatomy and Neurobiology, Rootstown, OH, USA
| |
Collapse
|
38
|
Rodríguez-Aranda C, Waterloo K, Johnsen SH, Eldevik P, Sparr S, Wikran GC, Herder M, Vangberg TR. Neuroanatomical correlates of verbal fluency in early Alzheimer's disease and normal aging. BRAIN AND LANGUAGE 2016; 155-156:24-35. [PMID: 27062691 DOI: 10.1016/j.bandl.2016.03.001] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/04/2015] [Revised: 01/22/2016] [Accepted: 03/12/2016] [Indexed: 06/05/2023]
Abstract
Verbal fluency (VF) impairments occur early in Alzheimer's disease (AD) and, to a lesser extent, also in normal aging. However, the neural underpinnings of these impairments are not fully understood. The present study evaluated whether VF impairments in early AD and normal aging rely upon common or different neuroanatomical correlates. We examined the association between VF performance and brain structure in 18 mild AD patients and 24 healthy elderly adults. Linear regressions were performed between accuracy and time intervals in VF scores and structural measurements of cerebral gray matter (GM) and white matter (WM) using MRI. Results showed that semantic VF correlated exclusively with GM in the cerebellum and left temporal fusiform cortex, and with WM in the uncinate fasciculus, inferior fronto-occipital fasciculus and corpus callosum. Phonemic VF showed unique associations between intervals and WM in left-hemisphere tracts. The association between GM in the hippocampus and subcortical structures and semantic accuracy differentiated patients from controls. Results showed that VF impairments are primarily associated with the same structural brain changes in AD as in healthy elderly adults, but at exaggerated levels. However, specific VF deficiencies and their underlying neural correlates exist, and these clearly differentiate the initial stages of AD.
Collapse
Affiliation(s)
| | - Knut Waterloo
- Department of Psychology, UiT The Arctic University of Norway, Tromsø, Norway; Department of Neurology, University Hospital North Norway, Tromsø, Norway
| | - Stein Harald Johnsen
- Department of Neurology, University Hospital North Norway, Tromsø, Norway; Brain and Circulation Research Group, Department of Clinical Medicine, UiT The Arctic University of Norway, Tromsø, Norway
| | - Petter Eldevik
- Department of Radiology, University Hospital North Norway, Tromsø, Norway
| | - Sigurd Sparr
- Department of Geriatrics, University Hospital North Norway, Tromsø, Norway
| | - Gry C Wikran
- Department of Radiology, University Hospital North Norway, Tromsø, Norway
| | - Marit Herder
- Department of Radiology, University Hospital North Norway, Tromsø, Norway
| | - Torgil Riise Vangberg
- Department of Radiology, University Hospital North Norway, Tromsø, Norway; Medical Imaging Research Group, Department of Clinical Medicine, UiT The Arctic University of Norway, Tromsø, Norway
| |
Collapse
|
39
|
Coffey EBJ, Colagrosso EMG, Lehmann A, Schönwiesner M, Zatorre RJ. Individual Differences in the Frequency-Following Response: Relation to Pitch Perception. PLoS One 2016; 11:e0152374. [PMID: 27015271 PMCID: PMC4807774 DOI: 10.1371/journal.pone.0152374] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2015] [Accepted: 03/14/2016] [Indexed: 11/30/2022] Open
Abstract
The scalp-recorded frequency-following response (FFR) is a measure of the auditory nervous system’s representation of periodic sound, and may serve as a marker of training-related enhancements, behavioural deficits, and clinical conditions. However, FFRs of healthy normal subjects show considerable variability that remains unexplained. We investigated whether the FFR representation of the frequency content of a complex tone is related to the perception of the pitch of the fundamental frequency. The strength of the fundamental frequency in the FFR of 39 people with normal hearing was assessed when they listened to complex tones that either included or lacked energy at the fundamental frequency. We found that the strength of the fundamental representation of the missing fundamental tone complex correlated significantly with people's general tendency to perceive the pitch of the tone as either matching the frequency of the spectral components that were present, or that of the missing fundamental. Although at a group level the fundamental representation in the FFR did not appear to be affected by the presence or absence of energy at the same frequency in the stimulus, the two conditions were statistically distinguishable for some subjects individually, indicating that the neural representation is not linearly dependent on the stimulus content. In a second experiment using a within-subjects paradigm, we showed that subjects can learn to reversibly select between either fundamental or spectral perception, and that this is accompanied both by changes to the fundamental representation in the FFR and to cortical-based gamma activity. These results suggest that both fundamental and spectral representations coexist, and are available for later auditory processing stages, the requirements of which may also influence their relative strength and thus modulate FFR variability. The data also highlight voluntary mode perception as a new paradigm with which to study top-down vs bottom-up mechanisms that support the emerging view of the FFR as the outcome of integrated processing in the entire auditory system.
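The key FFR measure discussed here is the strength of the response at the fundamental frequency. One minimal way to quantify it from an averaged FFR waveform is the spectral magnitude in a narrow band around F0, sketched below; published pipelines typically add windowing and noise-floor normalization, so treat this as illustrative.

```python
import numpy as np

def f0_strength(ffr, fs, f0, half_bw=5.0):
    """Mean magnitude of the FFR spectrum within +/- half_bw Hz of the
    fundamental frequency f0 (fs = sampling rate in Hz).
    """
    freqs = np.fft.rfftfreq(len(ffr), d=1.0 / fs)
    spectrum = np.abs(np.fft.rfft(ffr)) / len(ffr)
    band = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
    return spectrum[band].mean()

# Synthetic check: a 100-Hz component in noise vs. noise alone.
fs, f0 = 16000, 100.0
t = np.arange(int(fs * 0.2)) / fs
rng = np.random.default_rng(1)
ffr = 0.5 * np.sin(2 * np.pi * f0 * t) + 0.2 * rng.standard_normal(t.size)
noise = 0.2 * rng.standard_normal(t.size)
print(f0_strength(ffr, fs, f0), f0_strength(noise, fs, f0))
```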
Collapse
Affiliation(s)
- Emily B. J. Coffey
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
| | | | - Alexandre Lehmann
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
- Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, Canada
| | - Marc Schönwiesner
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
- Department of Psychology, University of Montreal, Montreal, Canada
| | - Robert J. Zatorre
- Montreal Neurological Institute, McGill University, Montreal, Canada
- Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, Canada
- Centre for Research on Brain, Language and Music (CRBLM), Montreal, Canada
| |
Collapse
|
40
|
Cortical contributions to the auditory frequency-following response revealed by MEG. Nat Commun 2016; 7:11070. [PMID: 27009409 PMCID: PMC4820836 DOI: 10.1038/ncomms11070] [Citation(s) in RCA: 248] [Impact Index Per Article: 31.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2015] [Accepted: 02/17/2016] [Indexed: 11/09/2022] Open
Abstract
The auditory frequency-following response (FFR) to complex periodic sounds is used to study the subcortical auditory system, and has been proposed as a biomarker for disorders that feature abnormal sound processing. Despite its value in fundamental and clinical research, the neural origins of the FFR are unclear. Using magnetoencephalography, we observe a strong, right-asymmetric contribution to the FFR from the human auditory cortex at the fundamental frequency of the stimulus, in addition to signal from cochlear nucleus, inferior colliculus and medial geniculate. This finding is highly relevant for our understanding of plasticity and pathology in the auditory system, as well as higher-level cognition such as speech and music processing. It suggests that previous interpretations of the FFR may need re-examination using methods that allow for source separation. Auditory brainstem response (ABR) is used to study temporal encoding of auditory information in music and language. This study utilizes magnetoencephalography to localize both cortical and subcortical origins of the sustained frequency following response (FFR), the ABR component that encodes the periodicity of sound.
Collapse
|
41
|
Abstract
Every day we communicate using complex linguistic and musical systems, yet these modern systems are the product of a much more ancient relationship with sound. When we speak, we communicate not only with the words we choose, but also with the patterns of sound we create and the movements that create them. From the natural rhythms of speech, to the precise timing characteristics of a consonant, these patterns guide our daily communication. By examining the principles of information processing that are common to speech and music, we peel back the layers to reveal the biological foundations of human communication through sound. Further, we consider how the brain's response to sound is shaped by experience, such as musical expertise, and implications for the treatment of communication disorders.
Collapse
Affiliation(s)
- Nina Kraus
- Auditory Neuroscience Laboratory, Departments of Communication Sciences, Neurobiology and Physiology, and Otolaryngology, Northwestern University, Evanston, Illinois 60208
| | - Jessica Slater
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
| |
Collapse
|
42
|
Auditory Processing Disorder: Biological Basis and Treatment Efficacy. TRANSLATIONAL RESEARCH IN AUDIOLOGY, NEUROTOLOGY, AND THE HEARING SCIENCES 2016. [DOI: 10.1007/978-3-319-40848-4_3] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
|
43
|
Skoe E, Krizman J, Spitzer E, Kraus N. Prior experience biases subcortical sensitivity to sound patterns. J Cogn Neurosci 2015; 27:124-40. [PMID: 25061926 DOI: 10.1162/jocn_a_00691] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023]
Abstract
To make sense of our ever-changing world, our brains search out patterns. This drive can be so strong that the brain imposes patterns when there are none. The opposite can also occur: The brain can overlook patterns because they do not conform to expectations. In this study, we examined this neural sensitivity to patterns within the auditory brainstem, an evolutionarily ancient part of the brain that can be fine-tuned by experience and is integral to an array of cognitive functions. We have recently shown that this auditory hub is sensitive to patterns embedded within a novel sound stream, and we established a link between neural sensitivity and behavioral indices of learning [Skoe, E., Krizman, J., Spitzer, E., & Kraus, N. The auditory brainstem is a barometer of rapid auditory learning. Neuroscience, 243, 104-114, 2013]. We now ask whether this sensitivity to stimulus statistics is biased by prior experience and the expectations arising from this experience. To address this question, we recorded complex auditory brainstem responses (cABRs) to two patterned sound sequences formed from a set of eight repeating tones. For both patterned sequences, the eight tones were presented such that the transitional probability (TP) between neighboring tones was either 33% (low predictability) or 100% (high predictability). Although both sequences were novel to the healthy young adult listener and had similar TP distributions, one was perceived to be more musical than the other. For the more musical sequence, participants performed above chance when tested on their recognition of the most predictable two-tone combinations within the sequence (TP of 100%); in this case, the cABR differed from a baseline condition where the sound sequence had no predictable structure. In contrast, for the less musical sequence, learning was at chance, suggesting that listeners were "deaf" to the highly predictable repeating two-tone combinations in the sequence. For this condition, the cABR also did not differ from baseline. From this, we posit that the brainstem acts as a Bayesian sound processor, such that it factors in prior knowledge about the environment to index the probability of particular events within ever-changing sensory conditions.
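The sequence statistics at issue are transitional probabilities between neighboring tones of 100% versus about 33%. The sketch below generates streams of two-tone combinations with those internal TPs; the tone labels and doublet structure are illustrative simplifications, not the study's actual stimuli.

```python
import random

def doublet_stream(tp, n_doublets=200, seed=0):
    """Build a stream of two-tone combinations drawn from eight tone labels so
    that the transition from the first to the second tone of each combination
    has the requested probability (1.0 or roughly 0.33).
    """
    rng = random.Random(seed)
    firsts = ["A", "B", "C", "D"]
    if tp == 1.0:       # each first tone has exactly one partner -> TP = 100%
        partners = {"A": ["E"], "B": ["F"], "C": ["G"], "D": ["H"]}
    else:               # three equiprobable partners -> TP ~ 33%
        partners = {"A": ["E", "F", "G"], "B": ["F", "G", "H"],
                    "C": ["G", "H", "E"], "D": ["H", "E", "F"]}
    stream = []
    for _ in range(n_doublets):
        first = rng.choice(firsts)
        stream += [first, rng.choice(partners[first])]
    return stream

print(doublet_stream(1.0)[:8])   # e.g. every "A" is always followed by "E"
print(doublet_stream(0.33)[:8])  # each first tone has three possible successors
```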
Collapse
|
44
|
Malmierca MS, Anderson LA, Antunes FM. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding. Front Syst Neurosci 2015; 9:19. [PMID: 25805974 PMCID: PMC4353371 DOI: 10.3389/fnsys.2015.00019] [Citation(s) in RCA: 81] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2014] [Accepted: 02/03/2015] [Indexed: 02/02/2023] Open
Abstract
To follow an ever-changing auditory scene, the auditory brain is continuously creating a representation of the past to form expectations about the future. Unexpected events will produce an error in the predictions that should “trigger” the network’s response. Indeed, neurons in the auditory midbrain, thalamus and cortex respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus-specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high-order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state-of-the-art studies which demonstrate that SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex.
Collapse
Affiliation(s)
- Manuel S Malmierca
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCyL), University of Salamanca, Salamanca, Spain; Faculty of Medicine, Department of Cell Biology and Pathology, University of Salamanca, Salamanca, Spain
| | - Lucy A Anderson
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCyL), University of Salamanca, Salamanca, Spain
| | - Flora M Antunes
- Auditory Neuroscience Laboratory, Institute of Neuroscience of Castilla y León (INCyL), University of Salamanca, Salamanca, Spain
| |
Collapse
|
45
|
Kraus N, Slater J. Music and language. THE HUMAN AUDITORY SYSTEM - FUNDAMENTAL ORGANIZATION AND CLINICAL DISORDERS 2015; 129:207-22. [DOI: 10.1016/b978-0-444-62630-1.00012-3] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/10/2023]
|
46
|
Kraus N, Slater J, Thompson EC, Hornickel J, Strait DL, Nicol T, White-Schwoch T. Auditory learning through active engagement with sound: biological impact of community music lessons in at-risk children. Front Neurosci 2014; 8:351. [PMID: 25414631 PMCID: PMC4220673 DOI: 10.3389/fnins.2014.00351] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2014] [Accepted: 10/14/2014] [Indexed: 01/22/2023] Open
Abstract
The young nervous system is primed for sensory learning, facilitating the acquisition of language and communication skills. Social and linguistic impoverishment can limit these learning opportunities, eventually leading to language-related challenges such as poor reading. Music training offers a promising auditory learning strategy by directing attention to meaningful acoustic elements of the soundscape. In light of evidence that music training improves auditory skills and their neural substrates, there are increasing efforts to enact community-based programs to provide music instruction to at-risk children. Harmony Project is a community foundation that has provided free music instruction to over 1000 children from Los Angeles gang-reduction zones over the past decade. We conducted an independent evaluation of biological effects of participating in Harmony Project by following a cohort of children for 1 year. Here we focus on a comparison between students who actively engaged with sound through instrumental music training vs. students who took music appreciation classes. All children began with an introductory music appreciation class, but midway through the year half of the children transitioned to the instrumental training. After the year of training, the children who actively engaged with sound through instrumental music training had faster and more robust neural processing of speech than the children who stayed in the music appreciation class, observed in neural responses to a speech sound /d/. The neurophysiological measures found to be enhanced in the instrumentally-trained children have been previously linked to reading ability, suggesting a gain in neural processes important for literacy stemming from active auditory learning. Despite intrinsic constraints on our study imposed by a community setting, these findings speak to the potential of active engagement with sound (i.e., music-making) to engender experience-dependent neuroplasticity and may inform the development of strategies for auditory learning.
Collapse
Affiliation(s)
- Nina Kraus
- Auditory Neuroscience Laboratory, www.brainvolts.northwestern.edu, Northwestern University, Evanston, IL, USA
- Department of Communication Sciences, Northwestern University, Evanston, IL, USA
- Neuroscience Program, Northwestern University, Evanston, IL, USA
- Department of Neurobiology and Physiology, Northwestern University, Evanston, IL, USA
- Department of Otolaryngology, Northwestern University, Chicago, IL, USA
| | - Jessica Slater
- Auditory Neuroscience Laboratory, www.brainvolts.northwestern.edu, Northwestern University, Evanston, IL, USA
- Department of Communication Sciences, Northwestern University, Evanston, IL, USA
| | - Elaine C. Thompson
- Auditory Neuroscience Laboratory, www.brainvolts.northwestern.edu, Northwestern University, Evanston, IL, USA
- Department of Communication Sciences, Northwestern University, Evanston, IL, USA
| | - Jane Hornickel
- Auditory Neuroscience Laboratory, www.brainvolts.northwestern.edu, Northwestern University, Evanston, IL, USA
- Data Sense LLC, Chicago, IL, USA
| | - Dana L. Strait
- Auditory Neuroscience Laboratory, www.brainvolts.northwestern.edu, Northwestern University, Evanston, IL, USA
- Neuroscience Program, Northwestern University, Evanston, IL, USA
| | - Trent Nicol
- Auditory Neuroscience Laboratory, www.brainvolts.northwestern.edu, Northwestern University, Evanston, IL, USA
- Department of Communication Sciences, Northwestern University, Evanston, IL, USA
| | - Travis White-Schwoch
- Auditory Neuroscience Laboratory, www.brainvolts.northwestern.edu, Northwestern University, Evanston, IL, USA
- Department of Communication Sciences, Northwestern University, Evanston, IL, USA
| |
Collapse
|
47
|
Banai K, Lavner Y. The effects of training length on the perceptual learning of time-compressed speech and its generalization. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:1908-1917. [PMID: 25324090 DOI: 10.1121/1.4895684] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Brief exposure to time-compressed speech yields both learning and generalization. Whether such learning continues over the course of multi-session training, and if so, whether it is more or less specific than exposure-induced learning, is not clear, because the outcomes of intensive practice with time-compressed speech have rarely been reported. The goal here was to determine whether prolonged training on time-compressed speech yields additional learning and generalization beyond that induced by brief exposure. Listeners practiced the semantic verification of time-compressed sentences for one or three training sessions. Identification of trained and untrained tokens was subsequently compared between listeners who trained for one or three sessions, listeners who were briefly exposed to 20 time-compressed sentences, and naive listeners. Trained listeners outperformed the other groups of listeners on the trained condition, but only the group that trained for three sessions outperformed the other groups when tested with untrained tokens. These findings suggest that although learning of distorted speech can occur rapidly, more stable learning and generalization might be achieved with longer, multi-session practice. It is suggested that these findings are consistent with the framework proposed by the Reverse Hierarchy Theory of perceptual learning.
Collapse
Affiliation(s)
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
| | - Yizhar Lavner
- Department of Computer Science, Tel-Hai College, Tel-Hai, Israel
| |
Collapse
|
48
|
Duque D, Malmierca MS. Stimulus-specific adaptation in the inferior colliculus of the mouse: anesthesia and spontaneous activity effects. Brain Struct Funct 2014; 220:3385-98. [PMID: 25115620 DOI: 10.1007/s00429-014-0862-1] [Citation(s) in RCA: 46] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2014] [Accepted: 07/29/2014] [Indexed: 12/19/2022]
Abstract
Rapid behavioral responses to unexpected events in the acoustic environment are critical for survival. Stimulus-specific adaptation (SSA) is the process whereby some auditory neurons respond better to rare stimuli than to repetitive stimuli. Most experiments on SSA have been performed under anesthesia, and it is unknown if SSA sensitivity is altered by the anesthetic agent. Only a direct comparison can answer this question. Here, we recorded extracellular single units in the inferior colliculus of awake and anesthetized mice under an oddball paradigm that elicits SSA. Our results demonstrate that SSA is similar, but not identical, in the awake and anesthetized preparations. The differences are mostly due to the higher spontaneous activity observed in the awake animals, which also revealed a high incidence of inhibitory receptive fields. We conclude that SSA is not an artifact of anesthesia and that spontaneous activity modulates neuronal SSA differentially, depending on the state of arousal. Our results suggest that SSA may be especially important when nervous system activity is suppressed during sleep-like states. This may be a useful survival mechanism that allows the organism to respond to danger when sleeping.
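SSA in an oddball paradigm is conventionally summarized with an index that contrasts responses to the same tone when it is rare versus common. A minimal sketch of that widely used index follows (the paper's exact metrics may differ):

```python
def ssa_index(deviant_rate, standard_rate):
    """Contrast of responses to the same tone when rare (deviant) vs. common
    (standard); ranges from -1 to 1, with positive values indicating SSA.
    """
    return (deviant_rate - standard_rate) / (deviant_rate + standard_rate)

# Example: 9 spikes/s as deviant vs. 3 spikes/s as standard -> index of 0.5.
print(ssa_index(9.0, 3.0))
```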
Collapse
Affiliation(s)
- Daniel Duque
- Auditory Neurophysiology Unit, Laboratory for the Neurobiology of Hearing, Institute of Neuroscience of Castilla Y León, University of Salamanca, C/Pintor Fernando Gallego, 1, 37007, Salamanca, Spain
| | - Manuel S Malmierca
- Auditory Neurophysiology Unit, Laboratory for the Neurobiology of Hearing, Institute of Neuroscience of Castilla Y León, University of Salamanca, C/Pintor Fernando Gallego, 1, 37007, Salamanca, Spain.
- Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Campus Miguel de Unamuno, 37007, Salamanca, Spain.
| |
Collapse
|
49
|
François C, Jaillet F, Takerkart S, Schön D. Faster sound stream segmentation in musicians than in nonmusicians. PLoS One 2014; 9:e101340. [PMID: 25014068 PMCID: PMC4094420 DOI: 10.1371/journal.pone.0101340] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2013] [Accepted: 06/05/2014] [Indexed: 12/24/2022] Open
Abstract
The musician's brain is considered a good model of brain plasticity, as musical training is known to modify auditory perception and related cortical organization. Here, we show that music-related modifications can also extend beyond motor and auditory processing and generalize (transfer) to speech processing. Previous studies have shown that adults and newborns can segment a continuous stream of linguistic and non-linguistic stimuli based only on the probabilities of occurrence between adjacent syllables, tones or timbres. The paradigm classically used in these studies consists of a passive exposure phase followed by a testing phase. By using both behavioural and electrophysiological measures, we recently showed that adult musicians and musically trained children outperform nonmusicians in the test following brief exposure to an artificial sung language. However, the behavioural test does not allow for studying the learning process per se but rather the result of the learning. In the present study, we analyze the electrophysiological learning curves, that is, the ongoing brain dynamics recorded as the learning is taking place. While musicians show an inverted U-shaped learning curve, nonmusicians show a linear learning curve. Analyses of event-related potentials (ERPs) allow for a greater understanding of how and when musical training can improve speech segmentation. These results provide evidence of enhanced neural sensitivity to statistical regularities in musicians and support the hypothesis of a positive transfer of training effect from music to sound stream segmentation in general.
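Segmentation in these paradigms rests on transitional probabilities between adjacent items: high within "words" of the artificial language, lower at word boundaries. A minimal sketch of estimating those probabilities from an exposure stream (the syllables below are placeholders, not the study's sung language):

```python
from collections import Counter

def transitional_probabilities(stream):
    """Estimate TP(b | a) = count(a followed by b) / count(a) for adjacent
    items in an exposure stream.
    """
    pair_counts = Counter(zip(stream[:-1], stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Within-"word" transitions are deterministic; between-"word" ones are not.
stream = ["tu", "pi", "ro", "go", "la", "bu", "tu", "pi", "ro", "bi", "da",
          "ku", "go", "la", "bu", "tu", "pi", "ro"]
tps = transitional_probabilities(stream)
print(tps[("tu", "pi")], tps[("ro", "go")])   # 1.0 vs. 0.5 in this toy stream
```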
Collapse
Affiliation(s)
- Clément François
- Cognition and Brain Plasticity Unit, Institute of Biomedicine Research of Bellvitge, Barcelona, Spain
- Department of Basic Psychology, University of Barcelona, Barcelona, Spain
| | - Florent Jaillet
- Institut de Neurosciences de la Timone, Unité Mixte de Recherche 7289, Aix-Marseille Université, Centre National de la Recherche Scientifique, Marseille, France
| | - Sylvain Takerkart
- Institut de Neurosciences de la Timone, Unité Mixte de Recherche 7289, Aix-Marseille Université, Centre National de la Recherche Scientifique, Marseille, France
| | - Daniele Schön
- Institut de Neurosciences des Systèmes Unité 1106, Aix-Marseille Université, Institut National de la Santé Et de la Recherche Médicale, Marseille, France
| |
Collapse
|
50
|
Malmierca MS, Sanchez-Vives MV, Escera C, Bendixen A. Neuronal adaptation, novelty detection and regularity encoding in audition. Front Syst Neurosci 2014; 8:111. [PMID: 25009474 PMCID: PMC4068197 DOI: 10.3389/fnsys.2014.00111] [Citation(s) in RCA: 70] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2014] [Accepted: 05/24/2014] [Indexed: 11/19/2022] Open
Abstract
The ability to detect unexpected stimuli in the acoustic environment and determine their behavioral relevance to plan an appropriate reaction is critical for survival. This perspective article brings together several viewpoints and discusses current advances in understanding the mechanisms the auditory system implements to extract relevant information from incoming inputs and to identify unexpected events. This extraordinary sensitivity relies on the capacity to codify acoustic regularities, and is based on encoding properties that are present as early as the auditory midbrain. We review state-of-the-art studies on the processing of stimulus changes using non-invasive methods to record the summed electrical potentials in humans, and those that examine single-neuron responses in animal models. Human data will be based on mismatch negativity (MMN) and enhanced middle latency responses (MLR). Animal data will be based on the activity of single neurons at the cortical and subcortical levels, relating selective responses to novel stimuli to the MMN and to stimulus-specific neural adaptation (SSA). Theoretical models of the neural mechanisms that could create SSA and novelty responses will also be discussed.
Collapse
Affiliation(s)
- Manuel S Malmierca
- Auditory Neurophysiology Unit, Laboratory for the Neurobiology of Hearing, Institute of Neuroscience of Castilla y León, University of Salamanca, Salamanca, Spain; Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Salamanca, Spain
| | - Maria V Sanchez-Vives
- Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain; Institut de Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
| | - Carles Escera
- Cognitive Neuroscience Research Group, Department of Psychiatry and Clinical Psychobiology, University of Barcelona, Barcelona, Spain; Auditory Psychophysiology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
| | - Alexandra Bendixen
- Auditory Psychophysiology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
| |
Collapse
|