1. Tichko P, Page N, Kim JC, Large EW, Loui P. Neural Entrainment to Musical Pulse in Naturalistic Music Is Preserved in Aging: Implications for Music-Based Interventions. Brain Sci 2022; 12:1676. PMID: 36552136; PMCID: PMC9775503; DOI: 10.3390/brainsci12121676.
Abstract
Neural entrainment to musical rhythm is thought to underlie the perception and production of music. In aging populations, the strength of neural entrainment to rhythm has been found to be attenuated, particularly during attentive listening to auditory streams. However, previous studies on neural entrainment to rhythm and aging have often employed artificial auditory rhythms or limited pieces of recorded, naturalistic music, failing to account for the diversity of rhythmic structures found in natural music. As part of a larger project assessing a novel music-based intervention for healthy aging, we investigated neural entrainment to musical rhythms in the electroencephalogram (EEG) while participants listened to self-selected musical recordings across a sample of younger and older adults. We specifically measured neural entrainment at the level of the musical pulse, quantified here as the phase-locking value (PLV), after normalizing the PLVs to each musical recording's detected pulse frequency. As predicted, we observed strong neural phase-locking to the musical pulse and to the sub-harmonic and harmonic levels of musical meter. Overall, PLVs were not significantly different between older and younger adults. This preserved neural entrainment to musical pulse and rhythm could support the design of music-based interventions that aim to modulate endogenous brain activity via self-selected music for healthy cognitive aging.
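The phase-locking value named in this abstract can be illustrated with a minimal sketch. This is not the authors' code; `plv_at_frequency`, its signature, and the toy parameters are assumptions, and a real analysis would first detect the pulse frequency per recording and normalize to it as the study describes.

```python
import numpy as np

def plv_at_frequency(trials, freq, fs):
    """Phase-locking value across trials/epochs at one frequency.

    trials: (n_trials, n_samples) array of EEG epochs
    freq:   target frequency in Hz (e.g., the detected musical pulse)
    fs:     sampling rate in Hz
    """
    n = trials.shape[1]
    t = np.arange(n) / fs
    # DFT coefficient at the target frequency for each trial
    coeff = trials @ np.exp(-2j * np.pi * freq * t)
    phases = np.angle(coeff)
    # PLV: magnitude of the mean unit phasor across trials;
    # 1 = perfect phase locking, ~0 = random phase
    return np.abs(np.mean(np.exp(1j * phases)))
```

Epochs whose phase at the pulse frequency is consistent across trials yield a PLV near 1; phase-scrambled epochs yield a PLV near 0.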
Affiliation(s)
- Parker Tichko: Department of Music, Northeastern University, Boston, MA 02115, USA
- Nicole Page: Department of Music, Northeastern University, Boston, MA 02115, USA
- Ji Chul Kim: Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Edward W. Large: Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Psyche Loui: Department of Music, Northeastern University, Boston, MA 02115, USA
- Correspondence:
2. Cheng FY, Xu C, Gold L, Smith S. Rapid Enhancement of Subcortical Neural Responses to Sine-Wave Speech. Front Neurosci 2022; 15:747303. PMID: 34987356; PMCID: PMC8721138; DOI: 10.3389/fnins.2021.747303.
Abstract
The efferent auditory nervous system may be a potent force in shaping how the brain responds to behaviorally significant sounds. Previous human experiments using the frequency following response (FFR) have shown efferent-induced modulation of subcortical auditory function online and over short- and long-term time scales; however, a contemporary understanding of FFR generation presents new questions about whether previous effects were constrained solely to the auditory subcortex. The present experiment used sine-wave speech (SWS), an acoustically-sparse stimulus in which dynamic pure tones represent speech formant contours, to evoke FFRSWS. Due to the higher stimulus frequencies used in SWS, this approach biased neural responses toward brainstem generators and allowed for three stimuli (/bɔ/, /bu/, and /bo/) to be used to evoke FFRSWS before and after listeners in a training group were made aware that they were hearing a degraded speech stimulus. All SWS stimuli were rapidly perceived as speech when presented with an SWS carrier phrase, and average token identification reached ceiling performance during a perceptual training phase. Compared to a control group which remained naïve throughout the experiment, training group FFRSWS amplitudes were enhanced post-training for each stimulus. Further, linear support vector machine classification of training group FFRSWS significantly improved post-training compared to the control group, indicating that training-induced neural enhancements were sufficient to bolster machine learning classification accuracy. These results suggest that the efferent auditory system may rapidly modulate auditory brainstem representation of sounds depending on their context and perception as non-speech or speech.
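The linear-SVM classification step mentioned here can be sketched generically with scikit-learn. This is an illustration under stated assumptions, not the authors' pipeline: `classify_ffr`, the per-epoch z-scoring, and the cross-validation settings are all hypothetical choices.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def classify_ffr(epochs, labels, folds=5):
    """Cross-validated linear-SVM accuracy for labeled FFR epochs.

    epochs: (n_epochs, n_samples) single-trial or sub-averaged FFR waveforms
    labels: (n_epochs,) stimulus token for each epoch
    """
    # z-score each epoch so overall amplitude differences alone
    # do not drive the classifier
    X = (epochs - epochs.mean(axis=1, keepdims=True)) / epochs.std(axis=1, keepdims=True)
    clf = LinearSVC(max_iter=10000)
    # mean accuracy over stratified cross-validation folds
    return cross_val_score(clf, X, labels, cv=folds).mean()
```

Above-chance accuracy on held-out folds indicates that the stimulus tokens are separable from the neural waveforms alone, which is the logic behind the training-versus-control comparison in the abstract.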
Affiliation(s)
- Fan-Yin Cheng: Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Can Xu: Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Lisa Gold: Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
- Spencer Smith: Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
3. Alemi R, Nozaradan S, Lehmann A. Free-Field Cortical Steady-State Evoked Potentials in Cochlear Implant Users. Brain Topogr 2021; 34:664-680. PMID: 34185222; DOI: 10.1007/s10548-021-00860-2.
Abstract
Auditory steady-state evoked potentials (SS-EPs) are phase-locked neural responses to periodic stimuli, believed to reflect specific neural generators. As an objective measure, steady-state responses have been used in different clinical settings, including measuring hearing thresholds of normal and hearing-impaired subjects. Recent studies favor recording these responses as part of the cochlear implant (CI) device-fitting procedure. Considering these potential benefits, the goals of the present study were to assess the feasibility of recording free-field SS-EPs in CI users and to compare their characteristics between CI users and controls. By taking advantage of a recently developed dual-frequency tagging method, we attempted to record subcortical and cortical SS-EPs from adult CI users and controls and measured reliable subcortical and cortical SS-EPs in the control group. Independent component analysis (ICA) was used to remove CI stimulation artifacts, yet the subcortical responses of several CI users were heavily contaminated by these artifacts. Consequently, only cortical SS-EPs were compared between groups, which were found to be larger in the controls. The lower cortical SS-EP amplitude in CI users might indicate a reduction in neural synchrony evoked by the modulation rate of the auditory input across different neural assemblies in the auditory pathway. The brain topographies of cortical auditory SS-EPs, the time course of cortical responses, and the reconstructed cortical maps were highly similar between groups, confirming their neural origin and the possibility of obtaining such responses in CI recipients. As for subcortical SS-EPs, our results highlight a need for sophisticated denoising algorithms to pinpoint and remove artifactual components from the biological response.
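The ICA-based artifact removal described in this abstract can be sketched with scikit-learn's FastICA. This is a generic illustration, not the authors' method: `remove_artifact_component`, the use of an artifact reference trace, and the pick-by-correlation heuristic are all assumptions made for the example.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_component(mixed, artifact_ref):
    """Zero out the ICA component most correlated with an artifact reference.

    mixed:        (n_channels, n_samples) recordings
    artifact_ref: (n_samples,) reference trace of the artifact
                  (e.g., an estimate of the CI stimulation waveform)
    """
    ica = FastICA(n_components=mixed.shape[0], random_state=0)
    sources = ica.fit_transform(mixed.T)          # (n_samples, n_components)
    # correlate each independent component with the artifact reference
    corrs = [abs(np.corrcoef(sources[:, k], artifact_ref)[0, 1])
             for k in range(sources.shape[1])]
    sources[:, int(np.argmax(corrs))] = 0.0       # drop the artifact component
    return ica.inverse_transform(sources).T       # project back to channel space
```

As the abstract notes, this works only when the artifact loads onto a separable component; heavily contaminated subcortical responses can defeat the decomposition.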
Affiliation(s)
- Razieh Alemi: Faculty of Medicine, Department of Otolaryngology, McGill University, Montreal, QC, Canada; Centre for Research on Brain, Language & Music (CRBLM), Montreal, QC, Canada; International Laboratory for Brain, Music & Sound Research (BRAMS), Montreal, QC, Canada
- Sylvie Nozaradan: Institute of Neuroscience (IONS), Université Catholique de Louvain (UCL), Ottignies-Louvain-la-Neuve, Belgium
- Alexandre Lehmann: Faculty of Medicine, Department of Otolaryngology, McGill University, Montreal, QC, Canada; Centre for Research on Brain, Language & Music (CRBLM), Montreal, QC, Canada; International Laboratory for Brain, Music & Sound Research (BRAMS), Montreal, QC, Canada
4. Price CN, Bidelman GM. Attention reinforces human corticofugal system to aid speech perception in noise. Neuroimage 2021; 235:118014. PMID: 33794356; PMCID: PMC8274701; DOI: 10.1016/j.neuroimage.2021.118014.
Abstract
Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
Affiliation(s)
- Caitlin N Price: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
5. Neural generators of the frequency-following response elicited to stimuli of low and high frequency: A magnetoencephalographic (MEG) study. Neuroimage 2021; 231:117866. PMID: 33592244; DOI: 10.1016/j.neuroimage.2021.117866.
Abstract
The frequency-following response (FFR) to periodic complex sounds has gained recent interest in auditory cognitive neuroscience as it captures with great fidelity the tracking accuracy of the periodic sound features in the ascending auditory system. Seminal studies suggested the FFR as a correlate of subcortical sound encoding, yet recent studies aiming to locate its sources challenged this assumption, demonstrating that FFR receives some contribution from the auditory cortex. Based on frequency-specific phase-locking capabilities along the auditory hierarchy, we hypothesized that FFRs to higher frequencies would receive less cortical contribution than those to lower frequencies, hence supporting a major subcortical involvement for these high frequency sounds. Here, we used a magnetoencephalographic (MEG) approach to trace the neural sources of the FFR elicited in healthy adults (N = 19) to low (89 Hz) and high (333 Hz) frequency sounds. FFRs elicited to the high and low frequency sounds were clearly observable on MEG and comparable to those obtained in simultaneous electroencephalographic recordings. Distributed source modeling analyses revealed midbrain, thalamic, and cortical contributions to FFR, arranged in frequency-specific configurations. Our results showed that the main contribution to the high-frequency sound FFR originated in the inferior colliculus and the medial geniculate body of the thalamus, with no significant cortical contribution. In contrast, the low-frequency sound FFR had a major contribution located in the auditory cortices, and also received contributions originating in the midbrain and thalamic structures. These findings support the multiple generator hypothesis of the FFR and are relevant for our understanding of the neural encoding of sounds along the auditory hierarchy, suggesting a hierarchical organization of periodicity encoding.
6. Asilador A, Llano DA. Top-Down Inference in the Auditory System: Potential Roles for Corticofugal Projections. Front Neural Circuits 2021; 14:615259. PMID: 33551756; PMCID: PMC7862336; DOI: 10.3389/fncir.2020.615259.
Abstract
It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the cortical sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models to explain top-down facilitation are focused on intracortical connections, and consequently, subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily been focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms to facilitate perception of complex sounds and that top-down modulation at early (i.e., subcortical) stages of processing complements modulation at later (i.e., cortical) stages of processing. Finally, we suggest experimental approaches for future studies on this topic.
Affiliation(s)
- Alexander Asilador: Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States; Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Daniel A. Llano: Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States; Beckman Institute for Advanced Science and Technology, Urbana, IL, United States; Molecular and Integrative Physiology, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
7. Combination of absolute pitch and tone language experience enhances lexical tone perception. Sci Rep 2021; 11:1485. PMID: 33452284; PMCID: PMC7811026; DOI: 10.1038/s41598-020-80260-x.
Abstract
Absolute pitch (AP), a unique ability to name or produce pitch without any reference, is known to be influenced by genetic and cultural factors. AP and tone language experience are both known to promote lexical tone perception. However, the effects of the combination of AP and tone language experience on lexical tone perception are currently not known. In the current study, using behavioral (Categorical Perception) and electrophysiological (Frequency Following Response) measures, we investigated the effect of the combination of AP and tone language experience on lexical tone perception. We found that the Cantonese speakers with AP outperformed the Cantonese speakers without AP on Categorical Perception and Frequency Following Responses of lexical tones, suggesting an additive effect of combining AP and tone language experience. These findings suggest a role for basic, pre-attentive sensory auditory processes in pitch encoding in AP. Further, they imply a common mechanism underlying pitch encoding in AP and tone language perception.
8. Meter enhances the subcortical processing of speech sounds at a strong beat. Sci Rep 2020; 10:15973. PMID: 32994430; PMCID: PMC7525485; DOI: 10.1038/s41598-020-72714-z.
Abstract
The temporal structure of sound such as in music and speech increases the efficiency of auditory processing by providing listeners with a predictable context. Musical meter is a good example of a sound structure that is temporally organized in a hierarchical manner, with recent studies showing that meter optimizes neural processing, particularly for sounds located at a higher metrical position or strong beat. Whereas enhanced cortical auditory processing at times of high metric strength has been studied, there is to date no direct evidence showing metrical modulation of subcortical processing. In this work, we examined the effect of meter on the subcortical encoding of sounds by measuring human auditory frequency-following responses to speech presented at four different metrical positions. Results show that neural encoding of the fundamental frequency of the vowel was enhanced at the strong beat, and also that the neural consistency of the vowel was the highest at the strong beat. When comparing musicians to non-musicians, musicians were found, at the strong beat, to selectively enhance the behaviorally relevant component of the speech sound, namely the formant frequency of the transient part. Our findings indicate that the meter of sound influences subcortical processing, and this metrical modulation differs depending on musical expertise.
9. Todd J, Frost JD, Yeark M, Paton B. Context is everything: How context shapes modulations of responses to unattended sound. Hear Res 2020; 399:107975. PMID: 32370880; DOI: 10.1016/j.heares.2020.107975.
Abstract
The concept of perceptual inferences taking place over multiple timescales simultaneously raises questions about how the brain can balance the demands of remaining sensitive to local rarity while utilising more global longer-term predictability to modulate cortical responses. In the present study, auditory evoked potentials were recorded to four presentations of the same sound sequence containing predictable structure on local (milliseconds to seconds) and more global (many minutes) timescales. The results from 33 participants are used to demonstrate that predictions about both local (internal predictive models) and global (meta-models that define expected precisions associated with familiar internal model states) regularities are formed. The study exposes more local, context-based modulations of the P1 but more global, order-based modulations of the auditory evoked N2 component. The results are discussed in terms of theoretical links advocating that uncertainty at multiple timescales could lead to differential component modulations, and the importance of considering the broader learning context in auditory evoked potential studies.
Affiliation(s)
- Juanita Todd: School of Psychology, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia
- Jade D Frost: School of Psychology, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia
- Mattsen Yeark: School of Psychology, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia
- Bryan Paton: School of Psychology, University of Newcastle, University Drive, Callaghan, NSW 2308, Australia
10. Krizman J, Kraus N. Analyzing the FFR: A tutorial for decoding the richness of auditory function. Hear Res 2019; 382:107779. PMID: 31505395; PMCID: PMC6778514; DOI: 10.1016/j.heares.2019.107779.
Abstract
The frequency-following response, or FFR, is a neurophysiological response to sound that precisely reflects the ongoing dynamics of sound. It can be used to study the integrity and malleability of neural encoding of sound across the lifespan. Sound processing in the brain can be impaired with pathology and enhanced through expertise. The FFR can index linguistic deprivation, autism, concussion, and reading impairment, and can reflect the impact of enrichment with short-term training, bilingualism, and musicianship. Because of this vast potential, interest in the FFR has grown considerably in the decade since our first tutorial. Despite its widespread adoption, there remains a gap in the current knowledge of its analytical potential. This tutorial aims to bridge this gap. Using recording methods we have employed for the last 20+ years, we have explored many analysis strategies. In this tutorial, we review what we have learned and what we think constitutes the most effective ways of capturing what the FFR can tell us. The tutorial covers FFR components (timing, fundamental frequency, harmonics) and factors that influence FFR (stimulus polarity, response averaging, and stimulus presentation/recording jitter). The spotlight is on FFR analyses, including ways to analyze FFR timing (peaks, autocorrelation, phase consistency, cross-phaseogram), magnitude (RMS, SNR, FFT), and fidelity (stimulus-response correlations, response-to-response correlations, and response consistency). The wealth of information contained within an FFR recording brings us closer to understanding how the brain reconstructs our sonic world.
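One of the FFT-based magnitude measures this tutorial covers, spectral SNR at the stimulus fundamental, can be sketched as follows. This is an illustrative variant, not the tutorial's exact procedure; `ffr_spectral_snr` and the noise-band bounds are assumptions chosen for the example.

```python
import numpy as np

def ffr_spectral_snr(ffr, fs, f0, noise_halfwidth=10.0, exclude=2.0):
    """Spectral SNR of an averaged FFR at the fundamental frequency.

    ffr: averaged FFR waveform (1-D array)
    fs:  sampling rate in Hz
    f0:  stimulus fundamental frequency in Hz

    SNR is the spectral amplitude at f0 divided by the mean amplitude
    of neighboring bins, excluding a small guard band around f0.
    """
    spec = np.abs(np.fft.rfft(ffr)) / len(ffr)
    freqs = np.fft.rfftfreq(len(ffr), 1 / fs)
    signal = spec[np.argmin(np.abs(freqs - f0))]
    # neighboring bins: within noise_halfwidth of f0 but outside the guard band
    noise_band = (np.abs(freqs - f0) > exclude) & (np.abs(freqs - f0) <= noise_halfwidth)
    return signal / spec[noise_band].mean()
```

An SNR well above 1 indicates energy phase-locked at the fundamental rather than broadband noise; an SNR near 1 is what pure noise yields.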
Affiliation(s)
- Jennifer Krizman: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA. https://www.brainvolts.northwestern.edu
- Nina Kraus: Auditory Neuroscience Laboratory, Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL 60208, USA; Department of Neurobiology, Northwestern University, Evanston, IL 60208, USA
11. Bidelman GM, Price CN, Shen D, Arnott SR, Alain C. Afferent-efferent connectivity between auditory brainstem and cortex accounts for poorer speech-in-noise comprehension in older adults. Hear Res 2019; 382:107795. PMID: 31479953; DOI: 10.1016/j.heares.2019.107795.
Abstract
Speech-in-noise (SIN) comprehension deficits in older adults have been linked to changes in both subcortical and cortical auditory evoked responses. However, older adults' difficulty understanding SIN may also be related to an imbalance in signal transmission (i.e., functional connectivity) between brainstem and auditory cortices. By modeling high-density scalp recordings of speech-evoked responses with sources in brainstem (BS) and bilateral primary auditory cortices (PAC), we show that beyond attenuating neural activity, hearing loss in older adults compromises the transmission of speech information between subcortical and early cortical hubs of the speech network. We found that the strength of afferent BS→PAC neural signaling (but not the reverse efferent flow; PAC→BS) varied with mild declines in hearing acuity and this "bottom-up" functional connectivity robustly predicted older adults' performance in a SIN identification task. Connectivity was also a better predictor of SIN processing than unitary subcortical or cortical responses alone. Our neuroimaging findings suggest that in older adults (i) mild hearing loss differentially reduces neural output at several stages of auditory processing (PAC > BS), (ii) subcortical-cortical connectivity is more sensitive to peripheral hearing loss than top-down (cortical-subcortical) control, and (iii) reduced functional connectivity in afferent auditory pathways plays a significant role in SIN comprehension problems.
Affiliation(s)
- Gavin M Bidelman: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
- Caitlin N Price: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Dawei Shen: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Stephen R Arnott: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Claude Alain: Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
12. Carbajal GV, Malmierca MS. The Neuronal Basis of Predictive Coding Along the Auditory Pathway: From the Subcortical Roots to Cortical Deviance Detection. Trends Hear 2018; 22:2331216518784822. PMID: 30022729; PMCID: PMC6053868; DOI: 10.1177/2331216518784822.
Abstract
In this review, we attempt to integrate the empirical evidence regarding stimulus-specific adaptation (SSA) and mismatch negativity (MMN) under a predictive coding perspective (also known as the Bayesian or hierarchical-inference model). We propose a renewed methodology for SSA study, which enables a further decomposition of deviance detection into repetition suppression and prediction error, thanks to the use of two controls previously introduced in MMN research: the many-standards and the cascade sequences. Focusing on data obtained with cellular recordings, we explain how deviance detection and prediction error are generated throughout hierarchical levels of processing, following two vectors of increasing computational complexity and abstraction along the auditory neuraxis: from subcortical toward cortical stations and from lemniscal toward nonlemniscal divisions. Then, we delve into the particular characteristics and contributions of subcortical and cortical structures to this generative mechanism of hierarchical inference, analyzing what is known about the role of neuromodulation and local microcircuitry in the emergence of mismatch signals. Finally, we describe how SSA and MMN occur within a similar time frame and at similar cortical locations, and how both are affected by the manipulation of N-methyl-D-aspartate receptors. We conclude that there is enough empirical evidence to consider SSA and MMN, respectively, as the microscopic and macroscopic manifestations of the same physiological mechanism of deviance detection in the auditory cortex. Hence, the development of a common theoretical framework for SSA and MMN is all the more advisable for future studies. In this regard, we suggest a shared nomenclature based on the predictive coding interpretation of deviance detection.
Affiliation(s)
- Guillermo V Carbajal
- 1 Auditory Neuroscience Laboratory (Lab 1), Institute of Neuroscience of Castile and León, University of Salamanca, Salamanca, Spain.,2 Salamanca Institute for Biomedical Research, Spain
| | - Manuel S Malmierca
- 1 Auditory Neuroscience Laboratory (Lab 1), Institute of Neuroscience of Castile and León, University of Salamanca, Salamanca, Spain.,2 Salamanca Institute for Biomedical Research, Spain.,3 Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Spain
| |
13. Holt LL, Tierney AT, Guerra G, Laffere A, Dick F. Dimension-selective attention as a possible driver of dynamic, context-dependent re-weighting in speech processing. Hear Res 2018; 366:50-64. PMID: 30131109; PMCID: PMC6107307; DOI: 10.1016/j.heares.2018.06.014.
Abstract
The contribution of acoustic dimensions to an auditory percept is dynamically adjusted and reweighted based on prior experience about how informative these dimensions are across the long-term and short-term environment. This is especially evident in speech perception, where listeners differentially weight information across multiple acoustic dimensions, and use this information selectively to update expectations about future sounds. The dynamic and selective adjustment of how acoustic input dimensions contribute to perception has made it tempting to conceive of this as a form of non-spatial auditory selective attention. Here, we review several human speech perception phenomena that might be consistent with auditory selective attention although, as of yet, the literature does not definitively support a mechanistic tie. We relate these human perceptual phenomena to illustrative nonhuman animal neurobiological findings that offer informative guideposts in how to test mechanistic connections. We next present a novel empirical approach that can serve as a methodological bridge from human research to animal neurobiological studies. Finally, we describe four preliminary results that demonstrate its utility in advancing understanding of human non-spatial dimension-based auditory selective attention.
Affiliation(s)
- Lori L Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, 15213, USA; Center for the Neural Basis of Cognition, Carnegie Mellon University, Pittsburgh, PA, 15213, USA.
- Adam T Tierney
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Giada Guerra
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK
- Aeron Laffere
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK
- Frederic Dick
- Department of Psychological Sciences, Birkbeck College, University of London, London, WC1E 7HX, UK; Centre for Brain and Cognitive Development, Birkbeck College, London, WC1E 7HX, UK; Department of Experimental Psychology, University College London, London, WC1H 0AP, UK
14
Daikoku T. Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy, and Uncertainty. Brain Sci 2018; 8:E114. [PMID: 29921829 PMCID: PMC6025354 DOI: 10.3390/brainsci8060114] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2018] [Revised: 06/14/2018] [Accepted: 06/18/2018] [Indexed: 01/07/2023] Open
Abstract
Statistical learning (SL) is a method of learning based on the transitional probabilities embedded in sequential phenomena such as music and language. It has been considered an implicit and domain-general mechanism that is innate in the human brain and that functions independently of the intention to learn and awareness of what has been learned. SL is an interdisciplinary notion that incorporates information technology, artificial intelligence, musicology, and linguistics, as well as psychology and neuroscience. A body of recent studies suggests that SL can be reflected in neurophysiological responses based on the framework of information theory. This paper reviews a range of work on SL in adults and children that suggests overlapping and independent neural correlates in music and language, and that points to impairments of SL. Furthermore, this article discusses the relationships between the order of transitional probabilities (TPs) (i.e., the hierarchy of local statistics) and entropy (i.e., global statistics) with regard to SL strategies in the human brain; argues for the importance of information-theoretic approaches to understanding domain-general, higher-order, and global SL covering both real-world music and language; and proposes promising approaches for applications in therapy and pedagogy from various perspectives of psychology, neuroscience, computational studies, musicology, and linguistics.
Affiliation(s)
- Tatsuya Daikoku
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, 04103 Leipzig, Germany.
15
Dhatri SD, Gnanateja GN, Kumar UA, Maruthy S. Gender-bias in the sensory representation of infant cry. Neurosci Lett 2018; 678:138-143. [DOI: 10.1016/j.neulet.2018.04.043] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2017] [Revised: 04/09/2018] [Accepted: 04/23/2018] [Indexed: 10/17/2022]
16
Ayala YA, Lehmann A, Merchant H. Monkeys share the neurophysiological basis for encoding sound periodicities captured by the frequency-following response with humans. Sci Rep 2017; 7:16687. [PMID: 29192170 PMCID: PMC5709359 DOI: 10.1038/s41598-017-16774-8] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2017] [Accepted: 11/17/2017] [Indexed: 11/09/2022] Open
Abstract
The extraction and encoding of acoustical temporal regularities are fundamental for human cognitive auditory abilities such as speech or beat entrainment. Because comparing the neural sensitivity to temporal regularities between humans and animals is fundamental for relating non-invasive measures of auditory processing to their neuronal basis, here we compared the neural representation of auditory periodicities between human and non-human primates by measuring the scalp-recorded frequency-following response (FFR). We found that rhesus monkeys can resolve the spectrotemporal structure of periodic stimuli to a similar extent as humans, exhibiting a homologous FFR potential to the speech syllable /da/. The FFR in both species is robust and phase-locked to the fundamental frequency of the sound, reflecting effective neural processing of the fast periodic information of subsyllabic cues. Our results thus reveal a conserved neural ability to track acoustical regularities within the primate order. These findings open the possibility of studying the neurophysiology of complex sound temporal processing in macaque subcortical and cortical areas, as well as the associated experience-dependent plasticity across the auditory pathway in behaving monkeys.
Affiliation(s)
- Yaneri A Ayala
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro. 76230, Mexico.
- Alexandre Lehmann
- Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Center for Research on Brain, Language and Music (CRBLM), Pavillon 1420, Montreal, QC H3C 3J7, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
- Hugo Merchant
- Instituto de Neurobiología, UNAM, Campus Juriquilla, Boulevard Juriquilla No. 3001, Querétaro, Qro. 76230, Mexico.
17
Differences between auditory frequency-following responses and onset responses: Intracranial evidence from rat inferior colliculus. Hear Res 2017; 357:25-32. [PMID: 29156225 DOI: 10.1016/j.heares.2017.10.014] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/18/2017] [Revised: 10/14/2017] [Accepted: 10/30/2017] [Indexed: 11/22/2022]
Abstract
A periodic sound, such as a pure tone, evokes both transient onset field-potential responses and sustained frequency-following responses (FFRs) in the auditory midbrain, the inferior colliculus (IC). It is not clear whether the two types of responses are based on the same or different neural substrates. Although it has been assumed that FFRs are based on phase locking to the periodic sound, direct evidence of a relationship between FFR amplitude and phase-locking strength is still lacking. Using intracranial recordings from the rat central nucleus of the inferior colliculus (ICC), this study examined whether FFRs and onset responses differ in sensitivity to pure-tone frequency and/or response-stimulus correlation when a tone stimulus is presented either monaurally or binaurally. In particular, it examined whether FFR amplitude is correlated with the strength of phase locking. The results showed that as tone-stimulus frequency increased from 1 to 2 kHz, the FFR amplitude decreased but the onset-response amplitude increased. Moreover, the FFR amplitude, but not the onset-response amplitude, was significantly correlated with the phase coherence between tone-evoked potentials and the tone stimulus. Finally, the FFR amplitude was negatively correlated with the onset-response amplitude. These results indicate that periodic-sound-evoked FFRs are based on the phase-locking activity of sustained-response neurons, whereas onset responses are based on the transient activity of onset-response neurons, suggesting that FFRs and onset responses serve different functions.
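The phase-coherence measure invoked in this abstract is commonly quantified as a phase-locking value: the magnitude of the average unit phasor of trial-wise response phases at the stimulus frequency. A minimal sketch with entirely synthetic data (all signal parameters here are illustrative, not from the study):

```python
import numpy as np

def phase_locking_value(responses, stim_freq, fs):
    """Phase coherence between trial responses and a stimulus frequency.

    For each trial, take the Fourier phase at the stimulus frequency,
    then average the corresponding unit phasors across trials; a
    magnitude near 1 means the response phase is consistent
    (phase-locked) across trials, near 0 means it is random.
    """
    responses = np.asarray(responses)
    n = responses.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - stim_freq))
    phases = np.angle(np.fft.rfft(responses, axis=1)[:, bin_idx])
    return np.abs(np.mean(np.exp(1j * phases)))

# Synthetic demo: 50 trials of a 1 kHz "FFR-like" response with small
# trial-to-trial phase jitter (phase-locked) vs. uniformly random phase.
fs, dur, f0 = 8000, 0.1, 1000.0
t = np.arange(int(fs * dur)) / fs
rng = np.random.default_rng(0)
locked = np.array([np.sin(2 * np.pi * f0 * t + rng.normal(0, 0.2))
                   for _ in range(50)])
random_phase = np.array([np.sin(2 * np.pi * f0 * t + rng.uniform(0, 2 * np.pi))
                         for _ in range(50)])
print(phase_locking_value(locked, f0, fs))        # close to 1
print(phase_locking_value(random_phase, f0, fs))  # close to 0
```

On this construction, a sustained response that follows the tone cycle-by-cycle yields a high value, while transient onset activity with no consistent phase at the tone frequency does not, which is the contrast the study's correlation analysis exploits.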
18
Involvement of the Serotonin Transporter Gene in Accurate Subcortical Speech Encoding. J Neurosci 2017; 36:10782-10790. [PMID: 27798133 DOI: 10.1523/jneurosci.1595-16.2016] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2016] [Accepted: 08/27/2016] [Indexed: 11/21/2022] Open
Abstract
A flourishing line of evidence has highlighted the encoding of speech sounds in the subcortical auditory system as being shaped by acoustic, linguistic, and musical experience and training. And while the heritability of auditory speech as well as nonspeech processing has been suggested, the genetic determinants of subcortical speech processing have not yet been uncovered. Here, we postulated that the serotonin transporter-linked polymorphic region (5-HTTLPR), a common functional polymorphism located in the promoter region of the serotonin transporter gene (SLC6A4), is implicated in speech encoding in the human subcortical auditory pathway. Serotonin has been shown to be essential for modulating the brain response to sound both cortically and subcortically, yet the genetic factors regulating this modulation for speech sounds have not been disclosed. We recorded the frequency-following response, a biomarker of the neural tracking of speech sounds in the subcortical auditory pathway, together with cortical evoked potentials, in 58 participants in response to the syllable /ba/, which was presented >2000 times. Participants with low serotonin transporter expression had higher signal-to-noise ratios as well as a higher pitch strength representation of the periodic part of the syllable than participants with medium to high expression, possibly by tuning synaptic activity to the stimulus features and hence allowing a more efficient suppression of noise. These results implicate the 5-HTTLPR in subcortical auditory speech encoding and add an important, genetically determined layer to the factors shaping the human subcortical response to speech sounds. SIGNIFICANCE STATEMENT The accurate encoding of speech sounds in the subcortical auditory nervous system is of paramount relevance for human communication, and it has been shown to be altered in different disorders of speech and auditory processing. Importantly, this encoding is plastic and can therefore be enhanced by language and music experience. Whether genetic factors play a role in speech encoding at the subcortical level remains unresolved. Here we show that a common polymorphism in the serotonin transporter gene relates to an accurate and robust neural tracking of speech stimuli in the subcortical auditory pathway. This indicates that serotonin transporter expression, eventually in combination with other polymorphisms, delimits the extent to which lifetime experience shapes the subcortical encoding of speech.
19
Responses to Predictable versus Random Temporally Complex Stimuli from Single Units in Auditory Thalamus: Impact of Aging and Anesthesia. J Neurosci 2017; 36:10696-10706. [PMID: 27733619 DOI: 10.1523/jneurosci.1454-16.2016] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2016] [Accepted: 08/27/2016] [Indexed: 12/12/2022] Open
Abstract
Human aging studies suggest that an increased use of top-down, knowledge-based resources compensates for degraded upstream acoustic information to accurately identify important temporally rich signals. Sinusoidal amplitude-modulated (SAM) stimuli have been used to mimic the fast-changing temporal features in speech and species-specific vocalizations. Single units were recorded from the auditory thalamus [medial geniculate body (MGB)] of young awake, aged awake, young anesthetized, and aged anesthetized rats. SAM stimuli were modulated between 2 and 1024 Hz, with the modulation frequency (fm) changed randomly (RAN) across trials or sequentially (SEQ) after several repeated trials. Units were found to be RAN-preferring, SEQ-preferring, or nonselective based on total firing rate. Significant anesthesia and age effects were found. The majority (86%) of young anesthetized units preferred RAN SAM stimuli; significantly fewer young awake units (51%, p < 0.0001) preferred RAN SAM signals, with 16% preferring SEQ SAM. Compared with young awake units, there was a significant increase in aged awake units preferring SEQ SAM (30%, p < 0.05). We examined RAN versus SEQ differences across fms by measuring selective fm areas under the rate modulation transfer function curve. The largest age-related differences in awake animals were found for mid-to-high fms in MGB units, with young units preferring RAN SAM while aged units showed a greater preference for SEQ-presented SAM. Together, these findings suggest that aged MGB units/animals employ increased top-down-mediated stimulus context to enhance processing of "expected" temporally rich stimuli, especially at more challenging higher fms. SIGNIFICANCE STATEMENT Older individuals compensate for impaired ascending acoustic information by increasing their use of cortical cognitive and attentional resources. The interplay between ascending and descending influences in the thalamus may serve to enhance the salience of speech signals that are degraded as they ascend to the cortex. The present findings demonstrate that medial geniculate body units from awake rats show an age-related preference for predictable modulated signals relative to randomly presented signals, especially at higher, more challenging modulation frequencies. Conversely, units from anesthetized animals, with little top-down influence, strongly preferred randomly presented modulated sequences. These results suggest a neuronal substrate for an age-related increase in experience- and attention-based influences on the processing of temporally complex auditory information in the auditory thalamus.
20
Maruthy S, Kumar UA, Gnanateja GN. Functional Interplay Between the Putative Measures of Rostral and Caudal Efferent Regulation of Speech Perception in Noise. J Assoc Res Otolaryngol 2017; 18:635-648. [PMID: 28447225 DOI: 10.1007/s10162-017-0623-y] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2016] [Accepted: 03/22/2017] [Indexed: 01/23/2023] Open
Abstract
Efferent modulation has been demonstrated to be very important for speech perception, especially in the presence of noise. We examined the functional relationship between two efferent systems, the rostral and caudal efferent pathways, and their individual influences on speech perception in noise. Earlier studies have shown that these two efferent mechanisms are correlated with speech perception in noise. However, previously, these mechanisms were studied in isolation, and their functional relationship with each other was not investigated. We used a correlational design to study the relationship, if any, between these two mechanisms in young and old normal-hearing individuals. We recorded context-dependent brainstem encoding as an index of rostral efferent function and contralateral suppression of otoacoustic emissions as an index of caudal efferent function in groups with good and poor speech perception in noise. These efferent mechanisms were analysed for their relationship with each other and with speech perception in noise. We found that the two efferent mechanisms did not show any functional relationship. Interestingly, both efferent mechanisms correlated with speech perception in noise, and both even emerged as significant predictors. Based on the data, we posit that the two efferent mechanisms function relatively independently but with the common goal of fine-tuning the afferent input and refining auditory perception in degraded listening conditions.
Affiliation(s)
- Sandeep Maruthy
- Electrophysiology Laboratory, Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysore, Karnataka, IN-570006, India
- U Ajith Kumar
- Electrophysiology Laboratory, Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysore, Karnataka, IN-570006, India
- G Nike Gnanateja
- Electrophysiology Laboratory, Department of Audiology, All India Institute of Speech and Hearing, Manasagangothri, Mysore, Karnataka, IN-570006, India.
21
Slugocki C, Bosnyak D, Trainor LJ. Simultaneously-evoked auditory potentials (SEAP): A new method for concurrent measurement of cortical and subcortical auditory-evoked activity. Hear Res 2017; 345:30-42. [DOI: 10.1016/j.heares.2016.12.014] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Revised: 12/07/2016] [Accepted: 12/16/2016] [Indexed: 10/20/2022]
22
The Role of the Auditory Brainstem in Regularity Encoding and Deviance Detection. THE FREQUENCY-FOLLOWING RESPONSE 2017. [DOI: 10.1007/978-3-319-47944-6_5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
23
Maggu AR, Liu F, Antoniou M, Wong PCM. Neural Correlates of Indicators of Sound Change in Cantonese: Evidence from Cortical and Subcortical Processes. Front Hum Neurosci 2016; 10:652. [PMID: 28066218 PMCID: PMC5179532 DOI: 10.3389/fnhum.2016.00652] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/19/2016] [Accepted: 12/08/2016] [Indexed: 12/05/2022] Open
Abstract
Across time, languages undergo changes in phonetic, syntactic, and semantic dimensions. Social, cognitive, and cultural factors contribute to sound change, a phenomenon in which the phonetics of a language change over time. Individuals who misperceive and produce speech in a slightly divergent manner (called innovators) contribute to variability in the society, eventually leading to sound change. However, the cause of variability in these individuals is still unknown. In this study, we examined whether such misperceptions are represented in the neural processes of the auditory system. We investigated behavioral, subcortical (via the FFR), and cortical (via the P300) manifestations of sound-change processing in Cantonese, a Chinese language in which several lexical tones are merging. Across the merging categories, we observed a similar gradation of speech perception abilities in both behavior and the brain (subcortical and cortical processes). Further, we found that behavioral evidence of tone merging correlated with subjects' encoding at the subcortical and cortical levels. These findings indicate that tone-merger categories, which are indicators of sound change in Cantonese, are represented neurophysiologically with high fidelity. Based on our results, we speculate that innovators encode speech in a slightly deviant neurophysiological manner and thus produce speech divergently, which eventually spreads across the community and contributes to sound change.
Affiliation(s)
- Akshay R Maggu
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China
- Fang Liu
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
- Mark Antoniou
- The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, NSW, Australia
- Patrick C M Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Hong Kong, China; Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong, China; The Chinese University of Hong Kong-Utrecht University Joint Center for Language, Mind and Brain, Hong Kong, China
24
Gorina-Careta N, Zarnowiec K, Costa-Faidella J, Escera C. Timing predictability enhances regularity encoding in the human subcortical auditory pathway. Sci Rep 2016; 6:37405. [PMID: 27853313 PMCID: PMC5112601 DOI: 10.1038/srep37405] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2016] [Accepted: 10/27/2016] [Indexed: 11/24/2022] Open
Abstract
The encoding of temporal regularities is a critical property of the auditory system, as short-term neural representations of environmental statistics serve auditory object formation and the detection of potentially relevant novel stimuli. A putative neural mechanism underlying regularity encoding is repetition suppression, the reduction of neural activity to repeated stimulation. Although repetitive stimulation per se has been shown to reduce auditory neural activity at animal cortical and subcortical levels and in the human cerebral cortex, other factors such as timing may influence the encoding of statistical regularities. This study set out to investigate whether temporal predictability in the ongoing auditory input modulates repetition suppression in subcortical stages of the auditory processing hierarchy. Human auditory frequency–following responses (FFR) were recorded to a repeating consonant–vowel stimulus (/wa/) delivered in temporally predictable and unpredictable conditions. FFR amplitude was attenuated by repetition independently of temporal predictability, yet we observed an accentuated suppression when the incoming stimulation was temporally predictable. These findings support the view that regularity encoding spans the auditory hierarchy and point to temporal predictability as a modulatory factor of regularity encoding in early stages of the auditory pathway.
Affiliation(s)
- Natàlia Gorina-Careta
- Institute of Neurosciences, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain; Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Santa Rosa 39-57, 08950, Esplugues de Llobregat, Catalonia, Spain
- Katarzyna Zarnowiec
- Institute of Neurosciences, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain; Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain
- Jordi Costa-Faidella
- Institute of Neurosciences, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain; Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain
- Carles Escera
- Institute of Neurosciences, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain; Brainlab - Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, P. Vall d'Hebron 171, 08035, Barcelona, Catalonia, Spain; Institut de Recerca Sant Joan de Déu, Santa Rosa 39-57, 08950, Esplugues de Llobregat, Catalonia, Spain
25
Lehmann A, Arias DJ, Schönwiesner M. Tracing the neural basis of auditory entrainment. Neuroscience 2016; 337:306-314. [PMID: 27667358 DOI: 10.1016/j.neuroscience.2016.09.011] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2016] [Revised: 08/17/2016] [Accepted: 09/08/2016] [Indexed: 11/25/2022]
Abstract
Neurons in the auditory cortex synchronize their responses to temporal regularities in sound input. This coupling or "entrainment" is thought to facilitate beat extraction and rhythm perception in temporally structured sounds, such as music. As a consequence of such entrainment, the auditory cortex responds to an omitted (silent) sound in a regular sequence. Although previous studies suggest that the auditory brainstem frequency-following response (FFR) exhibits some of the beat-related effects found in the cortex, it is unknown whether omissions of sounds evoke a brainstem response. We simultaneously recorded cortical and brainstem responses to isochronous and irregular sequences of the consonant-vowel syllable /da/ that contained sporadic omissions. The auditory cortex responded strongly to omissions, but we found no evidence of evoked responses to omitted stimuli from the auditory brainstem. However, auditory brainstem responses in the isochronous sound sequence were more consistent across trials than in the irregular sequence. These results indicate that the auditory brainstem faithfully encodes the short-term acoustic properties of a stimulus and is sensitive to sequence regularity, but does not entrain to isochronous sequences sufficiently to generate overt omission responses, even for sequences that evoke such responses in the cortex. These findings add to our understanding of the processing of sound regularities, which is an important aspect of human cognitive abilities such as rhythm, music, and speech perception.
Affiliation(s)
- Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada; Department of Otolaryngology Head & Neck Surgery, McGill University, Montreal, QC, Canada
- Diana Jimena Arias
- University of Quebec at Montreal, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada.
- Marc Schönwiesner
- International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
26
Enhanced brainstem and cortical encoding of sound during synchronized movement. Neuroimage 2016; 142:231-240. [PMID: 27397623 DOI: 10.1016/j.neuroimage.2016.07.015] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2016] [Revised: 07/05/2016] [Accepted: 07/06/2016] [Indexed: 01/23/2023] Open
Abstract
Movement to a steady beat has been widely studied as a model of the alignment of motor outputs to sensory inputs. However, how the encoding of sensory inputs is shaped during synchronized movement along the sensory pathway remains unknown. To investigate this, we simultaneously recorded brainstem and cortical electroencephalographic activity while participants listened to periodic amplitude-modulated tones. Participants listened either without moving or while tapping in sync on every second beat. Cortical responses were identified at the envelope modulation rate (beat frequency), whereas brainstem responses were identified at the frequencies of the partials of the chord and at their modulation by the beat frequency (sidebands). During sensorimotor synchronization, cortical responses at the beat frequency were larger than during passive listening. Importantly, brainstem responses were also enhanced, with a selective amplification of the sidebands, in particular at the lower-pitched tone of the chord, and no significant correlation with electromyographic measures at the tapping frequency. These findings provide the first evidence of an online gain in the cortical and subcortical encoding of sounds during synchronized movement, selective to behavior-relevant sound features. Moreover, the frequency-tagging method used to isolate concurrent brainstem and cortical activities even during actual movements appears promising for revealing coordinated processes along the human auditory pathway.
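The frequency-tagging readout described in this abstract amounts to measuring spectral amplitude at known frequencies: the beat frequency for the cortical response and the carrier ± beat sidebands for the brainstem response. A minimal sketch on a synthetic amplitude-modulated tone (all frequencies and durations here are illustrative choices, not the study's stimulus parameters):

```python
import numpy as np

def spectral_amplitude(signal, fs, target_freq):
    """Amplitude at one frequency bin of the signal's spectrum (frequency tagging)."""
    n = len(signal)
    spectrum = 2.0 * np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - target_freq))]

# Synthetic stimulus: carrier f_c amplitude-modulated at beat rate f_b.
# The modulation places energy at the sidebands f_c - f_b and f_c + f_b,
# which is where a brainstem response following the sound would be read
# out; a cortical envelope response would be read out at f_b itself.
fs, dur = 2000, 10.0
f_c, f_b = 440.0, 2.0
t = np.arange(int(fs * dur)) / fs
am_tone = (1 + np.cos(2 * np.pi * f_b * t)) * np.sin(2 * np.pi * f_c * t)

# Carrier amplitude is 1; each sideband carries amplitude 0.5, since
# (1 + cos b)·sin c = sin c + 0.5·sin(c+b) + 0.5·sin(c-b).
for f in (f_c, f_c - f_b, f_c + f_b):
    print(f, spectral_amplitude(am_tone, fs, f))
```

A long recording (here 10 s, giving 0.1 Hz resolution) is what lets the method separate sidebands only a few hertz from the carrier, which is why frequency tagging pairs naturally with sustained periodic stimulation.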
27
Gabriel D, Wong TC, Nicolier M, Giustiniani J, Mignot C, Noiret N, Monnin J, Magnin E, Pazart L, Moulin T, Haffen E, Vandel P. Don't forget the lyrics! Spatiotemporal dynamics of neural mechanisms spontaneously evoked by gaps of silence in familiar and newly learned songs. Neurobiol Learn Mem 2016; 132:18-28. [PMID: 27131744 DOI: 10.1016/j.nlm.2016.04.011] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2015] [Revised: 04/18/2016] [Accepted: 04/24/2016] [Indexed: 10/21/2022]
Abstract
The vast majority of people experience musical imagery, the sensation of reliving a song in the absence of any external stimulation. Internal perception of a song can be deliberate and effortful, but it may also occur involuntarily and spontaneously. Moreover, musical imagery is also used involuntarily to automatically complete missing parts of the music or lyrics of a familiar song. The aim of our study was to explore the onset of the musical imagery dynamics that lead to the automatic completion of missing lyrics. High-density electroencephalography was used to record the cerebral activity of twenty healthy volunteers while they passively listened to unfamiliar songs, very familiar songs, and songs previously listened to for two weeks. Silent gaps inserted into these songs elicited a series of neural activations encompassing perceptual, attentional, and cognitive mechanisms (range 100-500 ms). Familiarity and learning effects emerged as early as 100 ms and lasted for 400 ms after silence occurred. Although participants reported mentally imagining lyrics more easily in familiar rather than passively learnt songs, the onset of the neural mechanisms and the power spectrum underlying musical imagery were similar for both types of songs. This study offers new insights into the musical imagery dynamics evoked by gaps of silence and into the role of familiarity and learning processes in the generation of these dynamics. The automatic and effortless method presented here is a potentially useful tool for understanding failures of familiarity and learning processes in pathological populations.
Affiliation(s)
- Damien Gabriel
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France.
- Thian Chiew Wong
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France
- Magali Nicolier
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
- Julie Giustiniani
- Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
- Coralie Mignot
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France
- Nicolas Noiret
- Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France; Laboratoire de psychologie EA 3188, Université de Franche-Comté, Besançon, France
| | - Julie Monnin
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
| | - Eloi Magnin
- Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France; Service de neurologie, CHRU Besançon, F-25000 Besançon, France
| | - Lionel Pazart
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France
| | - Thierry Moulin
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de neurologie, CHRU Besançon, F-25000 Besançon, France
| | - Emmanuel Haffen
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France
| | - Pierre Vandel
- Centre d'investigation Clinique-Innovation Technologique CIC-IT 1431, Inserm, CHRU Besançon, F-25000 Besançon, France; Neurosciences intégratives et cliniques EA 481, Univ. Franche-Comté, Univ. Bourgogne Franche-Comté, F-25000 Besançon, France; Service de psychiatrie de l'adulte, CHRU Besançon, F-25000 Besançon, France; Centre Mémoire de Ressource et de Recherche de Franche-Comté, CHRU Besançon, F-25000 Besançon, France
| |
Collapse
|
28
Anderson S, Jenkins K. Electrophysiologic Assessment of Auditory Training Benefits in Older Adults. Semin Hear 2015; 36:250-62. [PMID: 27587912 PMCID: PMC4910540 DOI: 10.1055/s-0035-1564455]
Abstract
Older adults often exhibit speech perception deficits in difficult listening environments. At present, hearing aids or cochlear implants are the main options for therapeutic remediation; however, they only address audibility and do not compensate for central processing changes that may accompany aging and hearing loss or declines in cognitive function. It is unknown whether long-term hearing aid or cochlear implant use can restore changes in central encoding of temporal and spectral components of speech or improve cognitive function. Therefore, consideration should be given to auditory/cognitive training that targets auditory processing and cognitive declines, taking advantage of the plastic nature of the central auditory system. The demonstration of treatment efficacy is an important component of any training strategy. Electrophysiologic measures can be used to assess training-related benefits. This article will review the evidence for neuroplasticity in the auditory system and the use of evoked potentials to document treatment efficacy.
29
Skoe E, Krizman J, Spitzer E, Kraus N. Prior experience biases subcortical sensitivity to sound patterns. J Cogn Neurosci 2015; 27:124-40. [PMID: 25061926 DOI: 10.1162/jocn_a_00691]
Abstract
To make sense of our ever-changing world, our brains search out patterns. This drive can be so strong that the brain imposes patterns when there are none. The opposite can also occur: The brain can overlook patterns because they do not conform to expectations. In this study, we examined this neural sensitivity to patterns within the auditory brainstem, an evolutionarily ancient part of the brain that can be fine-tuned by experience and is integral to an array of cognitive functions. We have recently shown that this auditory hub is sensitive to patterns embedded within a novel sound stream, and we established a link between neural sensitivity and behavioral indices of learning [Skoe, E., Krizman, J., Spitzer, E., & Kraus, N. The auditory brainstem is a barometer of rapid auditory learning. Neuroscience, 243, 104-114, 2013]. We now ask whether this sensitivity to stimulus statistics is biased by prior experience and the expectations arising from this experience. To address this question, we recorded complex auditory brainstem responses (cABRs) to two patterned sound sequences formed from a set of eight repeating tones. For both patterned sequences, the eight tones were presented such that the transitional probability (TP) between neighboring tones was either 33% (low predictability) or 100% (high predictability). Although both sequences were novel to the healthy young adult listener and had similar TP distributions, one was perceived to be more musical than the other. For the more musical sequence, participants performed above chance when tested on their recognition of the most predictable two-tone combinations within the sequence (TP of 100%); in this case, the cABR differed from a baseline condition where the sound sequence had no predictable structure. In contrast, for the less musical sequence, learning was at chance, suggesting that listeners were "deaf" to the highly predictable repeating two-tone combinations in the sequence. For this condition, the cABR also did not differ from baseline. From this, we posit that the brainstem acts as a Bayesian sound processor, such that it factors in prior knowledge about the environment to index the probability of particular events within ever-changing sensory conditions.
30
Evidence against attentional state modulating scalp-recorded auditory brainstem steady-state responses. Brain Res 2015; 1626:146-64. [PMID: 26187756 DOI: 10.1016/j.brainres.2015.06.038]
Abstract
Auditory brainstem responses (ABRs) and their steady-state counterpart (subcortical steady-state responses, SSSRs) are generally thought to be insensitive to cognitive demands. However, a handful of studies report that SSSRs are modulated depending on the subject's focus of attention, either towards or away from an auditory stimulus. Here, we explored whether attentional focus affects the envelope-following response (EFR), which is a particular kind of SSSR, and if so, whether the effects are specific to which sound elements in a sound mixture a subject is attending (selective auditory attentional modulation), specific to attended sensory input (inter-modal attentional modulation), or insensitive to attentional focus. We compared the strength of EFR-stimulus phase locking in human listeners under various tasks: listening to a monaural stimulus, selectively attending to a particular ear during dichotic stimulus presentation, and attending to visual stimuli while ignoring dichotic auditory inputs. We observed no systematic changes in the EFR across experimental manipulations, even though cortical EEG revealed attention-related modulations of alpha activity during the task. We conclude that attentional effects, if any, on human subcortical representation of sounds cannot be observed robustly using EFRs. This article is part of a Special Issue entitled SI: Prediction and Attention.
31
Malmierca MS, Anderson LA, Antunes FM. The cortical modulation of stimulus-specific adaptation in the auditory midbrain and thalamus: a potential neuronal correlate for predictive coding. Front Syst Neurosci 2015; 9:19. [PMID: 25805974 PMCID: PMC4353371 DOI: 10.3389/fnsys.2015.00019]
Abstract
To follow an ever-changing auditory scene, the auditory brain continuously creates a representation of the past to form expectations about the future. Unexpected events produce an error in these predictions that should "trigger" the network's response. Indeed, neurons in the auditory midbrain, thalamus, and cortex respond to rarely occurring sounds while adapting to frequently repeated ones, i.e., they exhibit stimulus-specific adaptation (SSA). SSA cannot be explained solely by intrinsic membrane properties, but likely involves the participation of the network. Thus, SSA is envisaged as a high-order form of adaptation that requires the influence of cortical areas. However, present research supports the hypothesis that SSA, at least in its simplest form (i.e., to frequency deviants), can be transmitted in a bottom-up manner through the auditory pathway. Here, we briefly review the underlying neuroanatomy of the corticofugal projections before discussing state-of-the-art studies which demonstrate that the SSA present in the medial geniculate body (MGB) and inferior colliculus (IC) is not inherited from the cortex but can be modulated by the cortex via the corticofugal pathways. By modulating the gain of neurons in the thalamus and midbrain, the auditory cortex (AC) would refine SSA subcortically, preventing irrelevant information from reaching the cortex.
32
Skoe E, Chandrasekaran B, Spitzer ER, Wong PC, Kraus N. Human brainstem plasticity: The interaction of stimulus probability and auditory learning. Neurobiol Learn Mem 2014; 109:82-93. [DOI: 10.1016/j.nlm.2013.11.011]
33
Pérez-González D, Malmierca MS. Adaptation in the auditory system: an overview. Front Integr Neurosci 2014; 8:19. [PMID: 24600361 PMCID: PMC3931124 DOI: 10.3389/fnint.2014.00019]
Abstract
The early stages of the auditory system need to preserve the timing information of sounds in order to extract the basic features of acoustic stimuli. At the same time, different processes of neuronal adaptation occur at several levels to further process the auditory information. For instance, auditory nerve fiber responses already show adaptation of their firing rates, a type of response that can be found in many other auditory nuclei and may be useful for emphasizing the onset of stimuli. However, it is at higher levels of the auditory hierarchy that more sophisticated types of neuronal processing take place. One example is stimulus-specific adaptation, in which neurons adapt to frequent, repetitive stimuli but maintain their responsiveness to stimuli with different physical characteristics, a distinct kind of processing that may play a role in change and deviance detection. In the auditory cortex, adaptation takes more elaborate forms and contributes to the processing of complex sequences, auditory scene analysis, and attention. Here we review the multiple types of adaptation that occur in the auditory system, which are part of the pool of resources that neurons employ to process the auditory scene and are critical to a proper understanding of the neuronal mechanisms that govern auditory perception.
34
The layering of auditory experiences in driving experience-dependent subcortical plasticity. Hear Res 2014; 311:36-48. [PMID: 24445149 DOI: 10.1016/j.heares.2014.01.002]
Abstract
In this review article, we focus on recent studies of experiential influences on brainstem function. Using these studies as scaffolding, we then lay the initial groundwork for the Layering Hypothesis, which explicates how experiences combine to shape subcortical auditory function. Our hypothesis builds on the idea that the subcortical auditory system reflects the collective auditory experiences of an individual, including interactions with sound that occurred in the distant past. Our goal for this article is to begin to shift the field away from examining the effect of single experiences to examining how different auditory experiences layer or superimpose on each other. This article is part of a Special Issue entitled <Annual Reviews 2014>.
35
Lehmann A, Schönwiesner M. Selective attention modulates human auditory brainstem responses: relative contributions of frequency and spatial cues. PLoS One 2014; 9:e85442. [PMID: 24454869 PMCID: PMC3893196 DOI: 10.1371/journal.pone.0085442]
Abstract
Selective attention is the mechanism that allows focusing one’s attention on a particular stimulus while filtering out a range of other stimuli, for instance, on a single conversation in a noisy room. Attending to one sound source rather than another changes activity in the human auditory cortex, but it is unclear whether attention to different acoustic features, such as voice pitch and speaker location, modulates subcortical activity. Studies using a dichotic listening paradigm indicated that auditory brainstem processing may be modulated by the direction of attention. We investigated whether endogenous selective attention to one of two speech signals affects amplitude and phase locking in auditory brainstem responses when the signals were either discriminable by frequency content alone, or by frequency content and spatial location. Frequency-following responses to the speech sounds were significantly modulated in both conditions. The modulation was specific to the task-relevant frequency band. The effect was stronger when both frequency and spatial information were available. Patterns of response were variable between participants, and were correlated with psychophysical discriminability of the stimuli, suggesting that the modulation was biologically relevant. Our results demonstrate that auditory brainstem responses are susceptible to efferent modulation related to behavioral goals. Furthermore they suggest that mechanisms of selective attention actively shape activity at early subcortical processing stages according to task relevance and based on frequency and spatial cues.
36
Kraus N, Nicol T. The Cognitive Auditory System: The Role of Learning in Shaping the Biology of the Auditory System. Perspectives on Auditory Research 2014. [DOI: 10.1007/978-1-4614-9102-6_17]
37
Tarasenko MA, Swerdlow NR, Makeig S, Braff DL, Light GA. The auditory brain-stem response to complex sounds: a potential biomarker for guiding treatment of psychosis. Front Psychiatry 2014; 5:142. [PMID: 25352811 PMCID: PMC4195270 DOI: 10.3389/fpsyt.2014.00142]
Abstract
Cognitive deficits limit psychosocial functioning in schizophrenia. For many patients, cognitive remediation approaches have yielded encouraging results. Nevertheless, therapeutic response is variable, and outcome studies consistently identify individuals who respond minimally to these interventions. Biomarkers that can assist in identifying patients likely to benefit from particular forms of cognitive remediation are needed. Here, we describe an event-related potential (ERP) biomarker - the auditory brain-stem response (ABR) to complex sounds (cABR) - that appears to be particularly well-suited for predicting response to at least one form of cognitive remediation that targets auditory information processing. Uniquely, the cABR quantifies the fidelity of sound encoded at the level of the brainstem and midbrain. This ERP biomarker has revealed auditory processing abnormalities in various neurodevelopmental disorders, correlates with functioning across several cognitive domains, and appears to be responsive to targeted auditory training. We present preliminary cABR data from 18 schizophrenia patients and propose further investigation of this biomarker for predicting and tracking response to cognitive interventions.
38
Chandrasekaran B, Skoe E, Kraus N. An integrative model of subcortical auditory plasticity. Brain Topogr 2013; 27:539-52. [PMID: 24150692 DOI: 10.1007/s10548-013-0323-9]
Abstract
In direct conflict with the concept of auditory brainstem nuclei as passive relay stations for behaviorally-relevant signals, recent studies have demonstrated plasticity of the auditory signal in the brainstem. In this paper we provide an overview of the forms of plasticity evidenced in subcortical auditory regions. We posit an integrative model of auditory plasticity, which argues for a continuous, online modulation of bottom-up signals via corticofugal pathways, based on an algorithm that anticipates and updates incoming stimulus regularities. We discuss the negative implications of plasticity in clinical dysfunction and propose novel methods of eliciting brainstem responses that could specify the biological nature of auditory processing deficits.
39
Cornella M, Leung S, Grimm S, Escera C. Regularity encoding and deviance detection of frequency modulated sweeps: Human middle- and long-latency auditory evoked potentials. Psychophysiology 2013; 50:1275-81. [DOI: 10.1111/psyp.12137]
40
Skoe E, Krizman J, Spitzer E, Kraus N. The auditory brainstem is a barometer of rapid auditory learning. Neuroscience 2013; 243:104-14. [DOI: 10.1016/j.neuroscience.2013.03.009]
41
Gnanateja GN, Ranjan R, Firdose H, Sinha SK, Maruthy S. Acoustic basis of context dependent brainstem encoding of speech. Hear Res 2013; 304:28-32. [PMID: 23792077 DOI: 10.1016/j.heares.2013.06.002]
Abstract
The recently described context-dependent brainstem encoding of speech is evidence of online regularity detection and modulation of subcortical responses. We studied the influence of the spectral structure of the contextual stimulus on context-dependent encoding of speech at the brainstem, in an attempt to understand the acoustic basis for this effect. Brainstem responses were recorded from fourteen normal-hearing adults in a randomized true experimental design. Brainstem responses to a high-pass-filtered /da/ presented in the context of syllables with either the same or a different spectral structure were compared with each other. The findings suggest that spectral structure is one of the parameters that cue context-dependent subcortical encoding of speech. Interestingly, the results also revealed that the brainstem can encode pitch even with negligible acoustic information below the second formant frequency.
42
Bourquin NMP, Murray MM, Clarke S. Location-independent and location-linked representations of sound objects. Neuroimage 2013; 73:40-9. [DOI: 10.1016/j.neuroimage.2013.01.026]
43
Hairston WD, Letowski TR, McDowell K. Task-related suppression of the brainstem frequency following response. PLoS One 2013; 8:e55215. [PMID: 23441150 PMCID: PMC3575437 DOI: 10.1371/journal.pone.0055215]
Abstract
Recent evidence has shown top-down modulation of the brainstem frequency following response (FFR), generally in the form of signal enhancement from concurrent stimuli or from switching between attention-demanding task stimuli. However, it is also possible that the opposite may be true--the addition of a task, instead of a resting, passive state may suppress the FFR. Here we examined the influence of a subsequent task, and the relevance of the task modality, on signal clarity within the FFR. Participants performed visual and auditory discrimination tasks in the presence of an irrelevant background sound, as well as a baseline consisting of the same background stimuli in the absence of a task. FFR pitch strength and amplitude of the primary frequency response were assessed within non-task stimulus periods in order to examine influences due solely to general cognitive state, independent of stimulus-driven effects. Results show decreased signal clarity with the addition of a task, especially within the auditory modality. We additionally found consistent relationships between the extent of this suppressive effect and perceptual measures such as response time and proclivity towards one sensory modality. Together these results suggest that the current focus of attention can have a global, top-down effect on the quality of encoding early in the auditory pathway.
44
Looi V, Gfeller K, Driscoll V. Music Appreciation and Training for Cochlear Implant Recipients: A Review. Semin Hear 2012; 33:307-334. [PMID: 23459244 DOI: 10.1055/s-0032-1329222]
Abstract
In recent years, there has been increasing interest in the music perception of cochlear implant (CI) recipients, and a growing body of research has been conducted in this area. The majority of these studies have examined perceptual accuracy for pitch, rhythm, and timbre. Another important, but less commonly studied, aspect of music listening is appreciation, or appraisal. Despite ongoing research into technological improvements that may improve music perception for recipients, both perceptual accuracy and appreciation generally remain poor for most recipients. While perceptual accuracy for music is important, appreciation and enjoyment also warrant research, as they too contribute to clinical outcomes and perceived benefits. Music training is showing excellent potential for improving music perception and appreciation for recipients. Therefore, the primary topics of this review are music appreciation and training. However, a brief overview of the psychoacoustic, technical, and physiological factors associated with a recipient's perception of music is provided, as these are important for understanding the listening experience of CI recipients. The purpose of this review is to summarize key papers that have investigated these issues, in order to demonstrate that (i) music enjoyment and appraisal are important and valid considerations in evaluating music outcomes for recipients, and (ii) music training can improve music listening for many recipients and can be offered to persons using current technology.
45
Vilela N, Wertzner HF, Sanches SGG, Neves-Lobo IF, Carvallo RMM. Temporal processing in children with phonological disorders submitted to auditory training: a pilot study. Jornal da Sociedade Brasileira de Fonoaudiologia 2012; 24:42-8. [PMID: 22460371 DOI: 10.1590/s2179-64912012000100008]
Abstract
PURPOSE This study compared the temporal processing performance of children with phonological disorders submitted to formal and informal auditory training. METHODS Fifteen subjects with phonological disorder (pure tone thresholds ≤20 dB HL from 0.5 to 4 kHz, aged between 7 years and 10 years and 11 months) were evaluated, divided into three groups: Control Group, five subjects (mean age 9.1 years) without auditory processing disorder, who underwent two auditory processing evaluations at an interval of six to eight weeks with no intervention; Formal Training Group, five subjects (mean age 8.3 years) with auditory processing disorder, submitted to eight sessions of formal training; and Informal Training Group, five subjects (mean age 8.1 years) with auditory processing disorder, submitted to eight sessions of informal training. RESULTS After eight sessions, the formal training group improved by 8% and the informal training group by 22.5% on the pitch pattern sequence test. On the duration pattern sequence test, the formal training group improved on average by 12.9% and the informal training group by 18.7%. There was no statistically significant difference between the means obtained by the two groups after intervention, in either the pitch pattern or the duration pattern sequence test. CONCLUSION Although the results did not show significant differences, this pilot study suggests that both formal and informal training improve the temporal processing abilities of children with phonological and auditory processing disorders.
Affiliation(s)
- Nadia Vilela
- Faculdade de Medicina, Universidade de São Paulo, 140 Cidade Universitária, São Paulo, SP, Brazil.
46
Abstract
Auditory deviance detection has been associated with a human auditory-evoked potential (AEP), the mismatch negativity, generated in the auditory cortex 100-200 ms from sound change onset. Yet, single-unit recordings in animals suggest much earlier (∼20-40 ms), and anatomically lower (i.e., thalamus and midbrain) deviance detection. In humans, recordings of the scalp middle-latency AEPs have confirmed early (∼30-40 ms) deviance detection. However, involvement of the human auditory brainstem in deviance detection has not yet been demonstrated. Here we recorded the auditory brainstem frequency-following response (FFR) to consonant-vowel stimuli (/ba/, /wa/) in young adults, with stimuli arranged in oddball and reversed oddball blocks (deviant probability, p=0.2), allowing for the comparison of FFRs to the same physical stimuli presented in different contextual roles. Whereas no effect was observed for the /wa/ syllable, we found for the /ba/ syllable a reduction in the brainstem FFR to deviant stimuli compared with standard ones and to similar stimuli arranged in a control block, with five equiprobable, rarely occurring sounds. These findings demonstrate that the human auditory brainstem is able to encode regularities in the recent auditory past to detect novel events, and confirm the multiple anatomical and temporal scales of human deviance detection.
47
Pantev C, Herholz SC. Plasticity of the human auditory cortex related to musical training. Neurosci Biobehav Rev 2011; 35:2140-54. [PMID: 21763342 DOI: 10.1016/j.neubiorev.2011.06.010] [Citation(s) in RCA: 111] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2010] [Revised: 06/21/2011] [Accepted: 06/24/2011] [Indexed: 11/16/2022]
Affiliation(s)
- Christo Pantev
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Malmedyweg 15, Münster, Germany.
48
Parbery-Clark A, Strait DL, Kraus N. Context-dependent encoding in the auditory brainstem subserves enhanced speech-in-noise perception in musicians. Neuropsychologia 2011; 49:3338-45. [PMID: 21864552 DOI: 10.1016/j.neuropsychologia.2011.08.007] [Citation(s) in RCA: 76] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2011] [Revised: 07/11/2011] [Accepted: 08/08/2011] [Indexed: 11/18/2022]
Abstract
Musical training strengthens speech perception in the presence of background noise. Given that the ability to make use of speech sound regularities, such as pitch, underlies perceptual acuity in challenging listening environments, we asked whether musicians' enhanced speech-in-noise perception is facilitated by increased neural sensitivity to acoustic regularities. To this aim we examined subcortical encoding of the same speech syllable presented in predictable and variable conditions and speech-in-noise perception in 31 musicians and nonmusicians. We anticipated that musicians would demonstrate greater neural enhancement of speech presented in the predictable compared to the variable condition than nonmusicians. Accordingly, musicians demonstrated more robust neural encoding of the fundamental frequency (i.e., pitch) of speech presented in the predictable relative to the variable condition than nonmusicians. The degree of neural enhancement observed to predictable speech correlated with subjects' musical practice histories as well as with their speech-in-noise perceptual abilities. Taken together, our findings suggest that subcortical sensitivity to speech regularities is shaped by musical training and may contribute to musicians' enhanced speech-in-noise perception.
Affiliation(s)
- A Parbery-Clark
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, IL 60208, USA
49
50
Anderson S, Kraus N. Neural Encoding of Speech and Music: Implications for Hearing Speech in Noise. Semin Hear 2011; 32:129-141. [PMID: 24748717 PMCID: PMC3989107 DOI: 10.1055/s-0031-1277234] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022] Open
Abstract
Understanding speech in a background of competing noise is challenging, especially for individuals with hearing loss or deficits in auditory processing ability. The ability to hear in background noise cannot be predicted from the audiogram, an assessment of peripheral hearing ability; therefore, it is important to consider the impact of central and cognitive factors on speech-in-noise perception. Auditory processing in complex environments is reflected in neural encoding of pitch, timing, and timbre, the crucial elements of speech and music. Musical expertise in processing pitch, timing, and timbre may transfer to enhancements in speech-in-noise perception due to shared neural pathways for speech and music. Through cognitive-sensory interactions, musicians develop skills enabling them to selectively listen to relevant signals embedded in a network of melodies and harmonies, and this experience leads in turn to enhanced ability to focus on one voice in a background of other voices. Here we review recent work examining the biological mechanisms of speech and music perception and the potential for musical experience to ameliorate speech-in-noise listening difficulties.
Affiliation(s)
- Samira Anderson
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois
- Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Nina Kraus
- Auditory Neuroscience Laboratory, Northwestern University, Evanston, Illinois
- Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Department of Neurobiology and Physiology, Northwestern University, Evanston, Illinois
- Department of Otolaryngology, Northwestern University, Evanston, Illinois