1
Kopal J, Kumar K, Shafighi K, Saltoun K, Modenato C, Moreau CA, Huguet G, Jean-Louis M, Martin CO, Saci Z, Younis N, Douard E, Jizi K, Beauchamp-Chatel A, Kushan L, Silva AI, van den Bree MBM, Linden DEJ, Owen MJ, Hall J, Lippé S, Draganski B, Sønderby IE, Andreassen OA, Glahn DC, Thompson PM, Bearden CE, Zatorre R, Jacquemont S, Bzdok D. Using rare genetic mutations to revisit structural brain asymmetry. Nat Commun 2024; 15:2639. [PMID: 38531844 PMCID: PMC10966068 DOI: 10.1038/s41467-024-46784-w]
Abstract
Asymmetry between the left and right hemisphere is a key feature of brain organization. Hemispheric functional specialization underlies some of the most advanced human-defining cognitive operations, such as articulated language, perspective taking, or rapid detection of facial cues. Yet, genetic investigations into brain asymmetry have mostly relied on common variants, which typically exert small effects on brain-related phenotypes. Here, we leverage rare genomic deletions and duplications to study how genetic alterations reverberate in human brain and behavior. We designed a pattern-learning approach to dissect the impact of eight high-effect-size copy number variations (CNVs) on brain asymmetry in a multi-site cohort of 552 CNV carriers and 290 non-carriers. Isolated multivariate brain asymmetry patterns spotlighted regions typically thought to subserve lateralized functions, including language, hearing, as well as visual, face and word recognition. Planum temporale asymmetry emerged as especially susceptible to deletions and duplications of specific gene sets. Targeted analysis of common variants through genome-wide association study (GWAS) consolidated partly diverging genetic influences on the right versus left planum temporale structure. In conclusion, our gene-brain-behavior data fusion highlights the consequences of genetically controlled brain lateralization on uniquely human cognitive capacities.
Affiliation(s)
- Jakub Kopal
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Kuldeep Kumar
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Kimia Shafighi
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Karin Saltoun
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- Claudia Modenato
- LREN - Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Clara A Moreau
- Imaging Genetics Center, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, Marina del Rey, CA, USA
- Guillaume Huguet
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Zohra Saci
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Nadine Younis
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Elise Douard
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Khadije Jizi
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Alexis Beauchamp-Chatel
- Institut universitaire en santé mentale de Montréal, University of Montréal, Montréal, Canada
- Department of Psychiatry, University of Montreal, Montréal, Canada
- Leila Kushan
- Semel Institute for Neuroscience and Human Behavior, Departments of Psychiatry and Biobehavioral Sciences and Psychology, UCLA, Los Angeles, USA
- Ana I Silva
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Marianne B M van den Bree
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Neuroscience and Mental Health Innovation Institute, Cardiff University, Cardiff, UK
- David E J Linden
- School for Mental Health and Neuroscience, Maastricht University, Maastricht, Netherlands
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Neuroscience and Mental Health Innovation Institute, Cardiff University, Cardiff, UK
- Michael J Owen
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Jeremy Hall
- Centre for Neuropsychiatric Genetics and Genomics, Cardiff University, Cardiff, UK
- Division of Psychological Medicine and Clinical Neurosciences, School of Medicine, Cardiff University, Cardiff, UK
- Sarah Lippé
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Bogdan Draganski
- LREN - Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
- Neurology Department, Max-Planck-Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Ida E Sønderby
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital and University of Oslo, Oslo, Norway
- Department of Medical Genetics, Oslo University Hospital, Oslo, Norway
- KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- Ole A Andreassen
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital and University of Oslo, Oslo, Norway
- KG Jebsen Centre for Neurodevelopmental Disorders, University of Oslo, Oslo, Norway
- David C Glahn
- Department of Psychiatry, Boston Children's Hospital and Harvard Medical School, Boston, MA, USA
- Paul M Thompson
- Imaging Genetics Center, Stevens Neuroimaging and Informatics Institute, Keck School of Medicine of USC, Marina del Rey, CA, USA
- Carrie E Bearden
- Semel Institute for Neuroscience and Human Behavior, Departments of Psychiatry and Biobehavioral Sciences and Psychology, UCLA, Los Angeles, USA
- Robert Zatorre
- International Laboratory for Brain, Music and Sound Research, Montreal, QC, Canada
- TheNeuro - Montreal Neurological Institute (MNI), McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada
- Sébastien Jacquemont
- Centre de recherche CHU Sainte-Justine, Montréal, Quebec, Canada
- Department of Pediatrics, University of Montréal, Montréal, Quebec, Canada
- Danilo Bzdok
- Mila - Québec Artificial Intelligence Institute, Montréal, QC, Canada
- Department of Biomedical Engineering, Faculty of Medicine, McGill University, Montreal, Canada
- TheNeuro - Montreal Neurological Institute (MNI), McConnell Brain Imaging Centre, Faculty of Medicine, McGill University, Montreal, QC, Canada
2
Schüller A, Schilling A, Krauss P, Reichenbach T. The Early Subcortical Response at the Fundamental Frequency of Speech Is Temporally Separated from Later Cortical Contributions. J Cogn Neurosci 2024; 36:475-491. [PMID: 38165737 DOI: 10.1162/jocn_a_02103]
Abstract
Most parts of speech are voiced, exhibiting a degree of periodicity with a fundamental frequency and many higher harmonics. Some neural populations respond to this temporal fine structure, in particular at the fundamental frequency. This frequency-following response (FFR) to speech consists of both subcortical and cortical contributions and can be measured through EEG as well as through magnetoencephalography (MEG), although the two techniques differ in the aspects of neural activity that they capture: EEG is sensitive to radial, tangential, and deep sources, whereas MEG is largely restricted to measuring tangential and superficial neural activity. EEG responses to continuous speech have shown an early subcortical contribution, at a latency of around 9 msec, in agreement with MEG measurements in response to short speech tokens, whereas MEG responses to continuous speech have not yet revealed such an early component. Here, we analyze MEG responses to long segments of continuous speech. We find an early subcortical response at latencies of 4-11 msec, followed by later right-lateralized cortical activity at delays of 20-58 msec as well as potential subcortical activity. Our results show that the early subcortical component of the FFR to continuous speech can be measured from MEG in populations of participants and that its latency agrees with that measured with EEG. They furthermore show that the early subcortical component is temporally well separated from later cortical contributions, enabling an independent assessment of both components with respect to further aspects of speech processing.
Affiliation(s)
- Patrick Krauss
- Friedrich-Alexander-Universität Erlangen-Nürnberg
- Universitätsklinikum Erlangen
3
McClaskey CM. Neural hyperactivity and altered envelope encoding in the central auditory system: Changes with advanced age and hearing loss. Hear Res 2024; 442:108945. [PMID: 38154191 PMCID: PMC10942735 DOI: 10.1016/j.heares.2023.108945]
Abstract
Temporal modulations are ubiquitous features of sound signals that are important for auditory perception. The perception of temporal modulations, or temporal processing, is known to decline with aging and hearing loss, and to negatively impact auditory perception in general and speech recognition specifically. However, the neurophysiological literature also provides evidence of exaggerated or enhanced encoding of temporal envelopes specifically in aging and hearing loss, which may arise from changes in inhibitory neurotransmission and neuronal hyperactivity. This review paper describes the physiological changes to the neural encoding of temporal envelopes that have been shown to occur with age and hearing loss and discusses the role of disinhibition and neural hyperactivity in contributing to these changes. Studies in both humans and animal models suggest that aging and hearing loss are associated with stronger neural representations of both periodic amplitude modulation envelopes and of naturalistic speech envelopes, but primarily for low-frequency modulations (<80 Hz). Although the frequency dependence of these results is generally taken as evidence of amplified envelope encoding at the cortex and impoverished encoding at the midbrain and brainstem, there is additional evidence to suggest that exaggerated envelope encoding may also occur subcortically, though only for envelopes with low modulation rates. A better understanding of how temporal envelope encoding is altered in aging and hearing loss, and the contexts in which neural responses are exaggerated or diminished, may aid in the development of interventions, assistive devices, and treatment strategies that ameliorate age- and hearing-loss-related auditory perceptual deficits.
Affiliation(s)
- Carolyn M McClaskey
- Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave, MSC 550, Charleston, SC 29425, United States.
4
Schüller A, Schilling A, Krauss P, Rampp S, Reichenbach T. Attentional Modulation of the Cortical Contribution to the Frequency-Following Response Evoked by Continuous Speech. J Neurosci 2023; 43:7429-7440. [PMID: 37793908 PMCID: PMC10621774 DOI: 10.1523/jneurosci.1247-23.2023]
Abstract
Selective attention to one of several competing speakers is required for comprehending a target speaker among other voices and for successful communication with them. It has moreover been found to involve the neural tracking of low-frequency speech rhythms in the auditory cortex. Effects of selective attention have also been found in subcortical neural activity, in particular regarding the frequency-following response related to the fundamental frequency of speech (speech-FFR). Recent investigations have, however, shown that the speech-FFR contains cortical contributions as well. It remains unclear whether these are also modulated by selective attention. Here we used magnetoencephalography to assess the attentional modulation of the cortical contributions to the speech-FFR. We presented both male and female participants with two competing speech signals and analyzed the cortical responses during attentional switching between the two speakers. Our findings revealed robust attentional modulation of the cortical contribution to the speech-FFR: the neural responses were higher when the speaker was attended than when they were ignored. We also found that, regardless of attention, a voice with a lower fundamental frequency elicited a larger cortical contribution to the speech-FFR than a voice with a higher fundamental frequency. Our results show that the attentional modulation of the speech-FFR does not only occur subcortically but extends to the auditory cortex as well.

SIGNIFICANCE STATEMENT: Understanding speech in noise requires attention to a target speaker. One of the speech features that a listener can use to identify a target voice among others and attend to it is the fundamental frequency, together with its higher harmonics. The fundamental frequency arises from the opening and closing of the vocal folds and is tracked by high-frequency neural activity in the auditory brainstem and in the cortex. Previous investigations showed that the subcortical neural tracking is modulated by selective attention. Here we show that attention affects the cortical tracking of the fundamental frequency as well: it is stronger when a particular voice is attended than when it is ignored.
Affiliation(s)
- Alina Schüller
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
- Achim Schilling
- Neuroscience Laboratory, University Hospital Erlangen, 91058 Erlangen, Germany
- Patrick Krauss
- Neuroscience Laboratory, University Hospital Erlangen, 91058 Erlangen, Germany
- Pattern Recognition Lab, Department Computer Science, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
- Stefan Rampp
- Department of Neurosurgery, University Hospital Erlangen, 91058 Erlangen, Germany
- Department of Neurosurgery, University Hospital Halle (Saale), 06120 Halle (Saale), Germany
- Department of Neuroradiology, University Hospital Erlangen, 91058 Erlangen, Germany
- Tobias Reichenbach
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nürnberg, 91054 Erlangen, Germany
5
Lerud KD, Hancock R, Skoe E. A high-density EEG and structural MRI source analysis of the frequency following response to missing fundamental stimuli reveals subcortical and cortical activation to low and high frequency stimuli. Neuroimage 2023; 279:120330. [PMID: 37598815 DOI: 10.1016/j.neuroimage.2023.120330]
Abstract
Pitch is a perceptual rather than physical phenomenon, important for spoken language use, musical communication, and other aspects of everyday life. Auditory stimuli can be designed to probe the relationship between perception and physiological responses to pitch-evoking stimuli. One technique for measuring physiological responses to pitch-evoking stimuli is the frequency following response (FFR). The FFR is an electroencephalographic (EEG) response to periodic auditory stimuli. The FFR contains nonlinearities not present in the stimuli, including correlates of the amplitude envelope of the stimulus; however, these nonlinearities remain undercharacterized. The FFR is a composite response reflecting multiple neural and peripheral generators, and their contributions to the scalp-recorded FFR vary in ill-understood ways depending on the electrode montage, stimulus, and imaging technique. The FFR is typically assumed to be generated in the auditory brainstem; there is also evidence both for and against a cortical contribution to the FFR. Here a methodology is used to examine the FFR correlates of pitch and the generators of the FFR to stimuli with different pitches. Stimuli were designed to tease apart biological correlates of pitch and amplitude envelope. FFRs were recorded with 256-electrode EEG nets, in contrast to a typical FFR setup which only contains a single active electrode. Structural MRI scans were obtained for each participant to co-register with the electrode locations and constrain a source localization algorithm. The results of this localization shed light on the generating mechanisms of the FFR, including providing evidence for both cortical and subcortical auditory sources.
Affiliation(s)
- Karl D Lerud
- University of Maryland College Park, Institute for Systems Research, 20742, United States of America
- Roeland Hancock
- Yale University, Wu Tsai Institute, 06510, United States of America
- Erika Skoe
- University of Connecticut, Department of Speech, Language, and Hearing Sciences, Cognitive Sciences Program, 06269, United States of America
6
Kopal J, Kumar K, Shafighi K, Saltoun K, Modenato C, Moreau CA, Huguet G, Jean-Louis M, Martin CO, Saci Z, Younis N, Douard E, Jizi K, Beauchamp-Chatel A, Kushan L, Silva AI, van den Bree MBM, Linden DEJ, Owen MJ, Hall J, Lippé S, Draganski B, Sønderby IE, Andreassen OA, Glahn DC, Thompson PM, Bearden CE, Zatorre R, Jacquemont S, Bzdok D. Using rare genetic mutations to revisit structural brain asymmetry. bioRxiv 2023:2023.04.17.537199. [PMID: 37131672 PMCID: PMC10153125 DOI: 10.1101/2023.04.17.537199]
Abstract
Asymmetry between the left and right brain is a key feature of brain organization. Hemispheric functional specialization underlies some of the most advanced human-defining cognitive operations, such as articulated language, perspective taking, or rapid detection of facial cues. Yet, genetic investigations into brain asymmetry have mostly relied on common variants, which typically exert small effects on brain phenotypes. Here, we leverage rare genomic deletions and duplications to study how genetic alterations reverberate in human brain and behavior. We quantitatively dissected the impact of eight high-effect-size copy number variations (CNVs) on brain asymmetry in a multi-site cohort of 552 CNV carriers and 290 non-carriers. Isolated multivariate brain asymmetry patterns spotlighted regions typically thought to subserve lateralized functions, including language, hearing, as well as visual, face and word recognition. Planum temporale asymmetry emerged as especially susceptible to deletions and duplications of specific gene sets. Targeted analysis of common variants through genome-wide association study (GWAS) consolidated partly diverging genetic influences on the right versus left planum temporale structure. In conclusion, our gene-brain-behavior mapping highlights the consequences of genetically controlled brain lateralization on human-defining cognitive traits.
7
Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023:7030622. [PMID: 36786655 DOI: 10.1093/cercor/bhad020]
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization in nonhuman primates. Previous studies assessed mostly spatial characteristics; temporal aspects have so far received little consideration. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions-of-interest within AC, namely in medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the location of corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right as compared with the left PT and ~15 ms earlier in the right as compared with the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able to demonstrate for the first time a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, reflecting the prediction for serial processing from nonhuman studies.
Affiliation(s)
- Jan Benner
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt
- Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth
- Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich
- Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow
- Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
8
Ribas-Prats T, Arenillas-Alcón S, Pérez-Cruz M, Costa-Faidella J, Gómez-Roig MD, Escera C. Speech-Encoding Deficits in Neonates Born Large-for-Gestational Age as Revealed With the Envelope Frequency-Following Response. Ear Hear 2023:00003446-990000000-00115. [PMID: 36759954 DOI: 10.1097/aud.0000000000001330]
Abstract
OBJECTIVES: The present envelope frequency-following response (FFRENV) study aimed to characterize the neural encoding of the fundamental frequency of speech sounds in neonates born at the higher end of the birth weight continuum (>90th percentile), known as large-for-gestational age (LGA).
DESIGN: Twenty-five LGA newborns were recruited from the maternity unit of Sant Joan de Déu Barcelona Children's Hospital and paired by age and sex with 25 babies born adequate-for-gestational age (AGA), all from healthy mothers and normal pregnancies. FFRENVs were elicited to the /da/ syllable and recorded while the baby was sleeping in its cradle after a successful universal hearing screening. Neural encoding of the stimulus envelope at the fundamental frequency (F0ENV) was characterized through the FFRENV spectral amplitude. Relationships between electrophysiological parameters and maternal/neonatal variables that may condition neonatal neurodevelopment were assessed, including pregestational body mass index (BMI), maternal gestational weight gain, and neonatal BMI.
RESULTS: LGA newborns showed smaller spectral amplitudes at the F0ENV compared to the AGA group. Significant negative correlations were found between neonatal BMI and the spectral amplitude at the F0ENV.
CONCLUSIONS: Our results indicate that despite a healthy pregnancy, the central auditory system of LGA neonates is impaired in encoding a fundamental aspect of speech sounds, namely their fundamental frequency. The negative correlation between neonatal BMI and the FFRENV indicates that this impaired encoding is independent of the mother's BMI and weight gain during pregnancy, supporting the role of neonatal BMI. We suggest that the higher adipose tissue observed in the LGA group may impair, via proinflammatory products, the fine-grained central auditory system microstructure required for the neural encoding of the fundamental frequency of speech sounds.
Affiliation(s)
- Teresa Ribas-Prats
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain
- Sonia Arenillas-Alcón
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain
- Míriam Pérez-Cruz
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain
- BCNatal-Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Barcelona, Catalonia, Spain
- Jordi Costa-Faidella
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain
- Maria Dolores Gómez-Roig
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain
- BCNatal-Barcelona Center for Maternal Fetal and Neonatal Medicine (Hospital Sant Joan de Déu and Hospital Clínic), University of Barcelona, Barcelona, Catalonia, Spain
- Carles Escera
- Brainlab-Cognitive Neuroscience Research Group, Department of Clinical Psychology and Psychobiology, University of Barcelona, Catalonia, Spain
- Institute of Neurosciences, University of Barcelona, Catalonia, Spain
- Institut de Recerca Sant Joan de Déu, Esplugues de Llobregat, Barcelona, Catalonia, Spain
9
Van Der Biest H, Keshishzadeh S, Keppler H, Dhooge I, Verhulst S. Envelope following responses for hearing diagnosis: Robustness and methodological considerations. J Acoust Soc Am 2023; 153:191. [PMID: 36732231 DOI: 10.1121/10.0016807]
Abstract
Recent studies have found that envelope following responses (EFRs) are a marker of age-related and noise- or ototoxic-induced cochlear synaptopathy (CS) in research animals. Whereas the cochlear injury can be well controlled in animal research studies, humans may have an unknown mixture of sensorineural hearing loss [SNHL; e.g., inner- or outer-hair-cell (OHC) damage or CS] that cannot be teased apart in a standard hearing evaluation. Hence, a direct translation of EFR markers of CS to a differential CS diagnosis in humans might be compromised by the influence of SNHL subtypes and differences in recording modalities between research animals and humans. To quantify the robustness of EFR markers for use in human studies, this study investigates the impact of methodological considerations related to electrode montage, stimulus characteristics, and presentation, as well as analysis method on human-recorded EFR markers. The main focus is on rectangularly modulated pure-tone stimuli to evoke the EFR based on a recent auditory modelling study that showed that the EFR was least affected by OHC damage and most sensitive to CS in this stimulus configuration. The outcomes of this study can help guide future clinical implementations of electroencephalography-based SNHL diagnostic tests.
Affiliation(s)
- Heleen Van Der Biest
- Hearing Technology at Wireless, Acoustics, Environment and Expert Systems, Department of Information Technology, Ghent, Belgium
- Sarineh Keshishzadeh
- Hearing Technology at Wireless, Acoustics, Environment and Expert Systems, Department of Information Technology, Ghent, Belgium
- Hannah Keppler
- Department of Rehabilitation Sciences-Audiology, Ghent University, Ghent, Belgium
- Ingeborg Dhooge
- Department of Head and Skin, Ghent University, Ghent, Belgium
- Sarah Verhulst
- Hearing Technology at Wireless, Acoustics, Environment and Expert Systems, Department of Information Technology, Ghent, Belgium
10
Price CN, Bidelman GM. Musical experience partially counteracts temporal speech processing deficits in putative mild cognitive impairment. Ann N Y Acad Sci 2022; 1516:114-122. [PMID: 35762658 PMCID: PMC9588638 DOI: 10.1111/nyas.14853]
Abstract
Mild cognitive impairment (MCI) commonly results in more rapid cognitive and behavioral declines than typical aging. Individuals with MCI can exhibit impaired receptive speech abilities that may reflect neurophysiological changes in auditory-sensory processing prior to usual cognitive deficits. Benefits from current interventions targeting communication difficulties in MCI are limited. Yet, neuroplasticity associated with musical experience has been implicated in improving neural representations of speech and offsetting age-related declines in perception. Here, we asked whether these experience-dependent benefits might extend to aberrant aging and offer some degree of cognitive protection against MCI. During a vowel categorization task, we recorded single-channel electroencephalograms (EEGs) in older adults with putative MCI to evaluate speech encoding across subcortical and cortical levels of the auditory system. Critically, listeners varied in their duration of formal musical experience (0-21 years). Musical experience sharpened temporal precision in auditory cortical responses, suggesting more efficient processing of acoustic features that counteracts age-related neural delays. Additionally, robustness of brainstem responses predicted the severity of cognitive decline, suggesting that early speech representations are sensitive to preclinical stages of cognitive impairment. Our results extend prior studies by demonstrating positive benefits of musical experience in older adults with emergent cognitive impairments.
Affiliation(s)
- Caitlin N. Price
- Department of Audiology & Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana, USA
11
Gillis M, Van Canneyt J, Francart T, Vanthornhout J. Neural tracking as a diagnostic tool to assess the auditory pathway. Hear Res 2022; 426:108607. [PMID: 36137861 DOI: 10.1016/j.heares.2022.108607] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/26/2021] [Revised: 08/11/2022] [Accepted: 09/12/2022] [Indexed: 11/20/2022]
Abstract
When a person listens to sound, the brain time-locks to specific aspects of the sound. This is called neural tracking and it can be investigated by analysing neural responses (e.g., measured by electroencephalography) to continuous natural speech. Measures of neural tracking allow for an objective investigation of a range of auditory and linguistic processes in the brain during natural speech perception. This approach is more ecologically valid than traditional auditory evoked responses and has great potential for research and clinical applications. This article reviews the neural tracking framework and highlights three prominent examples of neural tracking analyses: neural tracking of the fundamental frequency of the voice (f0), the speech envelope and linguistic features. Each of these analyses provides a unique point of view into the human brain's hierarchical stages of speech processing. F0-tracking assesses the encoding of fine temporal information in the early stages of the auditory pathway, i.e., from the auditory periphery up to early processing in the primary auditory cortex. Envelope tracking reflects bottom-up and top-down speech-related processes in the auditory cortex and is likely necessary but not sufficient for speech intelligibility. Linguistic feature tracking (e.g. word or phoneme surprisal) relates to neural processes more directly related to speech intelligibility. Together these analyses form a multi-faceted objective assessment of an individual's auditory and linguistic processing.
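Envelope tracking of the kind reviewed here is usually computed with regularized encoder/decoder (temporal response function) models; the core idea, relating the broadband speech envelope to the EEG, can be sketched on synthetic data. The envelope extraction and the single zero-lag correlation score below are deliberately simplified stand-ins, not any specific published pipeline:

```python
import numpy as np

def envelope(signal, fs, win_ms=50):
    """Crude broadband envelope: full-wave rectify, then smooth with a
    moving average (real pipelines use filterbanks + Hilbert transform)."""
    win = int(fs * win_ms / 1000)
    return np.convolve(np.abs(signal), np.ones(win) / win, mode="same")

def tracking_score(eeg, env):
    """Zero-lag Pearson correlation between EEG and stimulus envelope,
    a minimal stand-in for encoder/decoder neural-tracking models."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (env - env.mean()) / env.std()
    return float(np.mean(eeg * env))

rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 10, 1 / fs)
env_true = 1 + np.sin(2 * np.pi * 4 * t)               # 4 Hz syllable-rate envelope
speech = env_true * rng.standard_normal(t.size)        # envelope-modulated noise carrier
eeg_tracking = env_true + rng.standard_normal(t.size)  # "brain" that tracks the envelope
eeg_random = rng.standard_normal(t.size)               # "brain" that ignores the stimulus

env_est = envelope(speech, fs)
score_tracking = tracking_score(eeg_tracking, env_est)
score_random = tracking_score(eeg_random, env_est)
```

A real analysis would use a gammatone filterbank, lag-resolved temporal response functions, and cross-validated prediction accuracy rather than a single zero-lag correlation.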
Affiliation(s)
- Marlies Gillis
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium.
- Jana Van Canneyt
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
- Tom Francart
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
- Jonas Vanthornhout
- Experimental Oto-Rhino-Laryngology, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Belgium
12
Kegler M, Weissbart H, Reichenbach T. The neural response at the fundamental frequency of speech is modulated by word-level acoustic and linguistic information. Front Neurosci 2022; 16:915744. [PMID: 35942153 PMCID: PMC9355803 DOI: 10.3389/fnins.2022.915744] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2022] [Accepted: 07/04/2022] [Indexed: 11/21/2022] Open
Abstract
Spoken language comprehension requires rapid and continuous integration of information, from lower-level acoustic to higher-level linguistic features. Much of this processing occurs in the cerebral cortex. Its neural activity exhibits, for instance, correlates of predictive processing, emerging at delays of a few 100 ms. However, the auditory pathways are also characterized by extensive feedback loops from higher-level cortical areas to lower-level ones as well as to subcortical structures. Early neural activity can therefore be influenced by higher-level cognitive processes, but it remains unclear whether such feedback contributes to linguistic processing. Here, we investigated early speech-evoked neural activity that emerges at the fundamental frequency. We analyzed EEG recordings obtained when subjects listened to a story read by a single speaker. We identified a response tracking the speaker's fundamental frequency that occurred at a delay of 11 ms, while another response elicited by the high-frequency modulation of the envelope of higher harmonics exhibited a larger magnitude and longer latency of about 18 ms with an additional significant component at around 40 ms. Notably, while the earlier components of the response likely originate from the subcortical structures, the latter presumably involves contributions from cortical regions. Subsequently, we determined the magnitude of these early neural responses for each individual word in the story. We then quantified the context-independent frequency of each word and used a language model to compute context-dependent word surprisal and precision. The word surprisal represented how predictable a word is, given the previous context, and the word precision reflected the confidence about predicting the next word from the past context. We found that the word-level neural responses at the fundamental frequency were predominantly influenced by the acoustic features: the average fundamental frequency and its variability. 
Amongst the linguistic features, only context-independent word frequency showed a weak but significant modulation of the neural response to the high-frequency envelope modulation. Our results show that the early neural response at the fundamental frequency is already influenced by acoustic as well as linguistic information, suggesting top-down modulation of this neural response.
Affiliation(s)
- Mikolaj Kegler
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
- Hugo Weissbart
- Donders Centre for Cognitive Neuroimaging, Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, Netherlands
- Tobias Reichenbach
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, United Kingdom
- Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-University Erlangen-Nuremberg, Erlangen, Germany
- *Correspondence: Tobias Reichenbach
13
Chauvette L, Fournier P, Sharp A. The frequency-following response to assess the neural representation of spectral speech cues in older adults. Hear Res 2022; 418:108486. [DOI: 10.1016/j.heares.2022.108486] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Revised: 03/12/2022] [Accepted: 03/15/2022] [Indexed: 11/04/2022]
14
Lentz JJ, Humes LE, Kidd GR. Differences in Auditory Perception Between Young and Older Adults When Controlling for Differences in Hearing Loss and Cognition. Trends Hear 2022; 26:23312165211066180. [PMID: 34989641 PMCID: PMC8753078 DOI: 10.1177/23312165211066180] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
This study was designed to examine age effects on various auditory perceptual skills using a large group of listeners (155 adults, 121 aged 60-88 years and 34 aged 18-30 years), while controlling for the factors of hearing loss and working memory (WM). All subjects completed 3 measures of WM, 7 psychoacoustic tasks (24 conditions) and a hearing assessment. Psychophysical measures were selected to tap phenomena thought to be mediated by higher-level auditory function and included modulation detection, modulation detection interference, informational masking (IM), masking level difference (MLD), anisochrony detection, harmonic mistuning, and stream segregation. Principal-components analysis (PCA) was applied to each psychoacoustic test. For 6 of the 7 tasks, a single component represented performance across the multiple stimulus conditions well, whereas the modulation-detection interference (MDI) task required two components to do so. The effect of age was analyzed using a general linear model applied to each psychoacoustic component. Once hearing loss and WM were accounted for as covariates in the analyses, estimated marginal mean thresholds were lower for older adults on tasks based on temporal processing. When evaluated separately, hearing loss led to poorer performance on roughly 1/2 the tasks and declines in WM accounted for poorer performance on 6 of the 8 psychoacoustic components. These results make clear the need to interpret age-group differences in performance on psychoacoustic tasks in light of cognitive declines commonly associated with aging, and point to hearing loss and cognitive declines as negatively influencing auditory perceptual skills.
Affiliation(s)
- Jennifer J. Lentz
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Larry E. Humes
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
- Gary R. Kidd
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA
15
Gnanateja GN, Rupp K, Llanos F, Remick M, Pernia M, Sadagopan S, Teichert T, Abel TJ, Chandrasekaran B. Frequency-Following Responses to Speech Sounds Are Highly Conserved across Species and Contain Cortical Contributions. eNeuro 2021; 8:ENEURO.0451-21.2021. [PMID: 34799409 PMCID: PMC8704423 DOI: 10.1523/eneuro.0451-21.2021] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/25/2021] [Accepted: 11/02/2021] [Indexed: 11/21/2022] Open
Abstract
Time-varying pitch is a vital cue for human speech perception. Neural processing of time-varying pitch has been extensively assayed using scalp-recorded frequency-following responses (FFRs), an electrophysiological signal thought to reflect integrated phase-locked neural ensemble activity from subcortical auditory areas. Emerging evidence increasingly points to a putative contribution of auditory cortical ensembles to the scalp-recorded FFRs. However, the properties of cortical FFRs and precise characterization of laminar sources are still unclear. Here we used direct human intracortical recordings as well as extracranial and intracranial recordings from macaques and guinea pigs to characterize the properties of cortical sources of FFRs to time-varying pitch patterns. We found robust FFRs in the auditory cortex across all species. We leveraged representational similarity analysis as a translational bridge to characterize similarities between the human and animal models. Laminar recordings in animal models showed FFRs emerging primarily from the thalamorecipient layers of the auditory cortex. FFRs arising from these cortical sources significantly contributed to the scalp-recorded FFRs via volume conduction. Our research paves the way for a wide array of studies to investigate the role of cortical FFRs in auditory perception and plasticity.
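Representational similarity analysis, used in this study as a translational bridge, compares the geometry of responses rather than the responses themselves, which is what lets it relate recordings across species and modalities. A minimal sketch on synthetic response patterns (the condition structure, feature counts, and noise levels are invented for illustration):

```python
import numpy as np

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between every pair of condition response patterns (rows)."""
    return 1 - np.corrcoef(patterns)

def rsa(rdm_a, rdm_b):
    """Second-order similarity: correlate the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    return float(np.corrcoef(rdm_a[iu], rdm_b[iu])[0, 1])

rng = np.random.default_rng(4)
features = 200
labels = np.array([0, 0, 0, 1, 1, 1])        # two stimulus categories
proto = rng.standard_normal((2, features))   # category prototype patterns
human = proto[labels] + 0.3 * rng.standard_normal((6, features))
animal_similar = human + 0.3 * rng.standard_normal((6, features))   # shares geometry
animal_unrelated = rng.standard_normal((6, features))               # does not

rsa_similar = rsa(rdm(human), rdm(animal_similar))
rsa_unrelated = rsa(rdm(human), rdm(animal_unrelated))
```

Real RSA workflows typically use rank correlations between RDMs and estimate noise ceilings; this sketch keeps only the core geometry comparison.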
Affiliation(s)
- G Nike Gnanateja
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Kyle Rupp
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Fernando Llanos
- Department of Linguistics, The University of Texas at Austin, Austin, Texas 78712
- Madison Remick
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Marianny Pernia
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Srivatsun Sadagopan
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Neurobiology, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for the Neural Basis of Cognition, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Tobias Teichert
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Taylor J Abel
- Department of Neurological Surgery, UPMC Children's Hospital of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Bharath Chandrasekaran
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15261
16
Multiple Cases of Auditory Neuropathy Illuminate the Importance of Subcortical Neural Synchrony for Speech-in-noise Recognition and the Frequency-following Response. Ear Hear 2021; 43:605-619. [PMID: 34619687 DOI: 10.1097/aud.0000000000001122] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to elucidate fully the importance of subcortical neural synchrony for the FFR and SIN recognition. DESIGN Case series. Three children with auditory neuropathy (two males with neuropathy attributed to hyperbilirubinemia, one female with a rare missense mutation in the OPA1 gene) were compared to age-matched controls with normal hearing (52 for electrophysiology and 48 for speech recognition testing). Tests included standard audiological evaluations, FFRs, and sentence recognition in noise. The three children with neuropathy had a range of clinical presentations, including moderate sensorineural hearing loss, use of a cochlear implant, and a rapid progressive hearing loss. RESULTS Children with neuropathy generally had good speech recognition in quiet but substantial difficulties in noise. These SIN difficulties were somewhat mitigated by a clear speaking style and presenting words in a high semantic context. In the children with neuropathy, FFRs were absent from all tested stimuli. In contrast, age-matched controls had reliable FFRs. CONCLUSION Subcortical synchrony is subject to multiple forms of disruption but results in a consistent phenotype of an absent FFR and substantial difficulties recognizing SIN. These results support the hypothesis that subcortical synchrony is necessary for the FFR. Thus, in healthy listeners, the FFR may reflect subcortical neural processes important for SIN recognition.
17
Pesnot Lerousseau J, Trébuchon A, Morillon B, Schön D. Frequency Selectivity of Persistent Cortical Oscillatory Responses to Auditory Rhythmic Stimulation. J Neurosci 2021; 41:7991-8006. [PMID: 34301825 PMCID: PMC8460151 DOI: 10.1523/jneurosci.0213-21.2021] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 06/28/2021] [Accepted: 07/01/2021] [Indexed: 11/21/2022] Open
Abstract
Cortical oscillations have been proposed to play a functional role in speech and music perception, attentional selection, and working memory, via the mechanism of neural entrainment. One of the properties of neural entrainment that is often taken for granted is that its modulatory effect on ongoing oscillations outlasts rhythmic stimulation. We tested the existence of this phenomenon by studying cortical neural oscillations during and after presentation of melodic stimuli in a passive perception paradigm. Melodies were composed of ∼60 and ∼80 Hz tones embedded in a 2.5 Hz stream. Using intracranial and surface recordings in male and female humans, we reveal persistent oscillatory activity in the high-γ band in response to the tones throughout the cortex, well beyond auditory regions. By contrast, in response to the 2.5 Hz stream, no persistent activity in any frequency band was observed. We further show that our data are well captured by a model of damped harmonic oscillator and can be classified into three classes of neural dynamics, with distinct damping properties and eigenfrequencies. This model provides a mechanistic and quantitative explanation of the frequency selectivity of auditory neural entrainment in the human cortex.SIGNIFICANCE STATEMENT It has been proposed that the functional role of cortical oscillations is subtended by a mechanism of entrainment, the synchronization in phase or amplitude of neural oscillations to a periodic stimulation. One of the properties of neural entrainment that is often taken for granted is that its modulatory effect on ongoing oscillations outlasts rhythmic stimulation. Using intracranial and surface recordings of humans passively listening to rhythmic auditory stimuli, we reveal consistent oscillatory responses throughout the cortex, with persistent activity of high-γ oscillations. On the contrary, neural oscillations do not outlast low-frequency acoustic dynamics. 
We interpret our results as reflecting harmonic oscillator properties, a model ubiquitous in physics but rarely used in neuroscience.
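The damped-harmonic-oscillator interpretation can be illustrated with a toy simulation: a weakly damped oscillator driven at its eigenfrequency keeps ringing after the rhythmic drive stops, whereas a strongly damped one tracks the stimulus but shows no persistent activity. The parameters below are arbitrary, not the authors' fitted values:

```python
import numpy as np

def driven_oscillator(eigen_hz, zeta, drive, fs):
    """Integrate x'' + 2*zeta*w0*x' + w0**2*x = drive(t) with
    semi-implicit Euler; returns displacement over time."""
    w0 = 2 * np.pi * eigen_hz
    dt = 1 / fs
    x = v = 0.0
    out = np.empty(drive.size)
    for i, f in enumerate(drive):
        v += (f - 2 * zeta * w0 * v - w0**2 * x) * dt
        x += v * dt
        out[i] = x
    return out

fs = 1000
t = np.arange(0, 2, 1 / fs)
drive = np.where(t < 1.0, np.sin(2 * np.pi * 8 * t), 0.0)  # 8 Hz drive, off at 1 s

ringing = driven_oscillator(8, zeta=0.02, drive=drive, fs=fs)  # weak damping
damped = driven_oscillator(8, zeta=0.9, drive=drive, fs=fs)    # strong damping

def persistence(x):
    """Peak amplitude after the drive stops, relative to during the drive."""
    during = np.abs(x[fs // 2:fs]).max()
    after = np.abs(x[int(1.2 * fs):int(1.5 * fs)]).max()
    return after / during
```

The damping ratio `zeta` sets the ring-down time constant 1/(zeta*w0), which is what distinguishes the three response classes the authors describe.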
Affiliation(s)
- Agnès Trébuchon
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- APHM, Hôpital de la Timone, Service de Neurophysiologie Clinique, Marseille 13005, France
- Benjamin Morillon
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
- Daniele Schön
- Aix Marseille Univ, Inserm, INS, Inst Neurosci Syst, Marseille, France
18
Mai G, Howell P. Causal Relationship between the Right Auditory Cortex and Speech-Evoked Envelope-Following Response: Evidence from Combined Transcranial Stimulation and Electroencephalography. Cereb Cortex 2021; 32:1437-1454. [PMID: 34424956 PMCID: PMC8971082 DOI: 10.1093/cercor/bhab298] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2021] [Revised: 07/26/2021] [Accepted: 07/27/2021] [Indexed: 11/27/2022] Open
Abstract
Speech-evoked envelope-following response (EFR) reflects brain encoding of speech periodicity that serves as a biomarker for pitch and speech perception and various auditory and language disorders. Although EFR is thought to originate from the subcortex, recent research illustrated a right-hemispheric cortical contribution to EFR. However, it is unclear whether this contribution is causal. This study aimed to establish this causality by combining transcranial direct current stimulation (tDCS) and measurement of EFR (pre- and post-tDCS) via scalp-recorded electroencephalography. We applied tDCS over the left and right auditory cortices in right-handed normal-hearing participants and examined whether altering cortical excitability via tDCS causes changes in EFR during monaural listening to speech syllables. We showed significant changes in EFR magnitude when tDCS was applied over the right auditory cortex compared with sham stimulation for the listening ear contralateral to the stimulation site. No such effect was found when tDCS was applied over the left auditory cortex. Crucially, we further observed a hemispheric laterality where aftereffect was significantly greater for tDCS applied over the right than the left auditory cortex in the contralateral ear condition. Our finding thus provides the first evidence that validates the causal relationship between the right auditory cortex and EFR.
Affiliation(s)
- Guangting Mai
- Hearing Theme, National Institute for Health Research Nottingham Biomedical Research Centre, Nottingham NG1 5DU, UK
- Division of Clinical Neuroscience, School of Medicine, University of Nottingham, Nottingham NG7 2UH, UK
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Peter Howell
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
19
Zhang X, Gong Q. Context-dependent Plasticity and Strength of Subcortical Encoding of Musical Sounds Independently Underlie Pitch Discrimination for Music Melodies. Neuroscience 2021; 472:68-89. [PMID: 34358631 DOI: 10.1016/j.neuroscience.2021.07.032] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2021] [Revised: 07/26/2021] [Accepted: 07/27/2021] [Indexed: 10/20/2022]
Abstract
Subcortical auditory nuclei contribute to pitch perception, but how subcortical sound encoding is related to pitch processing for music perception remains unclear. Conventionally, enhanced subcortical sound encoding is considered underlying superior pitch discrimination. However, associations between superior auditory perception and the context-dependent plasticity of subcortical sound encoding are also documented. Here, we explored the subcortical neural correlates to music pitch perception by analyzing frequency-following responses (FFRs) to musical sounds presented in a predictable context and a random context. We found that the FFR inter-trial phase-locking (ITPL) was negatively correlated with behavioral performances of discrimination of pitches in music melodies. It was also negatively correlated with the plasticity indices measuring the variability of FFRs to physically identical sounds between the two contexts. The plasticity indices were consistently positively correlated with pitch discrimination performances, suggesting the subcortical context-dependent plasticity underlying music pitch perception. Moreover, the raw FFR spectral strength was not significantly correlated with pitch discrimination performances. However, it was positively correlated with behavioral performances when the FFR ITPL was controlled by partial correlations, suggesting that the strength of subcortical sound encoding underlies music pitch perception. When the spectral strength was controlled by partial correlations, the negative ITPL-behavioral correlations were maintained. Furthermore, the FFR ITPL, the plasticity indices, and the FFR spectral strength were more correlated with pitch than with rhythm discrimination performances. These findings suggest that the context-dependent plasticity and the strength of subcortical encoding of musical sounds are independently and perhaps specifically associated with pitch perception for music melodies.
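The FFR inter-trial phase-locking (ITPL) measure central to this study quantifies how reproducible the response phase is across trials at a frequency of interest. A minimal sketch on synthetic trials (the stimulus frequency, trial count, and noise level are illustrative assumptions, not the study's recording parameters):

```python
import numpy as np

def itpl(trials, fs, freq):
    """Inter-trial phase locking: project each trial onto a complex
    exponential at `freq`, keep only the phase, and average the unit
    phasors. 1 = same phase on every trial, ~0 = random phase."""
    t = np.arange(trials.shape[1]) / fs
    phases = np.angle(trials @ np.exp(-2j * np.pi * freq * t))
    return float(np.abs(np.mean(np.exp(1j * phases))))

rng = np.random.default_rng(1)
fs, f0, n_trials, n_samples = 1000, 100, 50, 1000
t = np.arange(n_samples) / fs

# Phase-locked: identical 100 Hz phase on every trial, plus noise.
locked = np.sin(2 * np.pi * f0 * t) + rng.standard_normal((n_trials, n_samples))
# Not phase-locked: a random phase on each trial.
jitter = rng.uniform(0, 2 * np.pi, size=(n_trials, 1))
unlocked = np.sin(2 * np.pi * f0 * t + jitter) + rng.standard_normal((n_trials, n_samples))

itpl_locked = itpl(locked, fs, f0)
itpl_unlocked = itpl(unlocked, fs, f0)
```

Because ITPL discards amplitude, it can dissociate from raw spectral strength, which is why the study can examine the two measures with partial correlations.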
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- School of Medicine, Shanghai University, Shanghai, China
20
Abstract
The perception of sensory events can be enhanced or suppressed by the surrounding spatial and temporal context in ways that facilitate the detection of novel objects and contribute to the perceptual constancy of those objects under variable conditions. In the auditory system, the phenomenon known as auditory enhancement reflects a general principle of contrast enhancement, in which a target sound embedded within a background sound becomes perceptually more salient if the background is presented first by itself. This effect is highly robust, producing an effective enhancement of the target of up to 25 dB (more than two orders of magnitude in intensity), depending on the task. Despite the importance of the effect, neural correlates of auditory contrast enhancement have yet to be identified in humans. Here, we used the auditory steady-state response to probe the neural representation of a target sound under conditions of enhancement. The probe was simultaneously modulated in amplitude with two modulation frequencies to distinguish cortical from subcortical responses. We found robust correlates for neural enhancement in the auditory cortical, but not subcortical, responses. Our findings provide empirical support for a previously unverified theory of auditory enhancement based on neural adaptation of inhibition and point to approaches for improving sensory prostheses for hearing loss, such as hearing aids and cochlear implants.
Affiliation(s)
- Anahita H Mehta
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455
- Lei Feng
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, MN 55455
21
Van Canneyt J, Wouters J, Francart T. Cortical compensation for hearing loss, but not age, in neural tracking of the fundamental frequency of the voice. J Neurophysiol 2021; 126:791-802. [PMID: 34232756 DOI: 10.1152/jn.00156.2021] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023] Open
Abstract
Auditory processing is affected by advancing age and hearing loss, but the underlying mechanisms are still unclear. We investigated the effects of age and hearing loss on temporal processing of naturalistic stimuli in the auditory system. We used a recently developed objective measure for neural phase-locking to the fundamental frequency of the voice (f0) which uses continuous natural speech as a stimulus, that is, "f0-tracking." The f0-tracking responses from 54 normal-hearing and 14 hearing-impaired adults of varying ages were analyzed. The responses were evoked by a Flemish story with a male talker and contained contributions from both subcortical and cortical sources. Results indicated that advancing age was related to smaller responses with less cortical response contributions. This is consistent with an age-related decrease in neural phase-locking ability at frequencies in the range of the f0, possibly due to decreased inhibition in the auditory system. Conversely, hearing-impaired subjects displayed larger responses compared with age-matched normal-hearing controls. This was due to additional cortical response contributions in the 38- to 50-ms latency range, which were stronger for participants with more severe hearing loss. This is consistent with hearing-loss-induced cortical reorganization and recruitment of additional neural resources to aid in speech perception.NEW & NOTEWORTHY Previous studies disagree on the effects of age and hearing loss on the neurophysiological processing of the fundamental frequency of the voice (f0), in part due to confounding effects. Using a novel electrophysiological technique, natural speech stimuli, and controlled study design, we quantified and disentangled the effects of age and hearing loss on neural f0 processing. We uncovered evidence for underlying neurophysiological mechanisms, including a cortical compensation mechanism for hearing loss, but not for age.
Affiliation(s)
- Jan Wouters
- ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Tom Francart
- ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
22
Gransier R, Guérit F, Carlyon RP, Wouters J. Frequency following responses and rate change complexes in cochlear implant users. Hear Res 2021; 404:108200. [PMID: 33647574 PMCID: PMC8052190 DOI: 10.1016/j.heares.2021.108200] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/01/2020] [Revised: 01/25/2021] [Accepted: 02/06/2021] [Indexed: 01/05/2023]
Abstract
The upper limit of rate-based pitch perception and rate discrimination can differ substantially across cochlear implant (CI) users. One potential reason for this difference is the presence of a biological limitation on temporal encoding in the electrically-stimulated auditory pathway, which can be inherent to the electrical stimulation itself and/or to the degenerative processes associated with hearing loss. Electrophysiological measures, like the electrically-evoked frequency following response (eFFR) and auditory change complex (eACC), could potentially provide valuable insights in the temporal processing limitations at the level of the brainstem and cortex in the electrically-stimulated auditory pathway. Obtaining these neural responses, free from stimulation artifacts, is challenging, especially when the neural response is phase-locked to the stimulation rate, as is the case for the eFFR. In this study we investigated the feasibility of measuring eFFRs, free from stimulation artifacts, to stimulation rates ranging from 94 to 196 pulses per second (pps) and eACCs to pulse rate changes ranging from 36 to 108%, when stimulating in a monopolar configuration. A high-sampling rate EEG system was used to measure the electrophysiological responses in five CI users, and linear interpolation was applied to remove the stimulation artifacts from the EEG. With this approach, we were able to measure eFFRs for pulse rates up to 162 pps and eACCs to the different rate changes. Our results show that it is feasible to measure electrophysiological responses, free from stimulation artifacts, that could potentially be used as neural correlates for rate and pitch processing in CI users.
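The linear-interpolation artifact removal described here amounts to blanking a short window around each stimulation pulse and bridging the gap with a straight line. A toy sketch (the window length, pulse rate, and artifact amplitude below are illustrative, not the study's exact parameters):

```python
import numpy as np

def interpolate_artifacts(eeg, fs, pulse_times_s, blank_ms=2.0):
    """Blank a short window around each stimulation pulse and bridge it
    by linear interpolation; `blank_ms` is an illustrative choice."""
    out = eeg.copy()
    half = int(fs * blank_ms / 2000)
    for t0 in pulse_times_s:
        c = int(round(t0 * fs))
        lo, hi = max(c - half, 1), min(c + half, out.size - 2)
        out[lo:hi + 1] = np.interp(np.arange(lo, hi + 1),
                                   [lo - 1, hi + 1], [out[lo - 1], out[hi + 1]])
    return out

fs = 10_000
t = np.arange(0, 1, 1 / fs)
neural = np.sin(2 * np.pi * 5 * t)               # slow "neural" response
pulse_times = np.arange(0.05, 1.0, 1 / 162)      # 162 pps stimulation
eeg = neural.copy()
for p in pulse_times:
    eeg[int(p * fs)] += 50.0                     # large, brief stimulation artifacts

cleaned = interpolate_artifacts(eeg, fs, pulse_times)
```

This works only while the neural signal of interest is slow relative to the blanked window, which is why high sampling rates and short artifacts matter for the eFFR.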
Affiliation(s)
- Robin Gransier
- KU Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Box 721, Leuven 3000, Belgium.
- François Guérit
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Robert P Carlyon
- Cambridge Hearing Group, MRC Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge CB2 7EF, United Kingdom
- Jan Wouters
- KU Leuven, Department of Neurosciences, ExpORL, Herestraat 49, Box 721, Leuven 3000, Belgium
23
Van Canneyt J, Wouters J, Francart T. Neural tracking of the fundamental frequency of the voice: The effect of voice characteristics. Eur J Neurosci 2021; 53:3640-3653. [DOI: 10.1111/ejn.15229] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2021] [Revised: 03/24/2021] [Accepted: 04/08/2021] [Indexed: 11/26/2022]
Affiliation(s)
- Jan Wouters
- ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Tom Francart
- ExpORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
24
Encina-Llamas G, Dau T, Epp B. On the use of envelope following responses to estimate peripheral level compression in the auditory system. Sci Rep 2021; 11:6962. [PMID: 33772043 PMCID: PMC7997911 DOI: 10.1038/s41598-021-85850-x] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 03/08/2021] [Indexed: 12/22/2022] Open
Abstract
Individual estimates of cochlear compression may provide complementary information to traditional audiometric hearing thresholds in disentangling different types of peripheral cochlear damage. Here we investigated the use of the slope of envelope following response (EFR) magnitude-level functions, obtained from four simultaneously presented amplitude-modulated tones with modulation frequencies of 80-100 Hz, as a proxy for peripheral level compression. Compression estimates in individual normal-hearing (NH) listeners were consistent with previously reported group-averaged compression estimates based on psychoacoustical and distortion-product otoacoustic emission (DPOAE) measures in human listeners. They were also similar to basilar membrane (BM) compression values measured invasively in non-human mammals. EFR-based compression estimates in hearing-impaired listeners were less compressive than those for the NH listeners, consistent with a reduction of BM compression. Cochlear compression was also estimated using DPOAEs in the same NH listeners. DPOAE estimates were larger (less compressive) than the EFR estimates, and the two measures were uncorrelated. Despite the numerical concordance between EFR-based compression estimates and group-averaged estimates from other methods, simulations using an auditory nerve (AN) model revealed that compression estimates based on EFRs might be strongly influenced by contributions from off-characteristic-frequency (CF) neural populations. This compromises the ability to estimate on-CF (i.e., frequency-specific or "local") peripheral level compression with EFRs.
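The slope-based compression metric described above can be sketched as a least-squares fit to the EFR magnitude-level function. The numbers below are synthetic and purely illustrative: a slope near 1 dB/dB indicates linear growth, while a slope well below 1 indicates a compressive input-output function.

```python
import numpy as np

def compression_slope(levels_db, efr_mag_db):
    """Least-squares slope of an EFR magnitude-level function, in dB/dB."""
    slope, _intercept = np.polyfit(levels_db, efr_mag_db, 1)
    return slope

# synthetic magnitude-level function growing 0.3 dB per dB of level,
# a stand-in for a compressive cochlear input-output curve
levels = np.arange(40.0, 85.0, 5.0)     # stimulus levels, dB SPL
mags = 0.3 * levels - 20.0              # hypothetical EFR magnitudes, dB
slope = compression_slope(levels, mags)
```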
Affiliation(s)
- Gerard Encina-Llamas
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark (DTU), 2800, Kongens Lyngby, Denmark.
- Torsten Dau
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark (DTU), 2800, Kongens Lyngby, Denmark
- Bastian Epp
- Hearing Systems Section, Department of Health Technology, Technical University of Denmark (DTU), 2800, Kongens Lyngby, Denmark
25
Neural generators of the frequency-following response elicited to stimuli of low and high frequency: A magnetoencephalographic (MEG) study. Neuroimage 2021; 231:117866. [PMID: 33592244 DOI: 10.1016/j.neuroimage.2021.117866] [Citation(s) in RCA: 37] [Impact Index Per Article: 12.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 02/08/2021] [Accepted: 02/09/2021] [Indexed: 01/03/2023] Open
Abstract
The frequency-following response (FFR) to periodic complex sounds has gained recent interest in auditory cognitive neuroscience, as it captures with great fidelity the tracking accuracy of periodic sound features in the ascending auditory system. Seminal studies suggested the FFR as a correlate of subcortical sound encoding, yet recent studies aiming to locate its sources challenged this assumption, demonstrating that the FFR receives some contribution from the auditory cortex. Based on frequency-specific phase-locking capabilities along the auditory hierarchy, we hypothesized that FFRs to higher frequencies would receive less cortical contribution than those to lower frequencies, hence supporting a major subcortical involvement for these high-frequency sounds. Here, we used a magnetoencephalographic (MEG) approach to trace the neural sources of the FFR elicited in healthy adults (N = 19) by low (89 Hz) and high (333 Hz) frequency sounds. FFRs elicited by the high- and low-frequency sounds were clearly observable with MEG and comparable to those obtained in simultaneous electroencephalographic recordings. Distributed source modeling analyses revealed midbrain, thalamic, and cortical contributions to the FFR, arranged in frequency-specific configurations. Our results showed that the main contribution to the high-frequency sound FFR originated in the inferior colliculus and the medial geniculate body of the thalamus, with no significant cortical contribution. In contrast, the low-frequency sound FFR had a major contribution located in the auditory cortices, and also received contributions originating in midbrain and thalamic structures. These findings support the multiple-generator hypothesis of the FFR and are relevant for our understanding of the neural encoding of sounds along the auditory hierarchy, suggesting a hierarchical organization of periodicity encoding.
26
Pino O. A randomized controlled trial (RCT) to explore the effect of audio-visual entrainment among psychological disorders. ACTA BIO-MEDICA : ATENEI PARMENSIS 2021; 92:e2021408. [PMID: 35075067 PMCID: PMC8823583 DOI: 10.23750/abm.v92i6.12089] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/30/2021] [Accepted: 07/31/2021] [Indexed: 11/17/2022]
Abstract
BACKGROUND AND AIM Although many mental disorders have relevant roots in neurobiological dysfunction, most intervention approaches neglect neurophysiological features or rely on pharmacological intervention alone. Non-invasive Brain-Computer Interfaces (BCIs), which provide natural ways of modulating mood states, can be promoted as an alternative intervention for coping with neurobiological dysfunction. METHODS A BCI prototype was proposed to feed back a person's affective state such that a closed-loop interaction between the participant's brain responses and the musical stimuli is established. It delivers, in real time, flickering lights matched to the individual's brain rhythms during auditory stimulation. An RCT was carried out on 15 individuals of both genders (mean age = 49.27 years) with anxiety and depressive spectrum disorders, randomly assigned to two groups (experimental vs. active control). RESULTS Outcome measures revealed both a significant decrease in Hamilton Rating Scale for Depression (HAM-D) scores and gains in cognitive function, but only for participants who underwent the experimental treatment. Variability in HAM-D scores seems to be explained by changes in the beta 1, beta 2, and delta bands. Conversely, the rise in cognitive function scores appears to be associated with theta variations. CONCLUSIONS Future work needs to validate the relationship proposed here between music and brain responses. The findings of the present study support a range of research examining BCI-based brain modulation and contribute to the understanding of this technique as an instrument for alternative therapies. We believe that Neuro-Upper can be used as an effective new tool for investigating affective responses and emotion regulation (www.actabiomedica.it).
Affiliation(s)
- Olimpia Pino
- University of Parma, Department of Medicine & Surgery, Neuroscience Unit.
27
Kessler DM, Ananthakrishnan S, Smith SB, D'Onofrio K, Gifford RH. Frequency Following Response and Speech Recognition Benefit for Combining a Cochlear Implant and Contralateral Hearing Aid. Trends Hear 2020; 24:2331216520902001. [PMID: 32003296 PMCID: PMC7257083 DOI: 10.1177/2331216520902001] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/15/2022] Open
Abstract
Multiple studies have shown significant speech recognition benefit when acoustic hearing is combined with a cochlear implant (CI) in a bimodal hearing configuration. However, this benefit varies greatly between individuals. Few clinical measures correlate with bimodal benefit, and those correlations are driven by extreme values, precluding data-driven clinical counseling. This study evaluated the relationship between bimodal benefit for speech recognition in quiet and in noise and (a) the neural representation of fundamental frequency (F0) and temporal fine structure, via the frequency following response (FFR) in the nonimplanted ear, and (b) the spectral and temporal resolution of the nonimplanted ear. Participants included 14 unilateral CI users who wore a hearing aid (HA) in the nonimplanted ear. Testing included speech recognition in quiet and in noise with the HA alone, the CI alone, and in the bimodal condition (i.e., CI + HA); measures of spectral and temporal resolution in the nonimplanted ear; and FFR recording for a 170-ms /da/ stimulus in the nonimplanted ear. Even after controlling for four-frequency pure-tone average, there was a significant correlation (r = .83) between FFR F0 amplitude in the nonimplanted ear and bimodal benefit. Other measures of auditory function of the nonimplanted ear were not significantly correlated with bimodal benefit. The FFR holds potential as an objective tool that may allow data-driven counseling regarding expected benefit from the nonimplanted ear. It is possible that this information may eventually be used for clinical decision-making, particularly in difficult-to-test populations such as young children, regarding the effectiveness of bimodal hearing versus bilateral CI candidacy.
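Controlling a correlation for pure-tone average, as in the analysis above, amounts to correlating residuals after regressing the covariate out of both variables. A minimal sketch on simulated data follows; the variable names and numbers are invented, not the study's measurements.

```python
import numpy as np

def partial_corr(x, y, covar):
    """Pearson correlation between x and y after linearly
    regressing covar out of both."""
    def resid(v):
        coeffs = np.polyfit(covar, v, 1)
        return v - np.polyval(coeffs, covar)
    return np.corrcoef(resid(x), resid(y))[0, 1]

# simulated data in which the raw correlation is driven by the covariate
rng = np.random.default_rng(0)
pta = rng.normal(50.0, 10.0, 60)                 # pure-tone average, dB HL
ffr_f0 = 0.5 * pta + rng.normal(0.0, 1.0, 60)    # both depend on PTA...
benefit = 0.5 * pta + rng.normal(0.0, 1.0, 60)   # ...but not on each other
raw_r = np.corrcoef(ffr_f0, benefit)[0, 1]
partial_r = partial_corr(ffr_f0, benefit, pta)
```

In this construction the raw correlation is large while the partial correlation collapses toward zero, which is exactly the confound the study's control for pure-tone average guards against.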
Affiliation(s)
- David M Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Spencer B Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, TX, USA
- Kristen D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Otolaryngology, Vanderbilt University Medical Center, Nashville, TN, USA
28
White-Schwoch T, Krizman J, Nicol T, Kraus N. Case studies in neuroscience: cortical contributions to the frequency-following response depend on subcortical synchrony. J Neurophysiol 2020; 125:273-281. [PMID: 33206575 DOI: 10.1152/jn.00104.2020] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023] Open
Abstract
Frequency-following responses to musical notes spanning the octave 65-130 Hz were elicited in a person with auditory neuropathy, a disorder of subcortical neural synchrony, and in a control subject. No phase-locked responses were observed in the person with auditory neuropathy. The control subject had robust responses synchronized to the fundamental frequency and its harmonics. Cortical onset responses to each note in the series were present in both subjects. These results support the hypothesis that subcortical neural synchrony is necessary to generate the frequency-following response, including at stimulus frequencies for which a cortical contribution has been noted. Although auditory cortex ensembles may synchronize to fundamental frequency cues in speech and music, subcortical neural synchrony appears to be a necessary antecedent. NEW & NOTEWORTHY A listener with auditory neuropathy, an absence of subcortical neural synchrony, did not have electrophysiological frequency-following responses synchronized to an octave of musical notes with fundamental frequencies ranging from 65 to 130 Hz. A control subject had robust responses that phase-locked to each note. Although auditory cortex may contribute to the scalp-recorded frequency-following response in healthy listeners, our results suggest this phenomenon depends on subcortical neural synchrony.
Affiliation(s)
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois; Departments of Neurobiology and Otolaryngology, Northwestern University, Evanston, Illinois
29
Kulasingham JP, Brodbeck C, Presacco A, Kuchinsky SE, Anderson S, Simon JZ. High gamma cortical processing of continuous speech in younger and older listeners. Neuroimage 2020; 222:117291. [PMID: 32835821 PMCID: PMC7736126 DOI: 10.1016/j.neuroimage.2020.117291] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 08/12/2020] [Accepted: 08/16/2020] [Indexed: 12/11/2022] Open
Abstract
Neural processing along the ascending auditory pathway is often associated with a progressive reduction in characteristic processing rates. For instance, the well-known frequency-following response (FFR) of the auditory midbrain, as measured with electroencephalography (EEG), is dominated by frequencies from ∼100 Hz to several hundred Hz, phase-locking to the acoustic stimulus at those frequencies. In contrast, cortical responses, whether measured by EEG or magnetoencephalography (MEG), are typically characterized by frequencies of a few Hz to a few tens of Hz, time-locking to acoustic envelope features. In this study we investigated a crossover case: cortically generated responses time-locked to continuous speech features at FFR-like rates. Using MEG, we analyzed responses in the high gamma range of 70-200 Hz to continuous speech, using neural source-localized reverse correlation and the corresponding temporal response functions (TRFs). Continuous speech stimuli were presented to 40 subjects (17 younger, 23 older adults) with clinically normal hearing, and their MEG responses were analyzed in the 70-200 Hz band. Consistent with the relative insensitivity of MEG to many subcortical structures, the spatiotemporal profile of these response components indicated a cortical origin, with ∼40 ms peak latency and a right-hemisphere bias. TRF analysis was performed using two separate aspects of the speech stimuli: (a) the 70-200 Hz carrier of the speech, and (b) the 70-200 Hz temporal modulations in the spectral envelope of the speech stimulus. The response was dominantly driven by the envelope modulation, with a much weaker contribution from the carrier. Age-related differences were also analyzed to investigate a reversal previously seen along the ascending auditory pathway, whereby older listeners show weaker midbrain FFR responses than younger listeners but, paradoxically, stronger cortical low-frequency responses. In contrast to both of these earlier results, this study did not find clear age-related differences in high gamma cortical responses to continuous speech. Cortical responses at FFR-like frequencies shared some properties with midbrain responses at the same frequencies and with cortical responses at much lower frequencies.
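Reverse correlation with temporal response functions, as used above, reduces to a regularized least-squares fit of a lagged stimulus matrix to the response. The sketch below validates the idea on toy data with a known kernel; it is an assumed minimal formulation, not the study's source-localized MEG pipeline.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags, lam=1e-3):
    """Ridge-regularized reverse correlation: estimate the linear
    filter (TRF) mapping the stimulus to the response."""
    n = len(response)
    X = np.zeros((n, n_lags))              # lagged design matrix
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ response)

# toy validation: a response generated by a known kernel is recovered
rng = np.random.default_rng(1)
stim = rng.normal(size=2000)
kernel = np.array([0.0, 1.0, 0.5, -0.25, 0.0])
resp = np.convolve(stim, kernel)[:2000]    # response = stim convolved with kernel
trf = estimate_trf(stim, resp, n_lags=8)
```

With white-noise input and a small ridge penalty, the estimated TRF matches the generating kernel almost exactly; real MEG data additionally require cross-validation of the regularization strength.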
Affiliation(s)
- Joshua P Kulasingham
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States
- Christian Brodbeck
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
- Alessandro Presacco
- Institute for Systems Research, University of Maryland, College Park, Maryland, United States
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland, United States
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, United States
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, United States; Institute for Systems Research, University of Maryland, College Park, Maryland, United States; Department of Biology, University of Maryland, College Park, Maryland, United States
30
Speech frequency-following response in human auditory cortex is more than a simple tracking. Neuroimage 2020; 226:117545. [PMID: 33186711 DOI: 10.1016/j.neuroimage.2020.117545] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2020] [Revised: 10/29/2020] [Accepted: 11/02/2020] [Indexed: 11/20/2022] Open
Abstract
The human auditory cortex has recently been found to contribute to the frequency following response (FFR), and this cortical component has been shown to be more relevant to speech perception. However, it is not clear how the cortical FFR may contribute to the processing of the speech fundamental frequency (F0) and dynamic pitch. Using intracranial EEG recordings, we observed a significant FFR at the fundamental frequency (F0) for both speech and speech-like harmonic complex stimuli in the human auditory cortex, even in the missing-fundamental condition. Both the spectral amplitude and phase coherence of the cortical FFR showed a significant harmonic preference, and both attenuated from the primary auditory cortex to the surrounding associative auditory cortex. The phase coherence of the speech FFR was significantly higher than that of the harmonic complex stimuli, especially in the left hemisphere, demonstrating the high timing fidelity of the cortical FFR in tracking dynamic F0 in speech. Spectrally, the frequency band of the cortical FFR largely overlapped with the range of the human vocal pitch. Taken together, our study parses the intrinsic properties of the cortical FFR and reveals a preference for speech-like sounds, supporting its potential role in processing speech intonation and lexical tones.
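The phase-coherence measure referenced above is commonly computed as the resultant length of single-trial phase angles at the frequency of interest. A minimal sketch with simulated trials follows; the parameters are invented, and this is not the study's intracranial analysis.

```python
import numpy as np

def phase_coherence(trials, fs, freq):
    """Inter-trial phase coherence at one frequency: length of the mean
    unit phasor across trials (0 = random phases, 1 = perfect locking)."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))          # FFT bin of the target frequency
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

# simulated trials: a 100 Hz phase-locked component in noise vs. pure noise
fs, n_trials, n_samp = 1000, 30, 1000
rng = np.random.default_rng(2)
t = np.arange(n_samp) / fs
locked = np.sin(2 * np.pi * 100 * t) + 0.5 * rng.normal(size=(n_trials, n_samp))
noise = rng.normal(size=(n_trials, n_samp))
itpc_locked = phase_coherence(locked, fs, 100)
itpc_noise = phase_coherence(noise, fs, 100)
```

Phase-locked trials yield a coherence near 1, while unlocked noise yields a small value that shrinks with the number of trials.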
31
Meter enhances the subcortical processing of speech sounds at a strong beat. Sci Rep 2020; 10:15973. [PMID: 32994430 PMCID: PMC7525485 DOI: 10.1038/s41598-020-72714-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Accepted: 09/07/2020] [Indexed: 11/08/2022] Open
Abstract
The temporal structure of sound such as in music and speech increases the efficiency of auditory processing by providing listeners with a predictable context. Musical meter is a good example of a sound structure that is temporally organized in a hierarchical manner, with recent studies showing that meter optimizes neural processing, particularly for sounds located at a higher metrical position or strong beat. Whereas enhanced cortical auditory processing at times of high metric strength has been studied, there is to date no direct evidence showing metrical modulation of subcortical processing. In this work, we examined the effect of meter on the subcortical encoding of sounds by measuring human auditory frequency-following responses to speech presented at four different metrical positions. Results show that neural encoding of the fundamental frequency of the vowel was enhanced at the strong beat, and also that the neural consistency of the vowel was the highest at the strong beat. When comparing musicians to non-musicians, musicians were found, at the strong beat, to selectively enhance the behaviorally relevant component of the speech sound, namely the formant frequency of the transient part. Our findings indicate that the meter of sound influences subcortical processing, and this metrical modulation differs depending on musical expertise.
32
Saiz-Alía M, Reichenbach T. Computational modeling of the auditory brainstem response to continuous speech. J Neural Eng 2020; 17:036035. [DOI: 10.1088/1741-2552/ab970d] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/26/2022]
33
Van Canneyt J, Wouters J, Francart T. From modulated noise to natural speech: The effect of stimulus parameters on the envelope following response. Hear Res 2020; 393:107993. [PMID: 32535277 DOI: 10.1016/j.heares.2020.107993] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/25/2019] [Revised: 04/28/2020] [Accepted: 05/04/2020] [Indexed: 11/28/2022]
Abstract
Envelope following responses (EFRs) can be evoked by a wide range of auditory stimuli, but for many stimulus parameters the effect on EFR strength is not fully understood. This complicates the comparison of earlier studies and the design of new studies. Furthermore, the most optimal stimulus parameters are unknown. To help resolve this issue, we investigated the effects of four important stimulus parameters, and their interactions, on the EFR. Responses were measured in 16 normal-hearing subjects, evoked by stimuli with four levels of stimulus complexity (amplitude-modulated noise, artificial vowels, natural vowels, and vowel-consonant-vowel combinations), three fundamental frequencies (105 Hz, 185 Hz and 245 Hz), three fundamental frequency contours (upward sweeping, downward sweeping, and flat), and three vowel identities (Flemish /a:/, /u:/, and /i:/). We found that EFRs evoked by artificial vowels were on average 4-6 dB SNR larger than responses evoked by the other stimulus complexities, probably because of (unnaturally) strong higher harmonics. Moreover, response amplitude decreased with fundamental frequency, but response SNR remained largely unaffected. Thirdly, fundamental frequency variation within the stimulus did not impact EFR strength, but only when the rate of change remained low (which was not the case for sweeping natural vowels). Finally, the vowel /i:/ appeared to evoke larger response amplitudes than /a:/ and /u:/, but statistical power was too low to confirm this. Vowel-dependent differences in response strength have been suggested to stem from destructive interference between response components. We show how a model of the auditory periphery can simulate these interference patterns and predict response strength. Altogether, the results of this study can guide stimulus choice for future EFR research and practical applications.
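Response SNR in EFR work is commonly defined as the power in the FFT bin at the stimulus F0 relative to the mean power of neighboring bins. A hedged sketch follows; the convention of ±10 noise bins and all signal parameters are assumptions for illustration, not this study's exact estimator.

```python
import numpy as np

def efr_snr_db(eeg, fs, f0, n_noise_bins=10):
    """SNR at f0: power in the f0 bin over the mean power of nearby bins."""
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    k = int(round(f0 * len(eeg) / fs))
    noise = np.r_[spec[k - n_noise_bins:k], spec[k + 1:k + 1 + n_noise_bins]]
    return 10.0 * np.log10(spec[k] / noise.mean())

# synthetic "EFR": a 105 Hz component buried in noise, 2 s at 1 kHz
fs, dur_s, f0 = 1000, 2, 105
rng = np.random.default_rng(3)
t = np.arange(fs * dur_s) / fs
eeg = np.sin(2 * np.pi * f0 * t) + 0.2 * rng.normal(size=fs * dur_s)
snr = efr_snr_db(eeg, fs, f0)
```

The recording duration is chosen so that f0 falls exactly on an FFT bin; with mismatched durations, spectral leakage spreads the response power across bins and biases the estimate downward.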
Affiliation(s)
- Jana Van Canneyt
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 Bus 721, 3000, Leuven, Belgium.
- Jan Wouters
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 Bus 721, 3000, Leuven, Belgium.
- Tom Francart
- ExpORL, Dept. of Neurosciences, KU Leuven, Herestraat 49 Bus 721, 3000, Leuven, Belgium.
34
Richard C, Neel ML, Jeanvoine A, Connell SM, Gehred A, Maitre NL. Characteristics of the Frequency-Following Response to Speech in Neonates and Potential Applicability in Clinical Practice: A Systematic Review. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:1618-1635. [PMID: 32407639 DOI: 10.1044/2020_jslhr-19-00322] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose We sought to critically analyze and evaluate published evidence regarding the feasibility and clinical potential of frequency-following responses (FFRs) to speech recordings in neonates (birth to 28 days) for predicting neurodevelopmental outcomes. Method A systematic search of MeSH terms in the Cumulative Index to Nursing and Allied Health Literature, Embase, Google Scholar, Ovid Medline (R) and E-Pub Ahead of Print, In-Process & Other Non-Indexed Citations and Daily, Web of Science, SCOPUS, COCHRANE Library, and ClinicalTrials.gov was performed. Manual review of all items identified in the search was performed by two independent reviewers. Articles were evaluated based on the level of methodological quality and evidence according to the RTI item bank. Results Seven articles met inclusion criteria. None of the included studies reported neurodevelopmental outcomes past 3 months of age. Quality of the evidence ranged from moderate to high. Protocol variations were frequent. Conclusions Based on this systematic review, the FFR to speech can capture both temporal and spectral acoustic features in neonates. It can be recorded accurately, quickly, and easily at the infant's bedside. However, at this time, further studies are needed to identify and validate which FFR features could be incorporated into the standard evaluation of infant sound processing in subcortico-cortical networks. This review identifies the need for further research focused on identifying specific features of neonatal FFRs, particularly those with predictive value for early childhood outcomes, to help guide targeted early speech and hearing interventions.
Affiliation(s)
- Céline Richard
- Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Laboratory for Investigative Neurophysiology, Department of Radiology and Department of Clinical Neurosciences, University Hospital Center and University of Lausanne, Switzerland
- Mary Lauren Neel
- Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Arnaud Jeanvoine
- Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Sharon Mc Connell
- Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Alison Gehred
- Medical Library Division, Nationwide Children's Hospital, Columbus, OH
- Nathalie L Maitre
- Center for Perinatal Research and Department of Pediatrics, Nationwide Children's Hospital, Columbus, OH
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
35
De Vos A, Vanvooren S, Ghesquière P, Wouters J. Subcortical auditory neural synchronization is deficient in pre-reading children who develop dyslexia. Dev Sci 2020; 23:e12945. [PMID: 32034978 DOI: 10.1111/desc.12945] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2018] [Revised: 02/03/2020] [Accepted: 02/04/2020] [Indexed: 01/19/2023]
Abstract
Auditory processing of temporal information in speech is sustained by synchronized firing of neurons along the entire auditory pathway. In school-aged children and adults with dyslexia, neural synchronization deficits have been found at cortical levels of the auditory system; however, these deficits do not appear to be present in pre-reading children. An alternative role for subcortical synchronization in reading development and dyslexia has been suggested, but remains debated. By means of a longitudinal study, we assessed cognitive reading-related skills and subcortical auditory steady-state responses (80 Hz ASSRs) in a group of children before formal reading instruction (pre-reading), after 1 year of formal reading instruction (beginning reading), and after 3 years of formal reading instruction (more advanced reading). Children were retrospectively classified into three groups based on family risk and literacy achievement: typically developing children without a family risk for dyslexia, typically developing children with a family risk for dyslexia, and children who developed dyslexia. Our results reveal that children who developed dyslexia demonstrate decreased 80 Hz ASSRs at the pre-reading stage. This effect is no longer present after the onset of reading instruction, due to an atypical developmental increase in 80 Hz ASSRs between the pre-reading and the beginning reading stage. A forward stepwise logistic regression analysis showed that literacy achievement was predictable with an accuracy of 90.4% based on a model including three significant predictors, that is, family risk for dyslexia (R = .31), phonological awareness (R = .23), and 80 Hz ASSRs (R = .26). Given that (1) abnormalities in subcortical ASSRs preceded reading acquisition in children who developed dyslexia and (2) subcortical ASSRs contributed to the prediction of literacy achievement, subcortical auditory synchronization deficits may constitute a pre-reading risk factor in the emergence of dyslexia.
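A stepwise logistic regression like the one above ends with a plain logistic fit on the retained predictors; classification accuracy is then the fraction of correctly predicted outcomes. The minimal gradient-descent sketch below uses simulated data with three invented predictors standing in for family risk, phonological awareness, and 80 Hz ASSRs; it is not the study's dataset or its stepwise selection procedure.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, n_iter=3000):
    """Plain batch gradient descent on the logistic-regression loss."""
    X1 = np.c_[np.ones(len(X)), X]            # prepend intercept column
    w = np.zeros(X1.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-np.clip(X1 @ w, -30, 30)))
        w -= lr * X1.T @ (p - y) / len(y)
    return w

def predict(X, w):
    X1 = np.c_[np.ones(len(X)), X]
    return (X1 @ w > 0).astype(int)           # probability >= 0.5

# simulated cohort: binary outcome driven by three standardized predictors
rng = np.random.default_rng(4)
X = rng.normal(size=(120, 3))
y = (X @ np.array([1.5, -1.0, 0.8]) + 0.3 * rng.normal(size=120) > 0).astype(int)
w = fit_logistic(X, y)
accuracy = np.mean(predict(X, w) == y)
```

Note that accuracy computed on the data used for fitting, as in the original stepwise analysis, is optimistic; cross-validation gives a fairer estimate of predictive value.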
Affiliation(s)
- Astrid De Vos
- Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leuven, Belgium
- Sophie Vanvooren
- Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, Leuven, Belgium; Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leuven, Belgium
- Pol Ghesquière
- Parenting and Special Education Research Unit, Faculty of Psychology and Educational Sciences, KU Leuven - University of Leuven, Leuven, Belgium
- Jan Wouters
- Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, Leuven, Belgium
36
Di Liberto GM, Pelofi C, Bianco R, Patel P, Mehta AD, Herrero JL, de Cheveigné A, Shamma S, Mesgarani N. Cortical encoding of melodic expectations in human temporal cortex. eLife 2020; 9:e51784. [PMID: 32122465 PMCID: PMC7053998 DOI: 10.7554/elife.51784] [Citation(s) in RCA: 45] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2019] [Accepted: 01/20/2020] [Indexed: 01/14/2023] Open
Abstract
Human engagement in music rests on underlying elements such as the listener's cultural background and interest in music. These factors modulate how listeners anticipate musical events, a process inducing instantaneous neural responses as the music confronts these expectations. Measuring such neural correlates would represent a direct window into high-level brain processing. Here we recorded cortical signals as participants listened to Bach melodies. We assessed the relative contributions of acoustic versus melodic components of the music to the neural signal. Melodic features included information on pitch progressions and their tempo, which were extracted from a predictive model of musical structure based on Markov chains. We related the music to brain activity with temporal response functions, demonstrating, for the first time, distinct cortical encoding of pitch and note-onset expectations during naturalistic music listening. This encoding was most pronounced at response latencies up to 350 ms, and in both planum temporale and Heschl's gyrus.
Affiliation(s)
- Giovanni M Di Liberto
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Claire Pelofi
- Department of Psychology, New York University, New York, United States
- Institut de Neurosciences des Systèmes, UMR S 1106, INSERM, Aix Marseille Université, Marseille, France
- Prachi Patel
- Department of Electrical Engineering, Columbia University, New York, United States
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
- Ashesh D Mehta
- Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States
- Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
- Jose L Herrero
- Department of Neurosurgery, Zucker School of Medicine at Hofstra/Northwell, Manhasset, United States
- Feinstein Institute of Medical Research, Northwell Health, Manhasset, United States
- Alain de Cheveigné
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- UCL Ear Institute, London, United Kingdom
- Shihab Shamma
- Laboratoire des systèmes perceptifs, Département d’études cognitives, École normale supérieure, PSL University, CNRS, 75005 Paris, France
- Institute for Systems Research, Electrical and Computer Engineering, University of Maryland, College Park, United States
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, United States
- Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, United States
37
Luo L, Xu N, Wang Q, Li L. Disparity in interaural time difference improves the accuracy of neural representations of individual concurrent narrowband sounds in rat inferior colliculus and auditory cortex. J Neurophysiol 2020; 123:695-706. PMID: 31891521; DOI: 10.1152/jn.00284.2019.
Abstract
The central mechanisms underlying binaural unmasking for spectrally overlapping concurrent sounds, which are unresolved in the peripheral auditory system, remain largely unknown. In this study, frequency-following responses (FFRs) to two binaurally presented independent narrowband noises (NBNs) with overlapping spectra were recorded simultaneously in the inferior colliculus (IC) and auditory cortex (AC) in anesthetized rats. The results showed that for both IC FFRs and AC FFRs, introducing an interaural time difference (ITD) disparity between the two concurrent NBNs enhanced the representation fidelity, reflected by the increased coherence between the responses evoked by double-NBN stimulation and the responses evoked by single NBNs. The ITD disparity effect varied across frequency bands, being more marked for higher frequency bands in the IC and lower frequency bands in the AC. Moreover, the coherence between IC responses and AC responses was also enhanced by the ITD disparity, and the enhancement was most prominent for low-frequency bands and for the IC and AC on the same side. These results suggest a critical role of the ITD cue in the neural segregation of spectrotemporally overlapping sounds. NEW & NOTEWORTHY When two spectrally overlapping narrowband noises are presented at the same time with the same sound-pressure level, they mask each other. Introducing a disparity in interaural time difference between these two narrowband noises improves the accuracy of the neural representation of individual sounds in both the inferior colliculus and the auditory cortex. The lower frequency signal transformation from the inferior colliculus to the auditory cortex on the same side is also enhanced, showing the effect of binaural unmasking.
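The coherence measure used in this study, between responses to double-NBN stimulation and responses to single NBNs, can be sketched as Welch-style magnitude-squared coherence: cross- and auto-spectra averaged over segments. A minimal numpy version follows; the Hann window, segment length, and no-overlap simplification are assumptions, not the study's exact estimator.

```python
import numpy as np

def msc(x, y, nperseg=256):
    """Magnitude-squared coherence between two equal-length signals,
    estimated by averaging per-segment spectra (Hann window, no overlap)."""
    win = np.hanning(nperseg)
    segs = len(x) // nperseg
    Pxx = Pyy = Pxy = 0
    for k in range(segs):
        xs = np.fft.rfft(win * x[k * nperseg:(k + 1) * nperseg])
        ys = np.fft.rfft(win * y[k * nperseg:(k + 1) * nperseg])
        Pxx = Pxx + np.abs(xs) ** 2       # auto-spectrum of x
        Pyy = Pyy + np.abs(ys) ** 2       # auto-spectrum of y
        Pxy = Pxy + xs * np.conj(ys)      # cross-spectrum
    return np.abs(Pxy) ** 2 / (Pxx * Pyy)

rng = np.random.default_rng(1)
sig = rng.standard_normal(4096)
c_same = msc(sig, sig)                          # identical signals: coherence 1
c_indep = msc(sig, rng.standard_normal(4096))   # independent signals: near 0
print(c_same.mean(), c_indep.mean())
```

Without the segment averaging, the ratio is identically 1 for any pair of signals, which is why coherence estimators must average over multiple segments or trials.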
Affiliation(s)
- Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China
- Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, China; Beijing Institute for Brain Disorders, Beijing, China
38
BinKhamis G, Elia Forte A, Reichenbach T, O'Driscoll M, Kluk K. Speech Auditory Brainstem Responses in Adult Hearing Aid Users: Effects of Aiding and Background Noise, and Prediction of Behavioral Measures. Trends Hear 2019; 23:2331216519848297. PMID: 31264513; PMCID: PMC6607564; DOI: 10.1177/2331216519848297.
Abstract
Evaluation of patients who are unable to provide behavioral responses on standard clinical measures is challenging due to the lack of standard objective (non-behavioral) clinical audiological measures that assess the outcome of an intervention (e.g., hearing aids). Brainstem responses to short consonant-vowel stimuli (speech-auditory brainstem responses [speech-ABRs]) have been proposed as a measure of subcortical encoding of speech, speech detection, and speech-in-noise performance in individuals with normal hearing. Here, we investigated the potential application of speech-ABRs as an objective clinical outcome measure of speech detection, speech-in-noise detection and recognition, and self-reported speech understanding in 98 adults with sensorineural hearing loss. We compared aided and unaided speech-ABRs, and speech-ABRs in quiet and in noise. In addition, we evaluated whether speech-ABR F0 encoding (obtained from the complex cross-correlation with the 40 ms [da] fundamental waveform) predicted aided behavioral speech recognition in noise or aided self-reported speech understanding. Results showed that (a) aided speech-ABRs had earlier peak latencies, larger peak amplitudes, and larger F0 encoding amplitudes compared to unaided speech-ABRs; (b) the addition of background noise resulted in later F0 encoding latencies but did not have an effect on peak latencies and amplitudes or on F0 encoding amplitudes; and (c) speech-ABRs were not a significant predictor of any of the behavioral or self-report measures. These results show that speech-ABR F0 encoding is not a good predictor of speech-in-noise recognition or self-reported speech understanding with hearing aids. However, our results suggest that speech-ABRs may have potential for clinical application as an objective measure of speech detection with hearing aids.
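The F0-encoding measure described here, derived from cross-correlating the response with the fundamental waveform of the stimulus, can be sketched as follows. The sampling rate, fundamental frequency, and 7 ms latency are invented for illustration and do not reproduce the study's [da] stimulus or recording setup.

```python
import numpy as np

fs = 16000
t = np.arange(int(0.04 * fs)) / fs          # 40 ms fundamental waveform
f0_wave = np.sin(2 * np.pi * 100 * t)       # idealized 100 Hz fundamental

# Simulate a response: the fundamental embedded at a 7 ms neural delay, plus noise.
rng = np.random.default_rng(5)
lag_true = int(0.007 * fs)
resp = np.zeros(int(0.06 * fs))
resp[lag_true:lag_true + len(f0_wave)] = 0.5 * f0_wave
resp += 0.05 * rng.standard_normal(len(resp))

# Cross-correlate; the peak magnitude indexes F0 encoding amplitude,
# and the lag of the peak indexes encoding latency.
lags = np.arange(len(resp) - len(f0_wave) + 1)
xc = np.array([np.dot(resp[l:l + len(f0_wave)], f0_wave) for l in lags]) / len(f0_wave)
peak_lag_ms = lags[np.argmax(np.abs(xc))] / fs * 1000
print(round(peak_lag_ms, 2))
```

Because the fundamental is periodic, the cross-correlation has side peaks one period away from the true lag; in practice the search is usually restricted to a physiologically plausible latency window.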
Affiliation(s)
- Ghada BinKhamis
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK; Department of Communication and Swallowing Disorders, King Fahad Medical City, Riyadh, Saudi Arabia
- Antonio Elia Forte
- John A. Paulson School of Engineering and Applied Sciences, Harvard University, Cambridge, MA, USA
- Tobias Reichenbach
- Department of Bioengineering, Centre for Neurotechnology, Imperial College London, London, UK
- Martin O'Driscoll
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK; Manchester Auditory Implant Centre, Manchester University Hospitals NHS Foundation Trust, Manchester, UK
- Karolina Kluk
- Manchester Centre for Audiology and Deafness, School of Health Sciences, Faculty of Biology, Medicine and Health, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
39
Kamerer AM, AuBuchon A, Fultz SE, Kopun JG, Neely ST, Rasetshwane DM. The Role of Cognition in Common Measures of Peripheral Synaptopathy and Hidden Hearing Loss. Am J Audiol 2019; 28:843-856. PMID: 31647880; DOI: 10.1044/2019_aja-19-0063.
Abstract
Purpose The aim of this study was to quantify the portion of variance in several measures suggested to be indicative of peripheral noise-induced cochlear synaptopathy and hidden hearing disorder that can be attributed to individual cognitive capacity. Method Regression and relative importance analyses were used to model several behavioral and physiological measures of hearing in 32 adults ranging in age from 20 to 74 years. Predictors for the model were hearing sensitivity and performance on a number of cognitive tasks. Results There was a significant influence of cognitive capacity on several measures of cochlear synaptopathy and hidden hearing disorder. These measures include frequency-modulation detection threshold, time-compressed word recognition in quiet and reverberation, and the strength of the frequency-following response of the speech-evoked auditory brainstem response. Conclusions Measures of hearing that involve temporal processing are significantly influenced by cognitive abilities, specifically short-term and working memory capacity, executive function, and attention. Research using measures of temporal processing to diagnose peripheral disorders, such as noise-induced synaptopathy, needs to consider cognitive influences even in a young, healthy population.
Affiliation(s)
- Aryn M. Kamerer
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Angela AuBuchon
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Sara E. Fultz
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Judy G. Kopun
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Stephen T. Neely
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
40
Rosenthal MA. A systematic review of the voice-tagging hypothesis of speech-in-noise perception. Neuropsychologia 2019; 136:107256. PMID: 31715197; DOI: 10.1016/j.neuropsychologia.2019.107256.
Abstract
The voice-tagging hypothesis claims that individuals who better represent pitch information in a speaker's voice, as measured with the frequency following response (FFR), will be better at speech-in-noise perception. The hypothesis has been proposed to explain how music training might improve speech-in-noise perception. This paper reviews studies that are relevant to the voice-tagging hypothesis, including studies on musicians and nonmusicians. Most studies on musicians show greater f0 amplitude compared to controls. Most studies on nonmusicians do not show group differences in f0 amplitude. Across all studies reviewed, f0 amplitude does not consistently predict accuracy in speech-in-noise perception. The evidence suggests that music training does not improve speech-in-noise perception via enhanced subcortical representation of the f0.
Affiliation(s)
- Matthew A Rosenthal
- Department of Psychology, University of Kansas, 1450 Jayhawk Blvd, Lawrence, KS, 66045, United States
41
Coffey EBJ, Nicol T, White-Schwoch T, Chandrasekaran B, Krizman J, Skoe E, Zatorre RJ, Kraus N. Evolving perspectives on the sources of the frequency-following response. Nat Commun 2019; 10:5036. PMID: 31695046; PMCID: PMC6834633; DOI: 10.1038/s41467-019-13003-w.
Abstract
The auditory frequency-following response (FFR) is a non-invasive index of the fidelity of sound encoding in the brain, and is used to study the integrity, plasticity, and behavioral relevance of the neural encoding of sound. In this Perspective, we review recent evidence suggesting that, in humans, the FFR arises from multiple cortical and subcortical sources, not just subcortically as previously believed, and we illustrate how the FFR to complex sounds can enhance the wider field of auditory neuroscience. Far from being of use only to study basic auditory processes, the FFR is an uncommonly multifaceted response yielding a wealth of information, with much yet to be tapped.
Affiliation(s)
- Emily B J Coffey
- Department of Psychology, Concordia University, 1455 Boulevard de Maisonneuve Ouest, Montréal, QC, H3G 1M8, Canada
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Bharath Chandrasekaran
- Communication Sciences and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Forbes Tower, 3600 Atwood St, Pittsburgh, PA, 15260, USA
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Erika Skoe
- Department of Speech, Language, and Hearing Sciences, The Connecticut Institute for the Brain and Cognitive Sciences, University of Connecticut, 2 Alethia Drive, Unit 1085, Storrs, CT, 06269, USA
- Robert J Zatorre
- International Laboratory for Brain, Music, and Sound Research (BRAMS), Montréal, QC, Canada
- Centre for Research on Brain, Language and Music (CRBLM), McGill University, 3640 de la Montagne, Montréal, QC, H3G 2A8, Canada
- Montreal Neurological Institute, McGill University, 3801 rue Université, Montréal, QC, H3A 2B4, Canada
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, 2240 Campus Dr., Evanston, IL, 60208, USA
- Department of Neurobiology, Northwestern University, 2205 Tech Dr., Evanston, IL, 60208, USA
- Department of Otolaryngology, Northwestern University, 420 E Superior St., Chicago, IL, 60611, USA
42
Zan P, Presacco A, Anderson S, Simon JZ. Mutual information analysis of neural representations of speech in noise in the aging midbrain. J Neurophysiol 2019; 122:2372-2387. PMID: 31596649; DOI: 10.1152/jn.00270.2019.
Abstract
Younger adults with normal hearing can typically understand speech in the presence of a competing speaker without much effort, but this ability to understand speech in challenging conditions deteriorates with age. Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Earlier auditory studies using the frequency-following response (FFR), primarily believed to be generated by the midbrain, demonstrated age-related neural deficits when analyzed with traditional measures. Here we use a mutual information paradigm to analyze the FFR to speech (masked by a competing speech signal) by estimating the amount of stimulus information contained in the FFR. Our results show, first, a broadband informational loss associated with aging for both FFR amplitude and phase. Second, this age-related loss of information is more severe in higher-frequency FFR bands (several hundred hertz). Third, the mutual information between the FFR and the stimulus decreases as noise level increases for both age groups. Fourth, older adults benefit neurally, i.e., show a reduction in loss of information, when the speech masker is changed from meaningful (talker speaking a language that they can comprehend, such as English) to meaningless (talker speaking a language that they cannot comprehend, such as Dutch). This benefit is not seen in younger listeners, which suggests that age-related informational loss may be more severe when the speech masker is meaningful than when it is meaningless. In summary, as a method, mutual information analysis can unveil new results that traditional measures may not have enough statistical power to assess. NEW & NOTEWORTHY Older adults, even with clinically normal hearing, often have problems understanding speech in noise. Auditory studies using the frequency-following response (FFR) have demonstrated age-related neural deficits with traditional methods. Here we use a mutual information paradigm to analyze the FFR to speech masked by competing speech. Results confirm those from traditional analysis but additionally show that older adults benefit neurally when the masker changes from a language that they comprehend to a language they cannot.
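The mutual-information paradigm can be illustrated with a plug-in (histogram) estimator between stimulus and response samples. This is a hypothetical sketch on simulated signals, not the authors' estimator; note that plug-in estimates are biased upward for small samples, so published analyses typically apply bias correction.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in mutual information (in bits) from a 2-D histogram of two signals."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                         # joint distribution
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# A faithful "neural" copy of the stimulus carries more stimulus information
# than a heavily degraded one.
rng = np.random.default_rng(2)
stim = rng.standard_normal(20000)
resp_clean = stim + 0.2 * rng.standard_normal(20000)
resp_noisy = stim + 2.0 * rng.standard_normal(20000)
print(mutual_information(stim, resp_clean), mutual_information(stim, resp_noisy))
```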
Affiliation(s)
- Peng Zan
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland
- Alessandro Presacco
- Institute for Systems Research, University of Maryland, College Park, Maryland
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland; Institute for Systems Research, University of Maryland, College Park, Maryland; Department of Biology, University of Maryland, College Park, Maryland
43
White-Schwoch T, Anderson S, Krizman J, Nicol T, Kraus N. Case studies in neuroscience: subcortical origins of the frequency-following response. J Neurophysiol 2019; 122:844-848. DOI: 10.1152/jn.00112.2019.
Abstract
The auditory frequency-following response (FFR) reflects synchronized and phase-locked activity along the auditory pathway in response to sound. Although FFRs were historically thought to reflect subcortical activity, recent evidence suggests an auditory cortex contribution as well. Here we present electrophysiological evidence for the FFR’s origins from two cases: a patient with bilateral auditory cortex lesions and a patient with auditory neuropathy, a condition of subcortical origin. The patient with auditory cortex lesions had robust and replicable FFRs, but no cortical responses. In contrast, the patient with auditory neuropathy had no FFR despite robust and replicable cortical responses. This double dissociation shows that subcortical synchrony is necessary and sufficient to generate an FFR. NEW & NOTEWORTHY The frequency-following response (FFR) reflects synchronized and phase-locked neural activity in response to sound. The authors present a dual case study, comparing FFRs and cortical potentials between a patient with auditory neuropathy (a condition of subcortical origin) and a patient with bilateral auditory cortex lesions. They show that subcortical synchrony is necessary and sufficient to generate an FFR.
Affiliation(s)
- Travis White-Schwoch
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Samira Anderson
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland
- Jennifer Krizman
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Trent Nicol
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Nina Kraus
- Auditory Neuroscience Laboratory, Department of Communication Sciences, Northwestern University, Evanston, Illinois
- Department of Neurobiology, Northwestern University, Evanston, Illinois
44
Llanos F, Xie Z, Chandrasekaran B. Biometric identification of listener identity from frequency following responses to speech. J Neural Eng 2019; 16:056004. PMID: 31039552; DOI: 10.1088/1741-2552/ab1e01.
Abstract
OBJECTIVE We investigate the biometric specificity of the frequency following response (FFR), an EEG marker of early auditory processing that reflects phase-locked activity from neural ensembles in the auditory cortex and subcortex (Chandrasekaran and Kraus 2010, Bidelman 2015a, 2018, Coffey et al 2017b). Our objective is two-fold: to demonstrate that the FFR contains information beyond stimulus properties and broad group-level markers, and to assess the practical viability of the FFR as a biometric across different sounds, auditory experiences, and recording days. APPROACH We trained a hidden Markov model (HMM) to decode listener identity from FFR spectro-temporal patterns across multiple frequency bands. Our dataset included FFRs from twenty native speakers of English or Mandarin Chinese (10 per group) listening to Mandarin Chinese tones across three EEG sessions separated by days. We decoded subject identity within the same auditory context (same tone and session) and across different stimuli and recording sessions. MAIN RESULTS The HMM decoded listeners at averaging sizes as small as a single FFR. However, model performance improved with larger averaging sizes (e.g., 25 FFRs), similarity in auditory context (same tone and day), and lack of familiarity with the sounds (i.e., native English relative to native Chinese listeners). Our results also revealed important biometric contributions from frequency bands in the cortical and subcortical EEG. SIGNIFICANCE Our study provides the first deep and systematic biometric characterization of the FFR and provides the basis for biometric identification systems incorporating this neural signal.
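The paper's decoder is an HMM over FFR spectro-temporal patterns. As a deliberately simplified stand-in that conveys the same biometric logic (enroll a per-listener template, then classify held-out trial averages against every template), here is a hypothetical correlation-based sketch on simulated data; it is not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_subjects, n_trials, n_samples = 5, 40, 200

# Simulate each listener's FFR as a subject-specific waveform plus trial noise.
templates = rng.standard_normal((n_subjects, n_samples))
trials = templates[:, None, :] + 1.5 * rng.standard_normal((n_subjects, n_trials, n_samples))

# "Enroll" each subject from the first half of trials.
enrolled = trials[:, :20].mean(axis=1)

def decode(avg_ffr):
    """Return the index of the enrolled template best correlated with avg_ffr."""
    corrs = [np.corrcoef(avg_ffr, t)[0, 1] for t in enrolled]
    return int(np.argmax(corrs))

# Decode each subject's held-out second half.
correct = sum(decode(trials[s, 20:].mean(axis=0)) == s for s in range(n_subjects))
print(correct, "of", n_subjects, "decoded correctly")
```

Averaging trials before decoding mirrors the paper's finding that performance improves with averaging size: the subject-specific waveform is preserved while trial noise shrinks.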
Affiliation(s)
- Fernando Llanos
- Department of Communication Sciences and Disorders, University of Pittsburgh, Pittsburgh, PA 15213, United States of America
45
Analysis of the components of Frequency-Following Response in phonological disorders. Int J Pediatr Otorhinolaryngol 2019; 122:47-51. PMID: 30959337; DOI: 10.1016/j.ijporl.2019.03.035.
Abstract
INTRODUCTION When identifying the auditory performance of children with phonological disorders, researchers assume that this population has normal peripheral hearing. However, responses at more central levels might be atypical. OBJECTIVE To investigate the effect of phonological disorders on Frequency-Following Responses (FFRs) in the time domain. METHODS Participants were 60 subjects, aged 5 to 8:11 years, divided into two groups: a control group, composed of 30 subjects with normal language skills, and a study group, composed of 30 subjects diagnosed with Phonological Disorder (PD). All subjects were tested for Frequency-Following Responses. RESULTS In the group of children with PD there was an increase in the latency of all FFR components, with statistically significant differences for components V (p = 0.015), A (p < 0.001), C (p = 0.022), F (p < 0.001), and O (p = 0.001). There was also a reduction in the Slope measure in the group with PD (p = 0.004). CONCLUSION FFR responses are altered in children with PD. This suggests that children with PD present a disorganization in the neural coding of complex sounds, which could especially compromise the development of linguistic/phonological abilities and be reflected in daily communication.
46
Xu N, Luo L, Wang Q, Li L. Binaural unmasking of the accuracy of envelope-signal representation in rat auditory cortex but not auditory midbrain. Hear Res 2019; 377:224-233. PMID: 30991272; DOI: 10.1016/j.heares.2019.04.003.
Abstract
Accurate neural representations of acoustic signals under noisy conditions are critical for animals' survival. Detecting a signal against background noise can be improved by binaural hearing, particularly when an interaural-time-difference (ITD) disparity is introduced between the signal and the noise, a phenomenon known as binaural unmasking. Previous studies have mainly focused on the binaural unmasking effect on response magnitudes, and it is not clear whether binaural unmasking affects the accuracy of central representations of target acoustic signals and the relative contributions of different central auditory structures to this accuracy. Frequency following responses (FFRs), which are sustained phase-locked neural activities, can be used for measuring the accuracy of the representation of signals. Using intracranial recordings of local field potentials, this study aimed to assess whether the binaural unmasking effects include an improvement of the accuracy of neural representations of sound-envelope signals in the rat inferior colliculus (IC) and/or auditory cortex (AC). The results showed that (1) when a narrow-band noise was presented binaurally, the stimulus-response (S-R) coherence of the FFRs to the envelope (FFRenvelope) of the narrow-band noise recorded in the IC was higher than that recorded in the AC. (2) Presenting a broad-band masking noise caused a larger reduction of the S-R coherence for FFRenvelope in the IC than that in the AC. (3) Introducing an ITD disparity between the narrow-band signal noise and the broad-band masking noise did not affect the IC S-R coherence, but enhanced both the AC S-R coherence and the coherence between the IC FFRenvelope and AC FFRenvelope. Thus, although the accuracy of representing envelope signals in the AC is lower than that in the IC, it can be binaurally unmasked, indicating a binaural-unmasking mechanism that is formed during the signal transmission from the IC to the AC.
Affiliation(s)
- Na Xu
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Lu Luo
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China
- Qian Wang
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Beijing Key Laboratory of Epilepsy, Epilepsy Center, Department of Functional Neurosurgery, Sanbo Brain Hospital, Capital Medical University, Beijing, 100093, China
- Liang Li
- School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, Beijing, 100080, China; Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), Peking University, Beijing, 100871, China; Beijing Institute for Brain Disorders, Beijing, 100096, China
47
Carbajal GV, Malmierca MS. The Neuronal Basis of Predictive Coding Along the Auditory Pathway: From the Subcortical Roots to Cortical Deviance Detection. Trends Hear 2019; 22:2331216518784822. PMID: 30022729; PMCID: PMC6053868; DOI: 10.1177/2331216518784822.
Abstract
In this review, we attempt to integrate the empirical evidence regarding stimulus-specific adaptation (SSA) and mismatch negativity (MMN) under a predictive coding perspective (also known as the Bayesian or hierarchical-inference model). We propose a renewed methodology for SSA study, which enables a further decomposition of deviance detection into repetition suppression and prediction error, thanks to the use of two controls previously introduced in MMN research: the many-standards and the cascade sequences. Focusing on data obtained with cellular recordings, we explain how deviance detection and prediction error are generated throughout hierarchical levels of processing, following two vectors of increasing computational complexity and abstraction along the auditory neuraxis: from subcortical toward cortical stations and from lemniscal toward nonlemniscal divisions. Then, we delve into the particular characteristics and contributions of subcortical and cortical structures to this generative mechanism of hierarchical inference, analyzing what is known about the role of neuromodulation and local microcircuitry in the emergence of mismatch signals. Finally, we describe how SSA and MMN occur within a similar time frame and at similar cortical locations, and how both are affected by the manipulation of N-methyl-D-aspartate receptors. We conclude that there is enough empirical evidence to consider SSA and MMN, respectively, as the microscopic and macroscopic manifestations of the same physiological mechanism of deviance detection in the auditory cortex. Hence, the development of a common theoretical framework for SSA and MMN is all the more recommendable for future studies. In this regard, we suggest a shared nomenclature based on the predictive coding interpretation of deviance detection.
Affiliation(s)
- Guillermo V Carbajal
- Auditory Neuroscience Laboratory (Lab 1), Institute of Neuroscience of Castile and León, University of Salamanca, Salamanca, Spain; Salamanca Institute for Biomedical Research, Spain
- Manuel S Malmierca
- Auditory Neuroscience Laboratory (Lab 1), Institute of Neuroscience of Castile and León, University of Salamanca, Salamanca, Spain; Salamanca Institute for Biomedical Research, Spain; Department of Cell Biology and Pathology, Faculty of Medicine, University of Salamanca, Spain
48
Zhang X, Gong Q. Frequency-Following Responses to Complex Tones at Different Frequencies Reflect Different Source Configurations. Front Neurosci 2019; 13:130. PMID: 30872990; PMCID: PMC6402474; DOI: 10.3389/fnins.2019.00130.
Abstract
The neural generators of the frequency-following response (FFR), a neural response widely used to study the human auditory system, remain unclear. There is evidence that the balance between cortical and subcortical contributions to the FFR varies with stimulus frequency. In this study, we tried to clarify whether this variation extended to subcortical nuclei at higher stimulus frequencies where cortical sources were inactive. We evoked FFRs in 17 human listeners with normal hearing (9 female) with three missing-fundamental complex tones corresponding to musical tones C4 (262 Hz), E4 (330 Hz), and G4 (393 Hz), presented to the left, right, or both ears. Source imaging results confirmed the dominance of subcortical activity underlying both the fundamental frequency (F0) and second harmonic (H2) components of the FFR. Importantly, several FFR features (spatial complexity, scalp distributions of spectral strength and inter-trial phase coherence, and functional connectivity patterns) varied systematically with stimulus F0, suggesting an unfixed source configuration. We speculated that the variation of FFR source configuration with stimulus frequency resulted from changing relative contributions of subcortical nuclei. In support of this, topographic comparison between the FFR and the auditory brainstem response (ABR) evoked by clicks revealed that the topography of the F0 component resembled that of the click-ABR at an earlier latency when stimulus F0 was higher, and that the topography of the H2 component resembled that of the click-ABR at a nearly fixed latency regardless of stimulus F0, particularly for binaurally evoked FFRs. Possible generation sites of the FFR and implications for future studies are discussed.
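The inter-trial phase coherence (ITPC) feature mentioned in the abstract measures how consistently single-trial phase at a given frequency aligns across trials: it is the magnitude of the mean of unit-length phase vectors, ranging from 0 (random phase) to 1 (perfect phase locking). A minimal sketch, assuming epoched single-trial data; the sampling rate, trial counts, and noise level are illustrative, not taken from the study.

```python
import numpy as np

def itpc(trials, fs, freq):
    """Inter-trial phase coherence at one frequency.

    trials: (n_trials, n_samples) array of single-trial epochs.
    Returns |mean over trials of exp(i * phase)| at the FFT bin
    nearest to `freq`; lies in [0, 1].
    """
    n = trials.shape[1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    bin_idx = np.argmin(np.abs(freqs - freq))
    spectra = np.fft.rfft(trials, axis=1)[:, bin_idx]
    unit_vectors = spectra / np.abs(spectra)  # keep phase, drop amplitude
    return float(np.abs(unit_vectors.mean()))

fs = 4096
t = np.arange(1024) / fs
rng = np.random.default_rng(0)

# Phase-locked trials: same 262 Hz signal in every trial plus noise
locked = np.sin(2 * np.pi * 262 * t) + 0.1 * rng.standard_normal((50, t.size))

# Non-locked trials: the phase is randomized on every trial
phases = rng.uniform(0, 2 * np.pi, (50, 1))
unlocked = np.sin(2 * np.pi * 262 * t + phases) \
    + 0.1 * rng.standard_normal((50, t.size))
```

With these inputs, `itpc(locked, fs, 262)` comes out near 1 while `itpc(unlocked, fs, 262)` stays low, which is the contrast the scalp ITPC distributions exploit.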
Affiliation(s)
- Xiaochen Zhang
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China
- Qin Gong
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing, China; Research Center of Biomedical Engineering, Graduate School at Shenzhen, Tsinghua University, Shenzhen, China
49
Interactive effects of linguistic abstraction and stimulus statistics in the online modulation of neural speech encoding. Atten Percept Psychophys 2018; 81:1020-1033. [PMID: 30565097 DOI: 10.3758/s13414-018-1621-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Speech processing is highly modulated by context. Prior studies examining frequency-following responses (FFRs), an electrophysiological 'neurophonic' potential that faithfully reflects phase-locked activity from neural ensembles within the auditory network, have demonstrated that stimulus context modulates the integrity of speech encoding. The extent to which context-dependent encoding reflects general auditory properties or interactivities between statistical and higher-level linguistic processes remains unexplored. Our study examined whether speech encoding, as reflected by FFRs, is modulated by abstract phonological relationships between a stimulus and surrounding contexts. FFRs were elicited by a Mandarin rising-tone syllable (/ji-TR/, 'second') randomly presented with other syllables in three contexts from 17 native listeners. In a contrastive context, /ji-TR/ occurred with meaning-contrastive high-level-tone syllables (/ji-H/, 'one'). In an allotone context, /ji-TR/ occurred with dipping-tone syllables (/ji-D/), a non-meaning-contrastive variant of /ji-TR/. In a repetitive context, the same /ji-TR/ occurred with other speech tokens of /ji-TR/. Consistent with prior work, neural tracking of the /ji-TR/ pitch contour was more faithful in the repetitive condition, wherein /ji-TR/ occurred more predictably (p = 1), than in the contrastive condition (p = 0.34). Crucially, in the allotone context, neural tracking of /ji-TR/ was more accurate relative to the contrastive context, despite both having an identical transitional probability (p = 0.34). Mechanistically, the non-meaning-contrastive relationship may have augmented the probability of /ji-TR/ occurrence in the allotone context. Results indicate online interactions between bottom-up and top-down mechanisms, which facilitate speech perception. Such interactivities may predictively fine-tune incoming speech encoding using linguistic and statistical information from prior context.
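The transitional probabilities contrasted in the abstract (p = 1 in the repetitive context vs. p = 0.34 in the contrastive and allotone contexts) can be estimated from a stimulus sequence as the empirical conditional probability of one token following another. This is a generic sketch of that computation, not the study's analysis code; the token labels and example sequences are illustrative.

```python
from collections import Counter

def transitional_probability(seq, prev, nxt):
    """Empirical P(next token == nxt | current token == prev),
    estimated from adjacent pairs in the stimulus sequence."""
    pairs = Counter(zip(seq, seq[1:]))
    total_from_prev = sum(c for (a, _), c in pairs.items() if a == prev)
    if total_from_prev == 0:
        return 0.0
    return pairs[(prev, nxt)] / total_from_prev

# Repetitive context: the target always follows itself, so p = 1.0
repetitive = ["TR"] * 10

# Mixed context: the target follows other tokens only some of the time
mixed = ["TR", "H", "TR", "H", "H", "TR", "H", "TR", "H"]
p_rep = transitional_probability(repetitive, "TR", "TR")
p_mix = transitional_probability(mixed, "H", "TR")
```

The key point in the study is that the allotone and contrastive contexts were matched on this statistic, so the encoding difference between them must come from the phonological relationship rather than from stimulus statistics.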
50
Revisiting the Contribution of Auditory Cortex to Frequency-Following Responses. J Neurosci 2018; 37:5218-5220. [PMID: 28539348 DOI: 10.1523/jneurosci.0794-17.2017] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2017] [Revised: 04/17/2017] [Accepted: 04/21/2017] [Indexed: 11/21/2022] Open