1. Edalati M, Wallois F, Ghostine G, Kongolo G, Trainor LJ, Moghimi S. Neural oscillations suggest periodicity encoding during auditory beat processing in the premature brain. Dev Sci 2024; 27:e13550. PMID: 39010656; DOI: 10.1111/desc.13550.
Abstract
When exposed to rhythmic patterns with temporal regularity, adults exhibit an inherent ability to extract and anticipate an underlying sequence of regularly spaced beats. This sequence is internally constructed, as beats are experienced even when no events occur at beat positions (e.g., during rests). Perception of rhythm and synchronization to periodicity are indispensable for the development of cognitive functions, social interaction, and adaptive behavior. We evaluated neural oscillatory activity in premature newborns (n = 19; mean age, 32 ± 2.59 weeks gestational age) during exposure to an auditory rhythmic sequence, aiming to identify early traces of periodicity encoding and rhythm processing through entrainment of neural oscillations at this stage of neurodevelopment. The rhythmic sequence elicited a systematic modulation of alpha power, synchronized to expected beat locations coinciding with both tones and rests, and independent of whether the beat was preceded by a tone or a rest. In addition, the periodic alpha-band fluctuations reached maximal power slightly before the corresponding beat onset times. Together, our results show neural encoding of periodicity in the premature brain involving neural oscillations in the alpha range, much faster than the beat tempo, through alignment of alpha power to the beat tempo, consistent with observations in adults on predictive processing of temporal regularities in auditory rhythms.
RESEARCH HIGHLIGHTS:
- In response to the presented rhythmic pattern, systematic modulations of alpha power showed that the premature brain extracted the temporal regularity of the underlying beat.
- In contrast to evoked potentials, which are greatly reduced when no sound event occurs, the modulation of alpha power occurred predictively for beats coinciding with both tones and rests.
- The findings provide the first evidence for neural coding of periodicity in auditory rhythm perception before the age of term.
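The beat-aligned alpha-power analysis described above can be sketched on simulated data. Everything here is illustrative (the sampling rate, a hypothetical ~100 bpm tempo, a 50 ms anticipatory lead), not the study's parameters: band-pass the signal in the alpha range, take the squared Hilbert envelope as power, and epoch it around beat onsets.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(0)
fs = 500           # Hz; illustrative, not the study's recording parameters
t = np.arange(0, 10, 1 / fs)
beat_period = 0.6  # hypothetical beat tempo (~100 bpm)
lead = 0.05        # alpha power assumed to peak 50 ms before each beat

# Simulated EEG: a 10 Hz alpha oscillation whose amplitude is modulated at the
# beat rate, peaking slightly before each beat onset (the anticipatory effect).
envelope = 1 + 0.5 * np.cos(2 * np.pi * (t + lead) / beat_period)
eeg = envelope * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

# Alpha-band power via band-pass filtering and the Hilbert envelope.
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha_power = np.abs(hilbert(filtfilt(b, a, eeg))) ** 2

# Average power around each beat onset; the mean peak should precede t = 0.
onsets = np.arange(1.2, 9.0, beat_period)  # beat times (s), away from filter edges
win = int(0.3 * fs)
epochs = np.stack([alpha_power[int(o * fs) - win:int(o * fs) + win] for o in onsets])
lag_s = (np.argmax(epochs.mean(axis=0)) - win) / fs
print(f"mean alpha-power peak at {1000 * lag_s:.0f} ms relative to beat onset")
```

On this toy signal the averaged alpha-power peak lands shortly before beat onset, mirroring the anticipatory modulation reported in the abstract.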
Affiliation(s)
- Mohammadreza Edalati
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Fabrice Wallois
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Inserm UMR1105, EFSN Pédiatriques, Amiens University Hospital, Amiens Cedex, France
- Ghida Ghostine
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Guy Kongolo
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Laurel J Trainor
- Department of Psychology, Neuroscience and Behaviour, McMaster University, Hamilton, Ontario, Canada
- McMaster Institute for Music and the Mind, McMaster University, Hamilton, Ontario, Canada
- Rotman Research Institute, Baycrest Hospital, Toronto, Ontario, Canada
- Sahar Moghimi
- Inserm UMR1105, Groupe de Recherches sur l'Analyse Multimodale de la Fonction Cérébrale, Université de Picardie Jules Verne, Amiens Cedex, France
- Inserm UMR1105, EFSN Pédiatriques, Amiens University Hospital, Amiens Cedex, France
2. Weissbart H, Martin AE. The structure and statistics of language jointly shape cross-frequency neural dynamics during spoken language comprehension. Nat Commun 2024; 15:8850. PMID: 39397036; PMCID: PMC11471778; DOI: 10.1038/s41467-024-53128-1.
Abstract
Humans excel at extracting structurally determined meaning from speech despite inherent physical variability. This study explores the brain's ability to predict and robustly understand spoken language. It investigates the relationship between structural and statistical language knowledge in brain dynamics, focusing on phase and amplitude modulation. Using syntactic features from constituent hierarchies and surface statistics from a transformer model as predictors in forward encoding models, we reconstructed cross-frequency neural dynamics from MEG data recorded during audiobook listening. Our findings challenge a strict separation of linguistic structure and statistics in the brain, as both aid neural signal reconstruction. Syntactic features have a more temporally spread impact, and both word entropy and the number of closing syntactic constituents are linked to the phase-amplitude coupling of neural dynamics, implying a role in temporal prediction and cortical oscillation alignment during speech processing. Our results indicate that structural and statistical information jointly shape neural dynamics during spoken language comprehension and suggest an integration process via a cross-frequency coupling mechanism.
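A forward (encoding) model of the kind described above can be sketched with ridge regression over time-lagged stimulus features. The two regressors (standing in for "word entropy" and "closing constituents"), the lag range, and the regularization value are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n = 100, 2000  # illustrative sampling rate (Hz) and signal length

# Two hypothetical stimulus features, stand-ins for the paper's predictors.
features = rng.standard_normal((n, 2))

# Simulated neural signal: a lagged, weighted mix of the features plus noise.
true_lag = 10  # samples (100 ms at fs = 100 Hz)
signal = 0.8 * np.roll(features[:, 0], true_lag) + 0.4 * np.roll(features[:, 1], true_lag)
signal += 0.5 * rng.standard_normal(n)

# Build a time-lagged design matrix (0-200 ms of lags) and fit a ridge
# forward model mapping stimulus features to the neural response.
lags = np.arange(0, 21)
X = np.hstack([np.roll(features, L, axis=0) for L in lags])
lam = 1.0  # ridge regularization (assumed value)
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ signal)

# Reconstruction accuracy: correlation between predicted and actual signal.
pred = X @ w
r = np.corrcoef(pred, signal)[0, 1]
print(f"encoding-model reconstruction r = {r:.2f}")
```

In practice each feature's fitted lag profile (the rows of `w` reshaped by lag) plays the role of the temporal response functions whose spread the abstract discusses.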
Affiliation(s)
- Hugo Weissbart
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands.
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.
- Andrea E Martin
- Donders Centre for Cognitive Neuroimaging, Radboud University, Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
3. Reybrouck M, Podlipniak P, Welch D. Music Listening as Exploratory Behavior: From Dispositional Reactions to Epistemic Interactions with the Sonic World. Behav Sci (Basel) 2024; 14:825. PMID: 39336040; PMCID: PMC11429034; DOI: 10.3390/bs14090825.
Abstract
Listening to music can span a continuum from passive consumption to active exploration, relying on processes of coping with the sounds as well as higher-level processes of sense-making. Revolving around the major questions of "what" and "how" to explore, this paper takes a naturalistic stance toward music listening, providing tools to objectively describe the underlying mechanisms of musical sense-making by weakening the distinction between music and non-music. Starting from a non-exclusionary conception of "coping" with the sounds, it stresses the exploratory approach of treating music as a sound environment to be discovered by an attentive listener. Exploratory listening, in this view, is an open-minded and active process, not dependent on simply recalling pre-existing knowledge or information that reduces cognitive processing efforts but having a high cognitive load due to the need for highly focused attention and perceptual readiness. Music, explored in this way, is valued for its complexity, surprisingness, novelty, incongruity, puzzlingness, and patterns, relying on processes of selection, differentiation, discrimination, and identification.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Group, Faculty of Arts, KU Leuven-University of Leuven, 3000 Leuven, Belgium
- Institute for Psychoacoustics and Electronic Music (IPEM), Department of Art History, Musicology and Theatre Studies, 9000 Ghent, Belgium
- Piotr Podlipniak
- Institute of Musicology, Adam Mickiewicz University in Poznań, 61-712 Poznań, Poland
- David Welch
- Institute Audiology Section, School of Population Health, University of Auckland, Auckland 2011, New Zealand
4. Chalas N, Meyer L, Lo CW, Park H, Kluger DS, Abbasi O, Kayser C, Nitsch R, Gross J. Dissociating prosodic from syntactic delta activity during natural speech comprehension. Curr Biol 2024; 34:3537-3549.e5. PMID: 39047734; DOI: 10.1016/j.cub.2024.06.072.
Abstract
Decoding human speech requires the brain to segment the incoming acoustic signal into meaningful linguistic units, ranging from syllables and words to phrases. Integrating these linguistic constituents into a coherent percept sets the root of compositional meaning and hence understanding. Prosodic cues, such as pauses, are important for segmentation in natural speech, but their interplay with higher-level linguistic processing is still unknown. Here, we dissociate the neural tracking of prosodic pauses from the segmentation of multi-word chunks using magnetoencephalography (MEG). We find that manipulating the regularity of pauses disrupts slow speech-brain tracking bilaterally in auditory areas (below 2 Hz) and in turn increases left-lateralized coherence of higher-frequency auditory activity at speech onsets (around 25-45 Hz). Critically, we also find that multi-word chunks, defined as short, coherent bundles of inter-word dependencies, are processed through the rhythmic fluctuations of low-frequency activity (below 2 Hz) bilaterally and independently of prosodic cues. Importantly, low-frequency alignment at chunk onsets increases the accuracy of an encoding model in bilateral auditory and frontal areas while controlling for the effect of acoustics. Our findings provide novel insights into the neural basis of speech perception, demonstrating that both acoustic features (prosodic cues) and abstract linguistic processing at the multi-word timescale are underpinned independently by low-frequency electrophysiological brain activity in the delta frequency range.
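The low-frequency speech-brain tracking measure discussed above is commonly quantified with spectral coherence between the speech envelope and the neural signal. A toy sketch with an invented slow "speech envelope" and a noisy signal that tracks it; all parameters here are assumptions, not the study's MEG analysis settings.

```python
import numpy as np
from scipy.signal import butter, filtfilt, coherence

rng = np.random.default_rng(1)
fs, n = 100, 6000  # 60 s at 100 Hz; toy values

# Toy "speech envelope" dominated by slow (<2 Hz) fluctuations, and a "neural"
# signal that tracks it plus noise -- a stand-in for delta-band speech tracking.
b, a = butter(2, 2, btype="low", fs=fs)
env = filtfilt(b, a, rng.standard_normal(n))
brain = env + 0.5 * rng.standard_normal(n)

# Magnitude-squared coherence; tracking shows up below 2 Hz, not at 20-40 Hz.
f, coh = coherence(env, brain, fs=fs, nperseg=1024)
delta_coh = coh[(f >= 0.5) & (f <= 2)].mean()
high_coh = coh[(f >= 20) & (f <= 40)].mean()
print(f"coherence below 2 Hz: {delta_coh:.2f}; at 20-40 Hz: {high_coh:.2f}")
```

Disrupting the temporal regularity of the envelope (as the pause manipulation does) would lower the delta-band value while leaving the high-frequency floor unchanged.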
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany; Institute for Translational Neuroscience, University of Münster, Münster, Germany
- Lars Meyer
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Chia-Wen Lo
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Hyojin Park
- Centre for Human Brain Health (CHBH), School of Psychology, University of Birmingham, Birmingham, UK
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, 33615 Bielefeld, Germany
- Robert Nitsch
- Institute for Translational Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany; Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
5. te Rietmolen N, Mercier MR, Trébuchon A, Morillon B, Schön D. Speech and music recruit frequency-specific distributed and overlapping cortical networks. eLife 2024; 13:RP94509. PMID: 39038076; PMCID: PMC11262799; DOI: 10.7554/elife.94509.
Abstract
To what extent do speech and music processing rely on domain-specific and domain-general neural networks? Using whole-brain intracranial EEG recordings in 18 epilepsy patients listening to natural, continuous speech or music, we investigated the presence of frequency-specific and network-level brain activity. We combined this with a statistical approach in which a clear operational distinction is made between shared, preferred, and domain-selective neural responses. We show that the majority of focal and network-level neural activity is shared between speech and music processing. Our data also reveal an absence of anatomical regional selectivity. Instead, domain-selective neural responses are restricted to distributed and frequency-specific coherent oscillations, typical of spectral fingerprints. Our work highlights the importance of considering natural stimuli and brain dynamics in their full complexity to map cognitive and brain functions.
Affiliation(s)
- Noémie te Rietmolen
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Manuel R Mercier
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Agnès Trébuchon
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- APHM, Hôpital de la Timone, Service de Neurophysiologie Clinique, Marseille, France
- Benjamin Morillon
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Daniele Schön
- Institute for Language, Communication, and the Brain, Aix-Marseille University, Marseille, France
- Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
6. Kim H, Kim JS, Chung CK. Visual Mental Imagery and Neural Dynamics of Sensory Substitution in the Blindfolded Subjects. Neuroimage 2024; 295:120621. PMID: 38797383; DOI: 10.1016/j.neuroimage.2024.120621.
Abstract
Although one can recognize the environment through a soundscape that substitutes auditory signals for vision, whether subjects perceive the soundscape as a visual or visual-like sensation has remained in question. In this study, we investigated the hierarchical processes underlying the recruitment of visual areas by soundscape stimuli in blindfolded subjects. Twenty-two healthy subjects were repeatedly trained to recognize soundscape stimuli converted from the visual shape information of letters. An effective connectivity method, dynamic causal modeling (DCM), was employed to reveal how the brain is hierarchically organized to recognize soundscape stimuli. The visual mental imagery model explained the cortical source signals of five regions of interest better than the auditory bottom-up, cross-modal perception, and mixed models. Spectral couplings between brain areas in the visual mental imagery model were then analyzed. While within-frequency coupling was apparent in bottom-up processing, where sensory information is transmitted, cross-frequency coupling was prominent in top-down processing, corresponding to the expectation and interpretation of information. Sensory substitution in the brains of blindfolded subjects thus gave rise to visual mental imagery by combining bottom-up and top-down processing.
Affiliation(s)
- HongJune Kim
- Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea; Clinical Research Institute, Konkuk University Medical Center, Seoul, Republic of Korea
- June Sic Kim
- Clinical Research Institute, Konkuk University Medical Center, Seoul, Republic of Korea; Research Institute of Biomedical Science & Technology, Konkuk University, Seoul, Republic of Korea
- Chun Kee Chung
- Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea; Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea; Dept. of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea; Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Republic of Korea
7. Paraskevopoulos E, Anagnostopoulou A, Chalas N, Karagianni M, Bamidis P. Unravelling the multisensory learning advantage: Different patterns of within and across frequency-specific interactions drive uni- and multisensory neuroplasticity. Neuroimage 2024; 291:120582. PMID: 38521212; DOI: 10.1016/j.neuroimage.2024.120582.
Abstract
In the field of learning theory and practice, the superior efficacy of multisensory learning over uni-sensory learning is well accepted. However, the underlying neural mechanisms at the macro-level of the human brain remain largely unexplored. This study addresses this gap by providing novel empirical evidence and a theoretical framework for understanding the superiority of multisensory learning. Through a cognitive, behavioral, and electroencephalographic assessment of carefully controlled uni-sensory and multisensory training interventions, our study uncovers a fundamental distinction in their neuroplastic patterns. A multilayered network analysis of pre- and post-training EEG data allowed us to model connectivity within and across different frequency bands at the cortical level. Pre-training EEG analysis unveils a complex network of distributed sources communicating through cross-frequency coupling, while comparison of pre- and post-training EEG data demonstrates significant differences in the reorganizational patterns of uni-sensory and multisensory learning. Uni-sensory training primarily modifies cross-frequency coupling between lower and higher frequencies, whereas multisensory training induces changes within the beta band in a more focused network, implying the development of a unified representation of audiovisual stimuli. In combination with behavioural and cognitive findings, this suggests that multisensory learning benefits from an automatic top-down transfer of training, while uni-sensory training relies mainly on limited bottom-up generalization. Our findings offer a compelling theoretical framework for understanding the advantage of multisensory learning.
Affiliation(s)
- Alexandra Anagnostopoulou
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Nikolas Chalas
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Germany
- Maria Karagianni
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Panagiotis Bamidis
- School of Medicine, Faculty of Health Sciences, Aristotle University of Thessaloniki, Thessaloniki, Greece
8. Brilliant, Yaar-Soffer Y, Herrmann CS, Henkin Y, Kral A. Theta and alpha oscillatory signatures of auditory sensory and cognitive loads during complex listening. Neuroimage 2024; 289:120546. PMID: 38387743; DOI: 10.1016/j.neuroimage.2024.120546.
Abstract
The neuronal signatures of sensory and cognitive load provide access to brain activities related to complex listening situations. Sensory and cognitive loads are typically reflected in measures like response time (RT) and event-related potential (ERP) components. It is, however, difficult to distinguish the underlying brain processes solely from these measures. In this study, along with RT and ERP analysis, we performed time-frequency analysis and source localization of oscillatory activity in participants performing two different auditory tasks with varying degrees of complexity, and related the results to sensory and cognitive load. We studied neuronal oscillatory activity both in the period before the behavioral response (pre-response) and after it (post-response). Robust oscillatory activities were found in both periods and were differentially affected by sensory and cognitive load. Oscillatory activity under sensory load was characterized by a decrease in pre-response (early) theta activity and increased alpha activity. Oscillatory activity under cognitive load was characterized by increased theta activity, mainly in the post-response (late) period. Furthermore, source localization revealed specific brain regions responsible for processing these loads, including the temporal and frontal lobes, cingulate cortex, and precuneus. The results provide evidence that in complex listening situations the brain processes sensory and cognitive loads differently. These neural processes have specific oscillatory signatures and are long lasting, extending beyond the behavioral response.
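The kind of time-frequency analysis described above can be sketched with single-frequency Morlet wavelet power. The toy signal (a theta burst followed by an alpha burst, loosely echoing the early-theta/late-alpha pattern), the frequencies, and the cycle count are all illustrative assumptions, not the study's pipeline.

```python
import numpy as np

def morlet_power(x, fs, freq, n_cycles=7):
    """Power at one frequency via convolution with a complex Morlet wavelet."""
    sigma_t = n_cycles / (2 * np.pi * freq)          # temporal width of the wavelet
    t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * t) * np.exp(-t**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))  # unit-energy normalization
    return np.abs(np.convolve(x, wavelet, mode="same")) ** 2

fs = 250
t = np.arange(0, 2, 1 / fs)
# Theta (6 Hz) in the first second, alpha (10 Hz) in the second.
x = np.where(t < 1, np.sin(2 * np.pi * 6 * t), np.sin(2 * np.pi * 10 * t))

theta_pow = morlet_power(x, fs, 6.0)
alpha_pow = morlet_power(x, fs, 10.0)
print("theta dominates early:", theta_pow[:fs].mean() > alpha_pow[:fs].mean())
print("alpha dominates late:", alpha_pow[fs:].mean() > theta_pow[fs:].mean())
```

Repeating this over a grid of frequencies and averaging over epochs yields the familiar time-frequency power maps from which the pre- and post-response effects are read out.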
Affiliation(s)
- Brilliant
- Department of Experimental Otology, Hannover Medical School, 30625 Hannover, Germany
- Y Yaar-Soffer
- Department of Communication Disorder, Tel Aviv University, 5262657 Tel Aviv, Israel; Hearing, Speech and Language Center, Sheba Medical Center, 5265601 Tel Hashomer, Israel
- C S Herrmann
- Experimental Psychology Division, University of Oldenburg, 26111 Oldenburg, Germany
- Y Henkin
- Department of Communication Disorder, Tel Aviv University, 5262657 Tel Aviv, Israel; Hearing, Speech and Language Center, Sheba Medical Center, 5265601 Tel Hashomer, Israel
- A Kral
- Department of Experimental Otology, Hannover Medical School, 30625 Hannover, Germany
9. Nourski KV, Steinschneider M, Rhone AE, Dappen ER, Kawasaki H, Howard MA. Processing of auditory novelty in human cortex during a semantic categorization task. Hear Res 2024; 444:108972. PMID: 38359485; PMCID: PMC10984345; DOI: 10.1016/j.heares.2024.108972.
Abstract
Auditory semantic novelty - a new meaningful sound in the context of a predictable acoustical environment - can probe neural circuits involved in language processing. Aberrant novelty detection is a feature of many neuropsychiatric disorders. This large-scale human intracranial electrophysiology study examined the spatial distribution of gamma and alpha power and auditory evoked potentials (AEP) associated with responses to unexpected words during performance of semantic categorization tasks. Participants were neurosurgical patients undergoing monitoring for medically intractable epilepsy. Each task included repeatedly presented monosyllabic words from different talkers ("common") and ten words presented only once ("novel"). Targets were words belonging to a specific semantic category. Novelty effects were defined as differences between neural responses to novel and common words. Novelty increased task difficulty and was associated with augmented gamma power, suppressed alpha power, and AEP differences broadly distributed across the cortex. The gamma novelty effect had the highest prevalence in planum temporale, posterior superior temporal gyrus (STG), and pars triangularis of the inferior frontal gyrus; the alpha effect in anterolateral Heschl's gyrus (HG), anterior STG, and middle anterior cingulate cortex; the AEP effect in posteromedial HG, the lower bank of the superior temporal sulcus, and planum polare. The gamma novelty effect was more prevalent in dorsal than in ventral auditory-related areas. Novelty effects were more pronounced in the left hemisphere. Better novel target detection was associated with a reduced gamma novelty effect within auditory cortex and an enhanced gamma effect within prefrontal and sensorimotor cortex. Alpha and AEP novelty effects were generally more prevalent in better-performing participants. Multiple areas, including auditory cortex on the superior temporal plane, featured an AEP novelty effect within the time frame of the P3a and N400 scalp-recorded novelty-related potentials.
This work provides a detailed account of auditory novelty in a paradigm that directly examined brain regions associated with semantic processing. Future studies may aid in the development of objective measures to assess the integrity of semantic novelty processing in clinical populations.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Mitchell Steinschneider
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Departments of Neurology, Neuroscience, and Pediatrics, Albert Einstein College of Medicine, Bronx, NY 10461, United States
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Emily R Dappen
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, United States
10. Borderie A, Caclin A, Lachaux JP, Perrone-Bertollotti M, Hoyer RS, Kahane P, Catenoix H, Tillmann B, Albouy P. Cross-frequency coupling in cortico-hippocampal networks supports the maintenance of sequential auditory information in short-term memory. PLoS Biol 2024; 22:e3002512. PMID: 38442128; PMCID: PMC10914261; DOI: 10.1371/journal.pbio.3002512.
Abstract
It has been suggested that cross-frequency coupling in cortico-hippocampal networks enables the maintenance of multiple visuo-spatial items in working memory. However, whether this mechanism acts as a global neural code for memory retention across sensory modalities remains to be demonstrated. Intracranial EEG data were recorded while drug-resistant patients with epilepsy performed a delayed matched-to-sample task with tone sequences. We manipulated task difficulty by varying the memory load and the duration of the silent retention period between the to-be-compared sequences. We show that the strength of theta-gamma phase amplitude coupling in the superior temporal sulcus, the inferior frontal gyrus, the inferior temporal gyrus, and the hippocampus (i) supports the short-term retention of auditory sequences; (ii) decodes correct and incorrect memory trials as revealed by machine learning analysis; and (iii) is positively correlated with individual short-term memory performance. Specifically, we show that successful task performance is associated with consistent phase coupling in these regions across participants, with gamma bursts restricted to specific theta phase ranges corresponding to higher levels of neural excitability. These findings highlight the role of cortico-hippocampal activity in auditory short-term memory and expand our knowledge about the role of cross-frequency coupling as a global biological mechanism for information processing, integration, and memory in the human brain.
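Theta-gamma phase-amplitude coupling strength of the kind reported above is often quantified with a Tort-style modulation index: the distribution of gamma amplitude over theta phase bins, scored by its deviation from uniformity. A sketch on simulated data; the bands, coupling strength, and bin count are assumed for illustration, not taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs):
    b, a = butter(4, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, x)

rng = np.random.default_rng(1)
fs = 500
t = np.arange(0, 20, 1 / fs)

# Simulate theta (6 Hz) with gamma (60 Hz) bursts locked to the theta cycle --
# the phase-restricted gamma described in the abstract.
theta = np.sin(2 * np.pi * 6 * t)
gamma_amp = 0.5 * (1 + theta)  # gamma strongest near theta peaks
x = theta + gamma_amp * np.sin(2 * np.pi * 60 * t) + 0.2 * rng.standard_normal(t.size)

# Theta phase and gamma amplitude via the Hilbert transform.
phase = np.angle(hilbert(bandpass(x, 4, 8, fs)))
amp = np.abs(hilbert(bandpass(x, 40, 80, fs)))

# Mean gamma amplitude per theta phase bin, normalized to a distribution.
bins = np.linspace(-np.pi, np.pi, 19)
mean_amp = np.array([amp[(phase >= lo) & (phase < hi)].mean()
                     for lo, hi in zip(bins[:-1], bins[1:])])
p = mean_amp / mean_amp.sum()

# Modulation index: normalized KL divergence from the uniform distribution.
mi = (np.log(len(p)) + (p * np.log(p)).sum()) / np.log(len(p))
print(f"theta-gamma modulation index = {mi:.3f}")
```

An uncoupled surrogate (e.g., with the amplitude series circularly shifted) would drive `mi` toward zero, which is how significance is typically assessed.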
Affiliation(s)
- Arthur Borderie
- CERVO Brain Research Center, School of Psychology, Laval University, Québec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), CRBLM, Montreal, Canada
- Anne Caclin
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France
- Jean-Philippe Lachaux
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France
- Roxane S. Hoyer
- CERVO Brain Research Center, School of Psychology, Laval University, Québec, Canada
- Philippe Kahane
- Univ. Grenoble Alpes, Inserm, U1216, CHU Grenoble Alpes, Grenoble Institut Neurosciences, Grenoble, France
- Hélène Catenoix
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France
- Department of Functional Neurology and Epileptology, Lyon Civil Hospices, member of the ERN EpiCARE, and Lyon 1 University, Lyon, France
- Barbara Tillmann
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France
- Laboratory for Research on Learning and Development, LEAD–CNRS UMR5022, Université de Bourgogne, Dijon, France
- Philippe Albouy
- CERVO Brain Research Center, School of Psychology, Laval University, Québec, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS), CRBLM, Montreal, Canada
- Université Claude Bernard Lyon 1, CNRS, INSERM, Centre de Recherche en Neurosciences de Lyon CRNL U1028 UMR5292, Bron, France
11. Nourski KV, Steinschneider M, Rhone AE, Berger JI, Dappen ER, Kawasaki H, Howard MA III. Intracranial electrophysiology of spectrally degraded speech in the human cortex. Front Hum Neurosci 2024; 17:1334742. PMID: 38318272; PMCID: PMC10839784; DOI: 10.3389/fnhum.2023.1334742.
Abstract
Introduction: Cochlear implants (CIs) are the treatment of choice for severe to profound hearing loss. Variability in CI outcomes persists despite advances in technology and is attributed in part to differences in cortical processing. Studying these differences directly in CI users is technically challenging, but spectrally degraded stimuli presented to normal-hearing individuals approximate the input to the central auditory system in CI users. This study used intracranial electroencephalography (iEEG) to investigate cortical processing of spectrally degraded speech.
Methods: Participants were adult neurosurgical epilepsy patients. Stimuli were the utterances /aba/ and /ada/, either spectrally degraded using a noise vocoder (1-4 bands) or presented without vocoding, and were delivered in a two-alternative forced-choice task. Cortical activity was recorded using depth and subdural iEEG electrodes. Electrode coverage included auditory core in posteromedial Heschl's gyrus (HGPM), superior temporal gyrus (STG), ventral and dorsal auditory-related areas, and prefrontal and sensorimotor cortex. Analysis focused on high-gamma (70-150 Hz) power augmentation and alpha (8-14 Hz) suppression.
Results: Task performance was at chance with 1-2 spectral bands and near ceiling for clear stimuli. Performance was variable with 3-4 bands, permitting identification of good and poor performers. There was no relationship between task performance and participants' demographic, audiometric, neuropsychological, or clinical profiles. Several response patterns were identified based on response magnitude and differences between stimulus conditions. HGPM responded strongly to all stimuli. A preference for clear speech emerged within non-core auditory cortex. Good performers typically had strong responses to all stimuli along the dorsal stream, including posterior STG, supramarginal gyrus, and precentral gyrus; a minority of sites in STG and supramarginal gyrus preferred vocoded stimuli. In poor performers, responses were typically restricted to clear speech. Alpha suppression was more pronounced in good performers, whereas poor performers exhibited greater involvement of posterior middle temporal gyrus when listening to clear speech.
Discussion: Responses to noise-vocoded speech provide insights into potential factors underlying CI outcome variability. The results emphasize differences in the balance of neural processing along the dorsal and ventral streams between good and poor performers, identify specific cortical regions that may have diagnostic and prognostic utility, and suggest potential targets for neuromodulation-based CI rehabilitation strategies.
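The noise-vocoding manipulation at the heart of this design can be sketched in a few lines: split speech into frequency bands, keep each band's slow temporal envelope, and re-impose it on band-limited noise. The band count matches the study's 1-4 band range, but the band edges, filter order, and envelope method below are illustrative assumptions, not the authors' stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_bands=4, f_lo=100.0, f_hi=6000.0):
    """Crude N-band noise vocoder: preserve each band's temporal envelope,
    discard its spectral fine structure by re-imposing the envelope on noise."""
    # Log-spaced band edges between f_lo and f_hi (an illustrative choice).
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))                               # band envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(x)))   # band-limited noise
        out += env * carrier
    # Match overall RMS to the input so degraded and clear stimuli are comparable.
    return out * (np.sqrt(np.mean(x**2)) / (np.sqrt(np.mean(out**2)) + 1e-12))
```

With 1-2 bands almost no spectral detail survives, which is consistent with the chance-level identification reported above.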
Affiliation(s)
- Kirill V. Nourski
  - Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
  - Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Mitchell Steinschneider
  - Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
  - Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Ariane E. Rhone
  - Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Joel I. Berger
  - Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Emily R. Dappen
  - Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
  - Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Hiroto Kawasaki
  - Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Matthew A. Howard III
  - Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
  - Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
  - Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, United States
12
Toker D, Müller E, Miyamoto H, Riga MS, Lladó-Pelfort L, Yamakawa K, Artigas F, Shine JM, Hudson AE, Pouratian N, Monti MM. Criticality supports cross-frequency cortical-thalamic information transfer during conscious states. eLife 2024; 13:e86547. [PMID: 38180472] [PMCID: PMC10805384] [DOI: 10.7554/elife.86547] [Received: 01/31/2023] [Accepted: 11/27/2023] [Indexed: 01/06/2024]
Abstract
Consciousness is thought to be regulated by bidirectional information transfer between the cortex and thalamus, but the nature of this bidirectional communication - and its possible disruption in unconsciousness - remains poorly understood. Here, we present two main findings elucidating mechanisms of corticothalamic information transfer during conscious states. First, we identify a highly preserved spectral channel of cortical-thalamic communication that is present during conscious states, but which is diminished during the loss of consciousness and enhanced during psychedelic states. Specifically, we show that in humans, mice, and rats, information sent from either the cortex or thalamus via δ/θ/α waves (∼1-13 Hz) is consistently encoded by the other brain region by high γ waves (52-104 Hz); moreover, unconsciousness induced by propofol anesthesia or generalized spike-and-wave seizures diminishes this cross-frequency communication, whereas the psychedelic 5-methoxy-N,N-dimethyltryptamine (5-MeO-DMT) enhances this low-to-high frequency interregional communication. Second, we leverage numerical simulations and neural electrophysiology recordings from the thalamus and cortex of human patients, rats, and mice to show that these changes in cross-frequency cortical-thalamic information transfer may be mediated by excursions of low-frequency thalamocortical electrodynamics toward/away from edge-of-chaos criticality, or the phase transition from stability to chaos. Overall, our findings link thalamic-cortical communication to consciousness, and further offer a novel, mathematically well-defined framework to explain the disruption to thalamic-cortical information transfer during unconscious states.
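The low-to-high spectral channel described here (δ/θ/α activity in one region read out from high-γ in the other) is quantified in the paper with information-theoretic measures. As a much simpler linear stand-in, one can correlate the sender's low-frequency waveform with the receiver's high-gamma amplitude envelope. The band edges follow the abstract (∼1-13 Hz and 52-104 Hz); the filter design and correlation measure are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass(x, fs, lo, hi):
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

def cross_freq_corr(sender, receiver, fs, lo_band=(1, 13), hi_band=(52, 104)):
    """Correlate the sender's low-frequency waveform with the receiver's
    high-gamma amplitude envelope -- a crude linear stand-in for the
    low-to-high cross-frequency interregional coupling described above."""
    low = bandpass(sender, fs, *lo_band)
    hg_env = np.abs(hilbert(bandpass(receiver, fs, *hi_band)))
    low = low - low.mean()
    hg_env = hg_env - hg_env.mean()
    return float(np.dot(low, hg_env) / (np.linalg.norm(low) * np.linalg.norm(hg_env)))
```

A value near zero would indicate no linear coupling of this form; the paper's transfer measures additionally capture directed and nonlinear dependencies.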
Affiliation(s)
- Daniel Toker
  - Department of Neurology, University of California, Los Angeles, Los Angeles, United States
  - Department of Psychology, University of California, Los Angeles, Los Angeles, United States
- Eli Müller
  - Brain and Mind Centre, University of Sydney, Sydney, Australia
- Hiroyuki Miyamoto
  - Laboratory for Neurogenetics, RIKEN Center for Brain Science, Saitama, Japan
  - PRESTO, Japan Science and Technology Agency, Saitama, Japan
  - International Research Center for Neurointelligence, University of Tokyo, Nagoya, Japan
- Maurizio S Riga
  - Andalusian Center for Molecular Biology and Regenerative Medicine, Seville, Spain
- Laia Lladó-Pelfort
  - Departament de Ciències Bàsiques, Universitat de Vic-Universitat Central de Catalunya, Barcelona, Spain
- Kazuhiro Yamakawa
  - Laboratory for Neurogenetics, RIKEN Center for Brain Science, Saitama, Japan
  - Department of Neurodevelopmental Disorder Genetics, Institute of Brain Science, Nagoya City University Graduate School of Medical Science, Nagoya, Japan
- Francesc Artigas
  - Departament de Neurociències i Terapèutica Experimental, CSIC-Institut d’Investigacions Biomèdiques de Barcelona, Barcelona, Spain
  - Institut d’Investigacions Biomèdiques August Pi i Sunyer (IDIBAPS), Barcelona, Spain
  - Centro de Investigación Biomédica en Red de Salud Mental (CIBERSAM), Instituto de Salud Carlos III, Madrid, Spain
- James M Shine
  - Brain and Mind Centre, University of Sydney, Sydney, Australia
- Andrew E Hudson
  - Department of Anesthesiology, Veterans Affairs Greater Los Angeles Healthcare System, Los Angeles, United States
  - Department of Anesthesiology and Perioperative Medicine, University of California, Los Angeles, Los Angeles, United States
- Nader Pouratian
  - Department of Neurological Surgery, UT Southwestern Medical Center, Dallas, United States
- Martin M Monti
  - Department of Psychology, University of California, Los Angeles, Los Angeles, United States
  - Department of Neurosurgery, University of California, Los Angeles, Los Angeles, United States
13
Jin H, Witjes B, Roy M, Baillet S, de Vos CC. Neurophysiological oscillatory markers of hypoalgesia in conditioned pain modulation. Pain Rep 2023; 8:e1096. [PMID: 37881810] [PMCID: PMC10597579] [DOI: 10.1097/pr9.0000000000001096] [Received: 01/17/2023] [Revised: 06/27/2023] [Accepted: 07/10/2023] [Indexed: 10/27/2023]
Abstract
Introduction: Conditioned pain modulation (CPM) is an experimental procedure in which an ongoing noxious stimulus attenuates the pain perception caused by another noxious stimulus. Combining the CPM paradigm with concurrent electrophysiological recordings can establish whether experimentally modified pain perception is associated with modulations of neural oscillations.
Objectives: We aimed to characterize how CPM modifies pain perception and the underlying neural oscillations, and to determine whether these perceptual and/or neurophysiological effects differ in patients affected by chronic pain.
Methods: We presented noxious electrical stimuli to the right ankle before, during, and after CPM induced by an ice pack placed on the left forearm. Seventeen patients with chronic pain and 17 control participants rated the electrical pain in each experimental condition. We used magnetoencephalography to examine the anatomy-specific effects of CPM on the neural oscillatory responses to the electrical pain.
Results: Regardless of participant group, CPM reduced both subjective pain ratings and the neural responses to electrical pain (beta-band [15-35 Hz] oscillations in the sensorimotor cortex).
Conclusion: The pain-induced beta-band activity we observed may be associated with top-down modulation of pain, as reported in other perceptual modalities; the reduced beta-band responses during CPM may therefore index changes in top-down pain modulation.
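A minimal way to quantify the beta-band (15-35 Hz) response magnitude that is compared across conditions here is a bandpass filter plus Hilbert envelope. The filter order and the power summary below are illustrative assumptions, not the authors' MEG source-space pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def beta_power(x, fs, lo=15.0, hi=35.0):
    """Mean beta-band power of a 1-D signal: bandpass, take the analytic
    (Hilbert) amplitude envelope, and average its square over time."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    env = np.abs(hilbert(sosfiltfilt(sos, x)))
    return float(np.mean(env**2))
```

Comparing this quantity before versus during the conditioning stimulus, per sensor or source, is the kind of contrast that would reveal the beta suppression reported above.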
Affiliation(s)
- Hyerang Jin
  - McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Bart Witjes
  - Centre for Pain Medicine, Erasmus University Medical Centre, Rotterdam, the Netherlands
- Mathieu Roy
  - Department of Psychology, McGill University, Montreal, Canada
- Sylvain Baillet
  - McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
- Cecile C. de Vos
  - McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Canada
  - Centre for Pain Medicine, Erasmus University Medical Centre, Rotterdam, the Netherlands
14
Dura-Bernal S, Griffith EY, Barczak A, O'Connell MN, McGinnis T, Moreira JVS, Schroeder CE, Lytton WW, Lakatos P, Neymotin SA. Data-driven multiscale model of macaque auditory thalamocortical circuits reproduces in vivo dynamics. Cell Rep 2023; 42:113378. [PMID: 37925640] [PMCID: PMC10727489] [DOI: 10.1016/j.celrep.2023.113378] [Received: 09/07/2022] [Revised: 09/05/2023] [Accepted: 10/19/2023] [Indexed: 11/07/2023]
Abstract
We developed a detailed model of macaque auditory thalamocortical circuits, including primary auditory cortex (A1), medial geniculate body (MGB), and thalamic reticular nucleus, utilizing the NEURON simulator and NetPyNE tool. The A1 model simulates a cortical column with over 12,000 neurons and 25 million synapses, incorporating data on cell-type-specific neuron densities, morphology, and connectivity across six cortical layers. It is reciprocally connected to the MGB thalamus, which includes interneurons and core and matrix-layer-specific projections to A1. The model simulates multiscale measures, including physiological firing rates, local field potentials (LFPs), current source densities (CSDs), and electroencephalography (EEG) signals. Laminar CSD patterns, during spontaneous activity and in response to broadband noise stimulus trains, mirror experimental findings. Physiological oscillations emerge spontaneously across frequency bands comparable to those recorded in vivo. We elucidate population-specific contributions to observed oscillation events and relate them to firing and presynaptic input patterns. The model offers a quantitative theoretical framework to integrate and interpret experimental data and predict its underlying cellular and circuit mechanisms.
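The full model (12,000+ neurons in NEURON/NetPyNE) is far beyond a snippet, but the basic phenomenon it reproduces, oscillations emerging from reciprocally coupled excitatory-inhibitory populations, can be illustrated with a two-population Wilson-Cowan rate model. The parameters are the classic textbook limit-cycle set, not values from this study.

```python
import numpy as np

def sigmoid(x, a, theta):
    """Wilson-Cowan response function, shifted so S(0) = 0."""
    return 1.0 / (1.0 + np.exp(-a * (x - theta))) - 1.0 / (1.0 + np.exp(a * theta))

def wilson_cowan(T=500.0, dt=0.1, P=1.25, Q=0.0):
    """Euler-integrate coupled excitatory (E) and inhibitory (I) rate equations;
    with these classic parameters the system settles into a limit cycle."""
    c1, c2, c3, c4 = 16.0, 12.0, 15.0, 3.0    # E->E, I->E, E->I, I->I weights
    ae, th_e, ai, th_i = 1.3, 4.0, 2.0, 3.7   # response-function gains/thresholds
    n = int(T / dt)
    E = np.zeros(n)
    I = np.zeros(n)
    for k in range(n - 1):
        dE = -E[k] + (1 - E[k]) * sigmoid(c1 * E[k] - c2 * I[k] + P, ae, th_e)
        dI = -I[k] + (1 - I[k]) * sigmoid(c3 * E[k] - c4 * I[k] + Q, ai, th_i)
        E[k + 1] = E[k] + dt * dE
        I[k + 1] = I[k] + dt * dI
    return E, I
```

The data-driven model above goes much further, tying such emergent rhythms to specific cell types, layers, and measurable signals (LFP, CSD, EEG); this sketch only shows the core E-I oscillation mechanism.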
Affiliation(s)
- Salvador Dura-Bernal
  - Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Erica Y Griffith
  - Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Annamaria Barczak
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Monica N O'Connell
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Tammy McGinnis
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
- Joao V S Moreira
  - Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
- Charles E Schroeder
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
  - Departments of Psychiatry and Neurology, Columbia University Medical Center, New York, NY, USA
- William W Lytton
  - Department of Physiology and Pharmacology, State University of New York (SUNY) Downstate Health Sciences University, Brooklyn, NY, USA
  - Kings County Hospital Center, Brooklyn, NY, USA
- Peter Lakatos
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
  - Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA
- Samuel A Neymotin
  - Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY, USA
  - Department of Psychiatry, NYU Grossman School of Medicine, New York, NY, USA
15
Hovsepyan S, Olasagasti I, Giraud AL. Rhythmic modulation of prediction errors: A top-down gating role for the beta-range in speech processing. PLoS Comput Biol 2023; 19:e1011595. [PMID: 37934766] [PMCID: PMC10655987] [DOI: 10.1371/journal.pcbi.1011595] [Received: 06/17/2022] [Revised: 11/17/2023] [Accepted: 10/11/2023] [Indexed: 11/09/2023]
Abstract
Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-β) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.
Affiliation(s)
- Sevada Hovsepyan
  - Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Itsaso Olasagasti
  - Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Anne-Lise Giraud
  - Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
  - Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, France
16
Sauer A, Grent-'t-Jong T, Zeev-Wolf M, Singer W, Goldstein A, Uhlhaas PJ. Spectral and phase-coherence correlates of impaired auditory mismatch negativity (MMN) in schizophrenia: A MEG study. Schizophr Res 2023; 261:60-71. [PMID: 37708723] [DOI: 10.1016/j.schres.2023.08.033] [Received: 12/12/2022] [Revised: 06/21/2023] [Accepted: 08/31/2023] [Indexed: 09/16/2023]
Abstract
BACKGROUND: The auditory mismatch negativity (MMN) is robustly impaired in schizophrenia, but the mechanisms underlying its dysfunctional generation remain incompletely understood. This study examined the contribution of evoked spectral power and phase-coherence to deviance detection and its impairment in schizophrenia.
METHODS: Magnetoencephalography data were collected from 16 male schizophrenia patients and 16 male control participants during an auditory MMN paradigm. Analyses of event-related fields (ERFs), spectral power, and inter-trial phase-coherence (ITPC) focused on Heschl's gyrus, superior temporal gyrus, inferior/medial frontal gyrus, and thalamus.
RESULTS: MMNm ERF amplitudes were reduced in patients in temporal, frontal, and subcortical regions, accompanied by decreased theta-band responses and a diminished gamma-band response in auditory cortex. At theta/alpha frequencies, ITPC to deviant tones was reduced in patients in frontal cortex and thalamus. Patients also showed aberrant responses to standard tones, indexed by reduced theta-/alpha-band power and ITPC in temporal and frontal regions. Moreover, stimulus-specific adaptation was decreased at theta/alpha frequencies in left temporal regions, which correlated with reduced MMNm spectral power and ERF amplitude. Finally, the phase-reset of alpha oscillations after deviant tones in left thalamus was impaired, which correlated with impaired MMNm generation in auditory cortex. Importantly, both non-rhythmic and rhythmic components of spectral activity contributed to the MMNm response.
CONCLUSIONS: Deficits in theta-/alpha- and gamma-band activity in cortical and subcortical regions, together with impaired spectral responses to standard sounds, could constitute potential mechanisms of dysfunctional MMN generation in schizophrenia, providing a novel perspective on MMN deficits in the disorder.
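Inter-trial phase coherence, one of the key measures here, has a compact definition: the length of the mean unit phase vector across trials (0 for random phases, 1 for perfect alignment). The sketch below computes it per time point from broadband analytic phase; the study itself computes it per frequency from a time-frequency decomposition, so this is a simplified illustration.

```python
import numpy as np
from scipy.signal import hilbert

def itpc(trials):
    """Inter-trial phase coherence for an array of shape (n_trials, n_times):
    project each trial's instantaneous phase onto the unit circle, average
    across trials, and take the length of the resulting mean vector."""
    phases = np.angle(hilbert(trials, axis=-1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))
```

Reduced ITPC to deviants, as reported for patients here, would show up as a smaller mean-vector length despite comparable single-trial power.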
Affiliation(s)
- Andreas Sauer
  - Max Planck Institute for Brain Research, Max-von-Laue-Straße 4, 60438 Frankfurt am Main, Germany
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Deutschordenstr. 46, 60528 Frankfurt am Main, Germany
- Tineke Grent-'t-Jong
  - Department of Child and Adolescent Psychiatry, Charité-Universitätsmedizin Berlin, Augustenburgerplatz 1, 13353 Berlin, Germany
  - Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB Glasgow, Scotland, United Kingdom of Great Britain and Northern Ireland
- Maor Zeev-Wolf
  - Department of Education and Zlotowski Center for Neuroscience, Ben Gurion University of the Negev, Beer Sheva 84105, Israel
  - Gonda Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Wolf Singer
  - Max Planck Institute for Brain Research, Max-von-Laue-Straße 4, 60438 Frankfurt am Main, Germany
  - Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, Deutschordenstr. 46, 60528 Frankfurt am Main, Germany
  - Frankfurt Institute for Advanced Studies (FIAS), Ruth-Moufang-Straße 1, 60438 Frankfurt am Main, Germany
- Abraham Goldstein
  - Gonda Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Peter J Uhlhaas
  - Department of Child and Adolescent Psychiatry, Charité-Universitätsmedizin Berlin, Augustenburgerplatz 1, 13353 Berlin, Germany
  - Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, G12 8QB Glasgow, Scotland, United Kingdom of Great Britain and Northern Ireland
17
Wagner M, Rusiniak M, Higby E, Nourski KV. Sensory processing of native and non-native phonotactic patterns in the alpha and beta frequency bands. Neuropsychologia 2023; 189:108659. [PMID: 37579990] [PMCID: PMC10602391] [DOI: 10.1016/j.neuropsychologia.2023.108659] [Received: 12/21/2022] [Revised: 08/03/2023] [Accepted: 08/10/2023] [Indexed: 08/16/2023]
Abstract
The phonotactic patterns of one's native language are established within cortical network processing during development, and sensory processing of these patterns established in memory may be modulated by top-down signals within the alpha and beta frequency bands. To explore this, electroencephalograms (EEGs) were recorded from native Polish- and native English-speaking adults as they listened to spoken nonwords within same and different nonword pairs. The nonwords contained three phonological onset sequences that occur in both Polish and English (/pət/, /st/, /sət/) and one onset sequence, /pt/, which occurs in Polish but not in English onsets. Source localization modeling was used to transform 64-channel EEGs into brain source-level channels. Spectral power values in the low frequencies (2-29 Hz) were analyzed in response to the first nonword in nonword pairs within counterbalanced listening-task conditions, presented on separate testing days. For the with-task listening condition, participants performed a behavioral task on the second nonword in the pairs; for the without-task condition, they were only instructed to listen to the stimuli. Thus, in the with-task condition, the first nonword served as a cue for the second nonword, the target stimulus. The results revealed decreased spectral power in the beta frequency band for the with-task condition compared to the without-task condition in response to native language phonotactic patterns. In contrast, the task-related suppression effects in response to the non-native phonotactic pattern /pt/ for the English listeners extended into the alpha frequency band. These effects were localized to source channels in the left auditory cortex, the left anterior temporal cortex, and the occipital pole. This exploratory study revealed a pattern of results that, if replicated, suggests that native language speech perception is supported by modulations in the alpha and beta frequency bands.
Affiliation(s)
- Monica Wagner
  - St. John's University, 8000 Utopia Parkway, Queens, NY, 11439, USA
- Eve Higby
  - California State University, East Bay, 25800 Carlos Bee Blvd, Hayward, CA, 94542, USA
- Kirill V Nourski
  - The University of Iowa, 200 Hawkins Dr., Iowa City, IA, 52242, USA
18
Wang X, Delgado J, Marchesotti S, Kojovic N, Sperdin HF, Rihs TA, Schaer M, Giraud AL. Speech Reception in Young Children with Autism Is Selectively Indexed by a Neural Oscillation Coupling Anomaly. J Neurosci 2023; 43:6779-6795. [PMID: 37607822] [PMCID: PMC10552944] [DOI: 10.1523/jneurosci.0112-22.2023] [Received: 01/17/2022] [Revised: 07/02/2023] [Accepted: 07/07/2023] [Indexed: 08/24/2023]
Abstract
Communication difficulties are one of the core criteria in diagnosing autism spectrum disorder (ASD) and are often characterized by speech reception difficulties, whose biological underpinnings have not yet been identified. This deficit could denote atypical neuronal ensemble activity, as reflected by neural oscillations. Atypical cross-frequency oscillation coupling, in particular, could disrupt the joint tracking and prediction of dynamic acoustic stimuli, a dual process that is essential for speech comprehension. Whether such oscillatory anomalies already exist in very young children with ASD, and how specifically they relate to individual language reception capacity, is unknown. We collected electroencephalography (EEG) data from 64 very young children with and without ASD (mean age 3 years; 17 females, 47 males) while they were exposed to naturalistic, continuous speech. EEG power in frequency bands typically associated with phrase-level chunking (δ, 1-3 Hz), phonemic encoding (low-γ, 25-35 Hz), and top-down control (β, 12-20 Hz) was markedly reduced in ASD relative to typically developing (TD) children. Speech neural tracking by δ and θ (4-8 Hz) oscillations was also weaker in ASD than in TD children. After controlling for gaze-pattern differences, we found that the classical θ/γ coupling was replaced by an atypical β/γ coupling in children with ASD. This anomaly was the single most specific predictor of individual speech reception difficulties in ASD children. These findings suggest that early interventions (e.g., neurostimulation) targeting the disruption of β/γ coupling and the upregulation of θ/γ coupling could improve speech processing coordination in young children with ASD and help them engage in oral interactions.
SIGNIFICANCE STATEMENT: Very young children already present marked alterations of neural oscillatory activity in response to natural speech at the time of autism spectrum disorder (ASD) diagnosis. Hierarchical processing of phonemic-range and syllabic-range information (θ/γ coupling) is disrupted in ASD children. Abnormal bottom-up (low-γ) and top-down (low-β) coordination specifically predicts speech reception deficits in very young ASD children, and no other cognitive deficit.
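The θ/γ (and β/γ) coupling at issue can be quantified with a mean-vector-length phase-amplitude coupling index of the kind introduced by Canolty and colleagues: multiply the high-frequency amplitude envelope by the unit vector of the low-frequency phase and take the magnitude of the average. The band edges and filter order below are illustrative, not the authors' exact pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(25, 35)):
    """Mean-vector-length estimate of phase-amplitude coupling: how strongly
    the amp_band envelope is concentrated at a preferred phase_band phase."""
    def band(sig, lo, hi):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        return sosfiltfilt(sos, sig)
    phase = np.angle(hilbert(band(x, *phase_band)))   # e.g., theta phase
    amp = np.abs(hilbert(band(x, *amp_band)))         # e.g., low-gamma amplitude
    return float(np.abs(np.mean(amp * np.exp(1j * phase))))
```

Swapping `phase_band` to the beta range (12-20 Hz) would give the β/γ variant whose abnormal strengthening predicted speech reception difficulties here.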
Affiliation(s)
- Xiaoyue Wang
  - Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
  - Institut Pasteur, Université Paris Cité, Hearing Institute, Paris, France, 75012
- Jaime Delgado
  - Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
- Silvia Marchesotti
  - Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
- Nada Kojovic
  - Autism Brain & Behavior Lab, Department of Psychiatry, University of Geneva, Geneva, Switzerland, 1202
- Holger Franz Sperdin
  - Autism Brain & Behavior Lab, Department of Psychiatry, University of Geneva, Geneva, Switzerland, 1202
- Tonia A Rihs
  - Functional Brain Mapping Laboratory, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
- Marie Schaer
  - Autism Brain & Behavior Lab, Department of Psychiatry, University of Geneva, Geneva, Switzerland, 1202
- Anne-Lise Giraud
  - Auditory Language Group, Department of Basic Neuroscience, University of Geneva, Geneva, Switzerland, 1202
  - Institut Pasteur, Université Paris Cité, Hearing Institute, Paris, France, 75012
19
Theriault JE, Shaffer C, Dienel GA, Sander CY, Hooker JM, Dickerson BC, Barrett LF, Quigley KS. A functional account of stimulation-based aerobic glycolysis and its role in interpreting BOLD signal intensity increases in neuroimaging experiments. Neurosci Biobehav Rev 2023; 153:105373. [PMID: 37634556] [PMCID: PMC10591873] [DOI: 10.1016/j.neubiorev.2023.105373] [Received: 04/24/2023] [Revised: 07/28/2023] [Accepted: 08/23/2023] [Indexed: 08/29/2023]
Abstract
In aerobic glycolysis, oxygen is abundant, and yet cells metabolize glucose without using it, decreasing their ATP per glucose yield by 15-fold. During task-based stimulation, aerobic glycolysis occurs in localized brain regions, presenting a puzzle: why produce ATP inefficiently when, all else being equal, evolution should favor the efficient use of metabolic resources? The answer is that all else is not equal. We propose that a tradeoff exists between efficient ATP production and the efficiency with which ATP is spent to transmit information. Aerobic glycolysis, despite yielding little ATP per glucose, may support neuronal signaling in thin (< 0.5 µm), information-efficient axons. We call this the efficiency tradeoff hypothesis. This tradeoff has potential implications for interpretations of task-related BOLD "activation" observed in fMRI. We hypothesize that BOLD "activation" may index local increases in aerobic glycolysis, which support signaling in thin axons carrying "bottom-up" information, or "prediction error"-i.e., the BIAPEM (BOLD increases approximate prediction error metabolism) hypothesis. Finally, we explore implications of our hypotheses for human brain evolution, social behavior, and mental disorders.
Affiliation(s)
- Jordan E Theriault
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
- Clare Shaffer
  - Northeastern University, Department of Psychology, Boston, MA, USA
- Gerald A Dienel
  - Department of Neurology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
  - Department of Cell Biology and Physiology, University of New Mexico, Albuquerque, NM, USA
- Christin Y Sander
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
- Jacob M Hooker
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
- Bradford C Dickerson
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
  - Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
- Lisa Feldman Barrett
  - Northeastern University, Department of Psychology, Boston, MA, USA
  - Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
  - Department of Psychiatry, Massachusetts General Hospital and Harvard Medical School, Charlestown, MA, USA
- Karen S Quigley
  - Northeastern University, Department of Psychology, Boston, MA, USA
  - VA Bedford Healthcare System, Bedford, MA, USA
20
Daikoku T, Kumagaya S, Ayaya S, Nagai Y. Non-autistic persons modulate their speech rhythm while talking to autistic individuals. PLoS One 2023; 18:e0285591. [PMID: 37768917] [PMCID: PMC10538692] [DOI: 10.1371/journal.pone.0285591] [Received: 08/14/2022] [Accepted: 04/27/2023] [Indexed: 09/30/2023]
Abstract
How non-autistic persons modulate their speech rhythm while talking to autistic (AUT) individuals remains unclear. We investigated two types of phonological characteristics: (1) the frequency power of prosodic, syllabic, and phonetic rhythms and (2) the dynamic interaction among these rhythms, using speech between AUT and neurotypical (NT) individuals. Eight adults diagnosed with AUT (all men; age range, 24-44 years) and eight age-matched non-autistic NT adults (three women, five men; age range, 23-45 years) participated in this study. Six NT and eight AUT respondents were asked by one of two NT questioners (both men) to share their recent experiences on 12 topics. We included 87 samples of AUT-directed speech (from an NT questioner to an AUT respondent), 72 of NT-directed speech (from an NT questioner to an NT respondent), 74 of AUT speech (from an AUT respondent to an NT questioner), and 55 of NT speech (from an NT respondent to an NT questioner). We found similarities between AUT speech and AUT-directed speech, and between NT speech and NT-directed speech. Prosody and the interactions between prosodic, syllabic, and phonetic rhythms were significantly weaker in AUT-directed and AUT speech than in NT-directed and NT speech, respectively. AUT speech also showed weaker dynamic processing from higher to lower phonological bands (e.g., from prosody to syllable) than NT speech. Further, the weaker the frequency power of prosody in NT and AUT respondents, the weaker the frequency power of prosody in NT questioners, suggesting that NT individuals spontaneously imitate the speech rhythms of their NT and AUT interlocutors. Although the questioners' speech samples came from just two NT individuals, these findings suggest that the phonological characteristics of a speaker may influence those of the interlocutor.
Affiliation(s)
- Tatsuya Daikoku
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Shinichiro Kumagaya
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Satsuki Ayaya
- Research Center for Advanced Science and Technology, The University of Tokyo, Tokyo, Japan
- Yukie Nagai
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan
21
K A, Prasad S, Chakrabarty M. Trait anxiety modulates the detection sensitivity of negative affect in speech: an online pilot study. Front Behav Neurosci 2023; 17:1240043. [PMID: 37744950 PMCID: PMC10512416 DOI: 10.3389/fnbeh.2023.1240043] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2023] [Accepted: 08/21/2023] [Indexed: 09/26/2023] Open
Abstract
Acoustic perception of emotions in speech is relevant for humans to navigate the social environment optimally. While sensory perception is known to be influenced by ambient noise and bodily internal states (e.g., emotional arousal and anxiety), their relationship to human auditory perception is relatively less understood. In a supervised, online pilot experiment conducted outside an artificially controlled laboratory environment, we asked whether the detection sensitivity of emotions conveyed by human speech-in-noise (acoustic signals) varies between individuals with relatively lower and higher levels of subclinical trait anxiety. In the task, participants (n = 28) discriminated the target emotion conveyed by temporally unpredictable acoustic signals (signal-to-noise ratio = 10 dB), which were manipulated at four levels (Happy, Neutral, Fear, and Disgust). We calculated the empirical area under the curve (a measure of acoustic signal detection sensitivity) based on signal detection theory to answer our questions. Individuals with High trait anxiety, relative to Low, showed significantly lower detection sensitivities to acoustic signals of negative emotions (Disgust and Fear) and significantly lower detection sensitivities to acoustic signals when averaged across all emotions. The results from this pilot study, with a small but statistically relevant sample size, suggest that trait-anxiety levels influence the overall acoustic detection of speech-in-noise, especially signals conveying threatening/negative affect. The findings are relevant for future research on acoustic perception anomalies underlying affective traits and disorders.
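The empirical AUC used in this study as a detection-sensitivity measure is, at bottom, the normalized Mann-Whitney statistic: the probability that a randomly chosen signal trial outscores a randomly chosen noise trial, with ties counted as one half. A minimal sketch with made-up confidence ratings (not the study's data or exact pipeline):

```python
def empirical_auc(signal_scores, noise_scores):
    """Empirical AUC: probability that a randomly chosen signal trial
    receives a higher score than a randomly chosen noise trial,
    counting ties as 0.5 (the normalized Mann-Whitney U statistic)."""
    wins = 0.0
    for s in signal_scores:
        for n in noise_scores:
            if s > n:
                wins += 1.0
            elif s == n:
                wins += 0.5
    return wins / (len(signal_scores) * len(noise_scores))

# Hypothetical 1-5 confidence ratings for emotion-present vs. neutral trials.
present = [4, 5, 3, 5, 4]
absent = [2, 3, 1, 2, 3]
print(empirical_auc(present, absent))  # → 0.96
```

An AUC of 0.5 corresponds to chance-level detection and 1.0 to perfect separation, which is why lower values in the High trait-anxiety subgroup indicate reduced sensitivity.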
Affiliation(s)
- Achyuthanand K
- Department of Computational Biology, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Saurabh Prasad
- Department of Computer Science and Engineering, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Mrinmoy Chakrabarty
- Department of Social Sciences and Humanities, Indraprastha Institute of Information Technology Delhi, New Delhi, India
- Centre for Design and New Media, Indraprastha Institute of Information Technology Delhi, New Delhi, India
22
Pei C, Huang X, Qiu Y, Peng Y, Gao S, Biswal B, Yao D, Liu Q, Li F, Xu P. Frequency-specific directed interactions between whole-brain regions during sentence processing using multimodal stimulus. Neurosci Lett 2023; 812:137409. [PMID: 37487970 DOI: 10.1016/j.neulet.2023.137409] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2023] [Revised: 06/26/2023] [Accepted: 07/20/2023] [Indexed: 07/26/2023]
Abstract
Neural oscillations subserve a broad range of speech processing and language comprehension functions. Using an electroencephalogram (EEG), we investigated the frequency-specific directed interactions between whole-brain regions while the participants processed Chinese sentences using different modality stimuli (i.e., auditory, visual, and audio-visual). The results indicate that low-frequency responses correspond to the process of information flow aggregation in primary sensory cortices in different modalities. Information flow dominated by high-frequency responses exhibited characteristics of bottom-up flow from left posterior temporal to left frontal regions. The network pattern of top-down information flowing out of the left frontal lobe was presented by the joint dominance of low- and high-frequency rhythms. Overall, our results suggest that the brain may be modality-independent when processing higher-order language information.
Affiliation(s)
- Changfu Pei
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xunan Huang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Foreign Languages, University of Electronic Science and Technology of China, Sichuan, Chengdu 611731, China
- Yuan Qiu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yueheng Peng
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Shan Gao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Foreign Languages, University of Electronic Science and Technology of China, Sichuan, Chengdu 611731, China
- Bharat Biswal
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China; Department of Biomedical Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Qiang Liu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, Sichuan, Chengdu 610066, China.
- Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China.
- Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu 611731, China.
23
Banks MI, Krause BM, Berger DG, Campbell DI, Boes AD, Bruss JE, Kovach CK, Kawasaki H, Steinschneider M, Nourski KV. Functional geometry of auditory cortical resting state networks derived from intracranial electrophysiology. PLoS Biol 2023; 21:e3002239. [PMID: 37651504 PMCID: PMC10499207 DOI: 10.1371/journal.pbio.3002239] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/28/2022] [Revised: 09/13/2023] [Accepted: 07/07/2023] [Indexed: 09/02/2023] Open
Abstract
Understanding central auditory processing critically depends on defining underlying auditory cortical networks and their relationship to the rest of the brain. We addressed these questions using resting state functional connectivity derived from human intracranial electroencephalography. Mapping recording sites into a low-dimensional space where proximity represents functional similarity revealed a hierarchical organization. At a fine scale, a group of auditory cortical regions excluded several higher-order auditory areas and segregated maximally from the prefrontal cortex. On a mesoscale, the proximity of limbic structures to the auditory cortex suggested a limbic stream that parallels the classically described ventral and dorsal auditory processing streams. Identities of global hubs in anterior temporal and cingulate cortex depended on frequency band, consistent with diverse roles in semantic and cognitive processing. On a macroscale, observed hemispheric asymmetries were not specific to speech and language networks. This approach can be applied to multivariate brain data with respect to development, behavior, and disorders.
Affiliation(s)
- Matthew I. Banks
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Department of Neuroscience, University of Wisconsin, Madison, Wisconsin, United States of America
- Bryan M. Krause
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- D. Graham Berger
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Declan I. Campbell
- Department of Anesthesiology, University of Wisconsin, Madison, Wisconsin, United States of America
- Aaron D. Boes
- Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Joel E. Bruss
- Department of Neurology, The University of Iowa, Iowa City, Iowa, United States of America
- Christopher K. Kovach
- Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Mitchell Steinschneider
- Department of Neurology, Albert Einstein College of Medicine, New York, New York, United States of America
- Department of Neuroscience, Albert Einstein College of Medicine, New York, New York, United States of America
- Kirill V. Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, Iowa, United States of America
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, Iowa, United States of America
24
Abbasi O, Steingräber N, Chalas N, Kluger DS, Gross J. Spatiotemporal dynamics characterise spectral connectivity profiles of continuous speaking and listening. PLoS Biol 2023; 21:e3002178. [PMID: 37478152 DOI: 10.1371/journal.pbio.3002178] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Accepted: 05/31/2023] [Indexed: 07/23/2023] Open
Abstract
Speech production and perception are fundamental processes of human cognition that both rely on intricate processing mechanisms that are still poorly understood. Here, we study these processes by using magnetoencephalography (MEG) to comprehensively map connectivity of regional brain activity within the brain and to the speech envelope during continuous speaking and listening. Our results reveal not only a partly shared neural substrate for both processes but also a dissociation in space, delay, and frequency. Neural activity in motor and frontal areas is coupled to succeeding speech in delta band (1 to 3 Hz), whereas coupling in the theta range follows speech in temporal areas during speaking. Neural connectivity results showed a separation of bottom-up and top-down signalling in distinct frequency bands during speaking. Here, we show that frequency-specific connectivity channels for bottom-up and top-down signalling support continuous speaking and listening. These findings further shed light on the complex interplay between different brain regions involved in speech production and perception.
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nadine Steingräber
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
25
Draganov M, Galiano-Landeira J, Doruk Camsari D, Ramírez JE, Robles M, Chanes L. Noninvasive modulation of predictive coding in humans: causal evidence for frequency-specific temporal dynamics. Cereb Cortex 2023:7156779. [PMID: 37154618 DOI: 10.1093/cercor/bhad127] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/13/2023] [Revised: 03/17/2023] [Accepted: 03/18/2023] [Indexed: 05/10/2023] Open
Abstract
Increasing evidence indicates that the brain predicts sensory input based on past experiences, importantly constraining how we experience the world. Despite a growing interest in this framework, known as predictive coding, most such approaches to multiple psychological domains continue to be theoretical or primarily provide correlational evidence. Here, we explored the neural basis of predictive processing using noninvasive brain stimulation and provide causal evidence of frequency-specific modulations in humans. Participants received 20 Hz (associated with top-down/predictions), 50 Hz (associated with bottom-up/prediction errors), or sham transcranial alternating current stimulation on the left dorsolateral prefrontal cortex while performing a social perception task in which facial expression predictions were induced and subsequently confirmed or violated. Left prefrontal 20 Hz stimulation reinforced stereotypical predictions. In contrast, 50 Hz and sham stimulation failed to yield any significant behavioral effects. Moreover, the frequency-specific effect observed was further supported by electroencephalography data, which showed a boost of brain activity at the stimulated frequency band. These observations provide causal evidence for how predictive processing may be enabled in the human brain, setting up a needed framework to understand how it may be disrupted across brain-related conditions and potentially restored through noninvasive methods.
Affiliation(s)
- Metodi Draganov
- Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona 08193, Spain
- Jordi Galiano-Landeira
- Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona 08193, Spain
- Deniz Doruk Camsari
- Department of Psychiatry and Psychology, Mayo Clinic, Rochester, MN 55905, United States
- Jairo-Enrique Ramírez
- Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona 08193, Spain
- Marta Robles
- Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona 08193, Spain
- Department of Psychiatry and Psychotherapy, Medical Faculty, LMU Munich, Munich 80336, Germany
- Lorena Chanes
- Department of Clinical and Health Psychology, Universitat Autònoma de Barcelona, Barcelona 08193, Spain
- Institut de Neurociències, Universitat Autònoma de Barcelona, Barcelona 08193, Spain
- Serra Húnter Programme, Generalitat de Catalunya, Barcelona 08002, Spain
26
Kral A, Sharma A. Crossmodal plasticity in hearing loss. Trends Neurosci 2023; 46:377-393. [PMID: 36990952 PMCID: PMC10121905 DOI: 10.1016/j.tins.2023.02.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/17/2022] [Revised: 01/27/2023] [Accepted: 02/21/2023] [Indexed: 03/29/2023]
Abstract
Crossmodal plasticity is a textbook example of the ability of the brain to reorganize based on use. We review evidence from the auditory system showing that such reorganization has significant limits, is dependent on pre-existing circuitry and top-down interactions, and that extensive reorganization is often absent. We argue that the evidence does not support the hypothesis that crossmodal reorganization is responsible for closing critical periods in deafness, and crossmodal plasticity instead represents a neuronal process that is dynamically adaptable. We evaluate the evidence for crossmodal changes in both developmental and adult-onset deafness, which start as early as mild-moderate hearing loss and show reversibility when hearing is restored. Finally, crossmodal plasticity does not appear to affect the neuronal preconditions for successful hearing restoration. Given its dynamic and versatile nature, we describe how this plasticity can be exploited for improving clinical outcomes after neurosensory restoration.
Affiliation(s)
- Andrej Kral
- Institute of AudioNeuroTechnology and Department of Experimental Otology, Otolaryngology Clinics, Hannover Medical School, Hannover, Germany; Australian Hearing Hub, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia
- Anu Sharma
- Department of Speech Language and Hearing Science, Center for Neuroscience, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA.
27
Chalas N, Omigie D, Poeppel D, van Wassenhove V. Hierarchically nested networks optimize the analysis of audiovisual speech. iScience 2023; 26:106257. [PMID: 36909667 PMCID: PMC9993032 DOI: 10.1016/j.isci.2023.106257] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 12/22/2022] [Accepted: 02/17/2023] [Indexed: 02/22/2023] Open
Abstract
In conversational settings, seeing the speaker's face elicits internal predictions about the upcoming acoustic utterance. Understanding how the listener's cortical dynamics tune to the temporal statistics of audiovisual (AV) speech is thus essential. Using magnetoencephalography, we explored how large-scale frequency-specific dynamics of human brain activity adapt to AV speech delays. First, we show that the amplitude of phase-locked responses parametrically decreases with natural AV speech synchrony, a pattern that is consistent with predictive coding. Second, we show that the temporal statistics of AV speech affect large-scale oscillatory networks at multiple spatial and temporal resolutions. We demonstrate a spatial nestedness of oscillatory networks during the processing of AV speech: these oscillatory hierarchies are such that high-frequency activity (beta, gamma) is contingent on the phase response of low-frequency (delta, theta) networks. Our findings suggest that the endogenous temporal multiplexing of speech processing confers adaptability within the temporal regimes that are essential for speech comprehension.
Affiliation(s)
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, P.C., 48149 Münster, Germany
- CEA, DRF/Joliot, NeuroSpin, INSERM, Cognitive Neuroimaging Unit; CNRS; Université Paris-Saclay, 91191 Gif/Yvette, France
- School of Biology, Faculty of Sciences, Aristotle University of Thessaloniki, P.C., 54124 Thessaloniki, Greece
- Corresponding author
- Diana Omigie
- Department of Psychology, Goldsmiths University London, London, UK
- David Poeppel
- Department of Psychology, New York University, New York, NY 10003, USA
- Ernst Struengmann Institute for Neuroscience, 60528 Frankfurt am Main, Frankfurt, Germany
- Virginie van Wassenhove
- CEA, DRF/Joliot, NeuroSpin, INSERM, Cognitive Neuroimaging Unit; CNRS; Université Paris-Saclay, 91191 Gif/Yvette, France
- Corresponding author
28
Su Y, MacGregor LJ, Olasagasti I, Giraud AL. A deep hierarchy of predictions enables online meaning extraction in a computational model of human speech comprehension. PLoS Biol 2023; 21:e3002046. [PMID: 36947552 PMCID: PMC10079236 DOI: 10.1371/journal.pbio.3002046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2022] [Revised: 04/06/2023] [Accepted: 02/22/2023] [Indexed: 03/23/2023] Open
Abstract
Understanding speech requires mapping fleeting and often ambiguous soundwaves to meaning. While humans are known to exploit their capacity to contextualize to facilitate this process, how internal knowledge is deployed online remains an open question. Here, we present a model that extracts multiple levels of information from continuous speech online. The model applies linguistic and nonlinguistic knowledge to speech processing, by periodically generating top-down predictions and incorporating bottom-up incoming evidence in a nested temporal hierarchy. We show that a nonlinguistic context level provides semantic predictions informed by sensory inputs, which are crucial for disambiguating among multiple meanings of the same word. The explicit knowledge hierarchy of the model enables a more holistic account of the neurophysiological responses to speech compared to using lexical predictions generated by a neural network language model (GPT-2). We also show that hierarchical predictions reduce peripheral processing via minimizing uncertainty and prediction error. With this proof-of-concept model, we demonstrate that the deployment of hierarchical predictions is a possible strategy for the brain to dynamically utilize structured knowledge and make sense of the speech input.
Affiliation(s)
- Yaqing Su
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research “Evolving Language” (NCCR EvolvingLanguage), Geneva, Switzerland
- Lucy J. MacGregor
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Itsaso Olasagasti
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research “Evolving Language” (NCCR EvolvingLanguage), Geneva, Switzerland
- Anne-Lise Giraud
- Department of Fundamental Neuroscience, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Swiss National Centre of Competence in Research “Evolving Language” (NCCR EvolvingLanguage), Geneva, Switzerland
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, Paris, France
29
Xu N, Zhao B, Luo L, Zhang K, Shao X, Luan G, Wang Q, Hu W, Wang Q. Two stages of speech envelope tracking in human auditory cortex modulated by speech intelligibility. Cereb Cortex 2023; 33:2215-2228. [PMID: 35695785 DOI: 10.1093/cercor/bhac203] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2022] [Revised: 05/01/2022] [Accepted: 05/02/2022] [Indexed: 11/13/2022] Open
Abstract
The envelope is essential for speech perception. Recent studies have shown that cortical activity can track the acoustic envelope. However, whether the tracking strength reflects the extent of speech intelligibility processing remains controversial. Here, using stereo-electroencephalogram technology, we directly recorded the activity in human auditory cortex while subjects listened to either natural or noise-vocoded speech. These 2 stimuli have approximately identical envelopes, but the noise-vocoded speech does not have speech intelligibility. According to the tracking lags, we revealed 2 stages of envelope tracking: an early high-γ (60-140 Hz) power stage that preferred the noise-vocoded speech and a late θ (4-8 Hz) phase stage that preferred the natural speech. Furthermore, the decoding performance of high-γ power was better in primary auditory cortex than in nonprimary auditory cortex, consistent with its short tracking delay, while θ phase showed better decoding performance in right auditory cortex. In addition, high-γ responses with sustained temporal profiles in nonprimary auditory cortex were dominant in both envelope tracking and decoding. In sum, we suggested a functional dissociation between high-γ power and θ phase: the former reflects fast and automatic processing of brief acoustic features, while the latter correlates to slow build-up processing facilitated by speech intelligibility.
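The tracking lags underlying the two-stage distinction above come from asking at what delay neural activity best matches the stimulus envelope. A toy sketch of that logic with synthetic signals and a plain dot-product score (the study's actual analysis on stereo-EEG data is necessarily more involved):

```python
def best_lag(envelope, neural, max_lag):
    """Return the lag (in samples) at which `neural` best matches a
    delayed copy of `envelope`, scored by a simple dot product over
    the overlapping samples at each candidate lag."""
    def score(lag):
        pairs = zip(envelope, neural[lag:])
        return sum(e * n for e, n in pairs)
    return max(range(max_lag + 1), key=score)

# Synthetic example: the "neural" trace is the envelope delayed by 7 samples,
# mimicking a fixed response latency.
env = [float(i % 10) for i in range(200)]
neural = [0.0] * 7 + env
print(best_lag(env, neural, 20))  # → 7
```

In this framing, a short recovered lag corresponds to the early, automatic high-gamma stage the authors describe, and a longer lag to the slower theta-phase stage.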
Affiliation(s)
- Na Xu
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China.,National Clinical Research Center for Neurological Diseases, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Baotian Zhao
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Lu Luo
- School of Psychology, Beijing Sport University, No. 48 Xinxi Road, Haidian District, Beijing 100084, China
- Kai Zhang
- Department of Neurosurgery, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Xiaoqiu Shao
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Guoming Luan
- Beijing Key Laboratory of Epilepsy, Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, No. 50 Yikesong Xiangshan Road, Haidian District, Beijing 100093, China.,Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, No.10 Xitoutiao, You An Men, Beijing 100069, China
- Qian Wang
- Beijing Key Laboratory of Epilepsy, Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, No. 50 Yikesong Xiangshan Road, Haidian District, Beijing 100093, China.,School of Psychological and Cognitive Sciences, Beijing Key Laboratory of Behavior and Mental Health, Peking University, No.5 Yiheyuan Road, Haidian District, Beijing 100871, China.,IDG/McGovern Institute for Brain Research, Peking University, No.5 Yiheyuan Road, Haidian District, Beijing 100871, China
- Wenhan Hu
- Beijing Neurosurgical Institute, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China
- Qun Wang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China.,National Clinical Research Center for Neurological Diseases, No. 119 South Fourth Ring West Road, Fengtai District, Beijing 100070, China.,Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, No.10 Xitoutiao, You An Men, Beijing 100069, China
30
Togawa J, Matsumoto R, Usami K, Matsuhashi M, Inouchi M, Kobayashi K, Hitomi T, Nakae T, Shimotake A, Yamao Y, Kikuchi T, Yoshida K, Kunieda T, Miyamoto S, Takahashi R, Ikeda A. Enhanced phase-amplitude coupling of human electrocorticography selectively in the posterior cortical region during rapid eye movement sleep. Cereb Cortex 2022; 33:486-496. [PMID: 35288751 DOI: 10.1093/cercor/bhac079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 01/31/2022] [Accepted: 02/02/2022] [Indexed: 01/17/2023] Open
Abstract
The spatiotemporal dynamics of interaction between slow (delta or infraslow) waves and fast (gamma) activities during wakefulness and sleep are yet to be elucidated in human electrocorticography (ECoG). We evaluated phase-amplitude coupling (PAC), which reflects neuronal coding in information processing, using ECoG in 11 patients with intractable focal epilepsy. PAC was observed between slow waves of 0.5-0.6 Hz and gamma activities, not only during light sleep and slow-wave sleep (SWS) but even during wakefulness and rapid eye movement (REM) sleep. While PAC was high over a large region during SWS, it was stronger in the posterior cortical region around the temporoparietal junction than in the frontal cortical region during REM sleep. PAC tended to be higher in the posterior cortical region than in the frontal cortical region even during wakefulness. Our findings suggest that the posterior cortical region has a functional role in REM sleep and may contribute to the maintenance of the dreaming experience.
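The slow-wave/gamma coupling evaluated in this study is commonly quantified with a mean-vector-length statistic, one of several PAC estimators; the paper's exact pipeline may differ. A toy sketch on a synthetic, deliberately coupled signal, where the slow phase is constructed directly rather than extracted by filtering and a Hilbert transform:

```python
import cmath
import math

def mean_vector_length(phases, amplitudes):
    """Canolty-style PAC estimate: magnitude of the amplitude-weighted
    mean phase vector, normalized by total amplitude. Near 0 when the
    fast-band amplitude is unrelated to the slow-band phase."""
    v = sum(a * cmath.exp(1j * p) for p, a in zip(phases, amplitudes))
    return abs(v) / sum(amplitudes)

# Synthetic example: slow-wave phase advances at 0.5 Hz (1 kHz sampling);
# the "gamma" amplitude peaks near the slow-wave peak (phase 0).
fs, f_slow = 1000, 0.5
t = [i / fs for i in range(int(10 * fs))]            # 10 s of samples
phase = [2 * math.pi * f_slow * ti % (2 * math.pi) for ti in t]
coupled = [1.0 + 0.8 * math.cos(p) for p in phase]   # amplitude tied to phase
uncoupled = [1.0 for _ in phase]                     # flat amplitude

print(mean_vector_length(phase, coupled))    # ≈ 0.4 (strong coupling)
print(mean_vector_length(phase, uncoupled))  # ≈ 0.0 (no coupling)
```

In practice the phases and amplitudes would come from band-pass filtering the ECoG signal at the slow (0.5-0.6 Hz) and gamma bands, and the Tort modulation index is a common alternative estimator.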
Affiliation(s)
- Jumpei Togawa
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan.,Department of Respiratory Care and Sleep Control Medicine, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Riki Matsumoto
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan.,Divison of Neurology, Kobe University Graduate School of Medicine, Kobe 650-0017, Japan
- Kiyohide Usami
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Masao Matsuhashi
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
- Morito Inouchi
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan.,Department of Neurology, National Hospital Organization Kyoto Medical Center, Kyoto 612-8555, Japan
| | - Katsuya Kobayashi
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Takefumi Hitomi
- Department of Clinical Laboratory Medicine, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Takuro Nakae
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan.,Department of Neurosurgery, Shiga General Hospital, Moriyama, Shiga 524-8524, Japan
| | - Akihiro Shimotake
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Yukihiro Yamao
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Takayuki Kikuchi
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Kazumichi Yoshida
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Takeharu Kunieda
- Department of Neurosurgery, Ehime University Graduate School of Medicine, To-on, Ehime 791-0295, Japan
| | - Susumu Miyamoto
- Department of Neurosurgery, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Ryosuke Takahashi
- Department of Neurology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| | - Akio Ikeda
- Department of Epilepsy, Movement Disorders and Physiology, Kyoto University Graduate School of Medicine, Kyoto 606-8507, Japan
| |
Collapse
|
31
|
Das A, Menon V. Replicable patterns of causal information flow between hippocampus and prefrontal cortex during spatial navigation and spatial-verbal memory formation. Cereb Cortex 2022; 32:5343-5361. PMID: 35136979; PMCID: PMC9712747; DOI: 10.1093/cercor/bhac018.
Abstract
Interactions between the hippocampus and prefrontal cortex (PFC) play an essential role in both human spatial navigation and episodic memory, but the underlying causal flow of information between these regions across task domains is poorly understood. Here we use intracranial EEG recordings and spectrally resolved phase transfer entropy to investigate information flow during two different virtual spatial navigation and memory encoding/recall tasks and examine replicability of information flow patterns across spatial and verbal memory domains. Information theoretic analysis revealed a higher causal information flow from hippocampus to lateral PFC than in the reverse direction. Crucially, an asymmetric pattern of information flow was observed during memory encoding and recall periods of both spatial navigation tasks. Further analyses revealed frequency specificity of interactions characterized by greater bottom-up information flow from hippocampus to PFC in delta-theta band (0.5-8 Hz); in contrast, top-down information flow from PFC to hippocampus was stronger in beta band (12-30 Hz). Bayesian analysis revealed a high degree of replicability between the two spatial navigation tasks (Bayes factor > 5.46e+3) and across tasks spanning the spatial and verbal memory domains (Bayes factor > 7.32e+8). Our findings identify a domain-independent and replicable frequency-dependent feedback loop engaged during memory formation in the human brain.
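Phase transfer entropy of the kind used above reduces, in its simplest form, to a binned transfer-entropy estimate on two phase time series. The sketch below is illustrative only: the bin count, lag, and the toy driven system are assumptions, not the study's spectrally resolved estimator.

```python
import numpy as np

def phase_transfer_entropy(src_phase, dst_phase, lag, nbins=8):
    """Binned transfer entropy (nats) from src to dst at a given lag:
    TE = sum p(yf, yp, xp) * log[ p(yf | yp, xp) / p(yf | yp) ]."""
    edges = np.linspace(-np.pi, np.pi, nbins + 1)
    s = np.clip(np.digitize(src_phase, edges) - 1, 0, nbins - 1)
    d = np.clip(np.digitize(dst_phase, edges) - 1, 0, nbins - 1)
    yf, yp, xp = d[lag:], d[:-lag], s[:-lag]     # dst future, dst past, src past
    p = np.zeros((nbins, nbins, nbins))
    np.add.at(p, (yf, yp, xp), 1.0)
    p /= p.sum()
    p_yp_xp = p.sum(axis=0, keepdims=True)       # p(yp, xp)
    p_yf_yp = p.sum(axis=2, keepdims=True)       # p(yf, yp)
    p_yp = p.sum(axis=(0, 2), keepdims=True)     # p(yp)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = p * p_yp / (p_yp_xp * p_yf_yp)
        return float(np.nansum(np.where(p > 0, p * np.log(ratio), 0.0)))

# Toy directed system: y's phase follows x's phase `lag` samples later.
rng = np.random.default_rng(1)
x = np.angle(np.exp(1j * np.cumsum(0.2 + 0.05 * rng.standard_normal(20000))))
lag = 5
y = np.angle(np.exp(1j * (np.r_[x[:lag], x[:-lag]] + 0.3 * rng.standard_normal(x.size))))

te_xy = phase_transfer_entropy(x, y, lag)   # driver -> receiver: large
te_yx = phase_transfer_entropy(y, x, lag)   # receiver -> driver: near estimator bias
```

Asymmetry of the two estimates, not their absolute size, is what supports a directionality claim; plug-in estimates carry a positive bias that is usually handled with surrogates.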
Affiliation(s)
- Anup Das
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Vinod Menon
- Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Department of Neurology and Neurological Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Stanford Neurosciences Institute, Stanford University School of Medicine, Stanford, CA 94305, USA

32
Daikoku T, Goswami U. Hierarchical amplitude modulation structures and rhythm patterns: Comparing Western musical genres, song, and nature sounds to Babytalk. PLoS One 2022; 17:e0275631. PMID: 36240225; PMCID: PMC9565671; DOI: 10.1371/journal.pone.0275631.
Abstract
Statistical learning of physical stimulus characteristics is important for the development of cognitive systems like language and music. Rhythm patterns are a core component of both systems, and rhythm is key to language acquisition by infants. Accordingly, the physical stimulus characteristics that yield speech rhythm in "Babytalk" may also describe the hierarchical rhythmic relationships that characterize human music and song. Computational modelling of the amplitude envelope of "Babytalk" (infant-directed speech, IDS) using a demodulation approach (Spectral-Amplitude Modulation Phase Hierarchy model, S-AMPH) can describe these characteristics. S-AMPH modelling of Babytalk has shown previously that bands of amplitude modulations (AMs) at different temporal rates and their phase relations help to create its structured inherent rhythms. Additionally, S-AMPH modelling of children's nursery rhymes shows that different rhythm patterns (trochaic, iambic, dactylic) depend on the phase relations between AM bands centred on ~2 Hz and ~5 Hz. The importance of these AM phase relations was confirmed via a second demodulation approach (Probabilistic Amplitude Demodulation, PAD). Here we apply both S-AMPH and PAD to demodulate the amplitude envelopes of Western musical genres and songs. Quasi-rhythmic and non-human sounds found in nature (birdsong, rain, wind) were used for control analyses. We expected that, from an AM perspective, the physical stimulus characteristics of human music and song would match those of IDS. Given prior speech-based analyses, we also expected that AM cycles derived from the modelling would identify musical units like crotchets, quavers and demi-quavers. Both models revealed a hierarchically nested AM structure for music and song, but not for nature sounds, and this structure matched that of IDS. Both models also generated systematic AM cycles corresponding to musical units like crotchets and quavers. Both music and language are created by humans and shaped by culture; acoustic rhythm in IDS and music appears to depend on many of the same physical characteristics, facilitating learning.
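Both S-AMPH and PAD start from the amplitude envelope of the acoustic signal. A minimal numpy-only sketch (not the papers' filterbank models; the stimulus and rates below are assumptions) recovers an envelope via the analytic signal and reads off the dominant AM rate:

```python
import numpy as np

def amplitude_envelope(x):
    """Amplitude envelope via the analytic signal (FFT implementation of the Hilbert transform)."""
    n = x.size
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.abs(np.fft.ifft(spec * h))

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
# Toy stimulus: a 100 Hz carrier amplitude-modulated at ~2 Hz, loosely mimicking
# the stressed-syllable rate emphasized in IDS.
stimulus = (1 + 0.8 * np.sin(2 * np.pi * 2.0 * t)) * np.sin(2 * np.pi * 100.0 * t)

env = amplitude_envelope(stimulus)
env_ac = env - env.mean()                      # remove DC before taking the spectrum
spec = np.abs(np.fft.rfft(env_ac))
freqs = np.fft.rfftfreq(env_ac.size, 1 / fs)
dominant_am = freqs[np.argmax(spec)]           # dominant modulation rate, near 2 Hz
```

The S-AMPH analysis goes further by filtering this envelope into nested AM bands (e.g., around ~2 Hz and ~5 Hz) and examining their phase relations.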
Affiliation(s)
- Tatsuya Daikoku
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom
- International Research Center for Neurointelligence, The University of Tokyo, Bunkyo City, Tokyo, Japan
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
- Usha Goswami
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, United Kingdom

33
Chao ZC, Huang YT, Wu CT. A quantitative model reveals a frequency ordering of prediction and prediction-error signals in the human brain. Commun Biol 2022; 5:1076. PMID: 36216885; PMCID: PMC9550773; DOI: 10.1038/s42003-022-04049-6.
Abstract
The human brain is proposed to harbor a hierarchical predictive coding neuronal network underlying perception, cognition, and action. In support of this theory, feedforward signals for prediction error have been reported. However, the identification of feedback prediction signals has been elusive due to their causal entanglement with prediction-error signals. Here, we use a quantitative model to decompose these signals in electroencephalography during an auditory task, and identify their spatio-spectral-temporal signatures across two functional hierarchies. Two prediction signals are identified in the period prior to the sensory input: a low-level signal representing the tone-to-tone transition in the high beta frequency band, and a high-level signal for the multi-tone sequence structure in the low beta band. Subsequently, prediction-error signals dependent on the prior predictions are found in the gamma band. Our findings reveal a frequency ordering of prediction signals and their hierarchical interactions with prediction-error signals supporting predictive coding theory.
Affiliation(s)
- Zenas C Chao
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- Yiyuan Teresa Huang
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan
- Chien-Te Wu
- International Research Center for Neurointelligence (WPI-IRCN), UTIAS, The University of Tokyo, Tokyo, Japan
- School of Occupational Therapy, College of Medicine, National Taiwan University, Taipei, Taiwan

34
Rupp K, Hect JL, Remick M, Ghuman A, Chandrasekaran B, Holt LL, Abel TJ. Neural responses in human superior temporal cortex support coding of voice representations. PLoS Biol 2022; 20:e3001675. PMID: 35900975; PMCID: PMC9333263; DOI: 10.1371/journal.pbio.3001675.
Abstract
The ability to recognize abstract features of voice during auditory perception is an intricate feat of human audition. For the listener, this occurs in near-automatic fashion to seamlessly extract complex cues from a highly variable auditory signal. Voice perception depends on specialized regions of auditory cortex, including superior temporal gyrus (STG) and superior temporal sulcus (STS). However, the nature of voice encoding at the cortical level remains poorly understood. We leverage intracerebral recordings across human auditory cortex during presentation of voice and nonvoice acoustic stimuli to examine voice encoding at the cortical level in 8 patient-participants undergoing epilepsy surgery evaluation. We show that voice selectivity increases along the auditory hierarchy from supratemporal plane (STP) to the STG and STS. Results show accurate decoding of vocalizations from human auditory cortical activity even in the complete absence of linguistic content. These findings show an early, less-selective temporal window of neural activity in the STG and STS followed by a sustained, strongly voice-selective window. Encoding models demonstrate divergence in the encoding of acoustic features along the auditory hierarchy, wherein STG/STS responses are best explained by voice category and acoustics, as opposed to acoustic features of voice stimuli alone. This is in contrast to neural activity recorded from STP, in which responses were accounted for by acoustic features. These findings support a model of voice perception that engages categorical encoding mechanisms within STG and STS to facilitate feature extraction. Voice perception occurs via specialized networks in higher order auditory cortex, but how voice features are encoded remains a central unanswered question. Using human intracerebral recordings of auditory cortex, this study provides evidence for categorical encoding of voice.
Affiliation(s)
- Kyle Rupp
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Jasmine L. Hect
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Madison Remick
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Avniel Ghuman
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Lori L. Holt
- Department of Psychology, Carnegie Mellon University, Pittsburgh, Pennsylvania, United States of America
- Taylor J. Abel
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America

35
Hayat H, Marmelshtein A, Krom AJ, Sela Y, Tankus A, Strauss I, Fahoum F, Fried I, Nir Y. Reduced neural feedback signaling despite robust neuron and gamma auditory responses during human sleep. Nat Neurosci 2022; 25:935-943. PMID: 35817847; PMCID: PMC9276533; DOI: 10.1038/s41593-022-01107-4.
Abstract
During sleep, sensory stimuli rarely trigger a behavioral response or conscious perception. However, it remains unclear whether sleep inhibits specific aspects of sensory processing, such as feedforward or feedback signaling. Here, we presented auditory stimuli (for example, click-trains, words, music) during wakefulness and sleep in patients with epilepsy, while recording neuronal spiking, microwire local field potentials, intracranial electroencephalogram and polysomnography. Auditory stimuli induced robust and selective spiking and high-gamma (80-200 Hz) power responses across the lateral temporal lobe during both non-rapid eye movement (NREM) and rapid eye movement (REM) sleep. Sleep only moderately attenuated response magnitudes, mainly affecting late responses beyond early auditory cortex and entrainment to rapid click-trains in NREM sleep. By contrast, auditory-induced alpha-beta (10-30 Hz) desynchronization (that is, decreased power), prevalent in wakefulness, was strongly reduced in sleep. Thus, extensive auditory responses persist during sleep whereas alpha-beta power decrease, likely reflecting neural feedback processes, is deficient. More broadly, our findings suggest that feedback signaling is key to conscious sensory processing.
Affiliation(s)
- Hanna Hayat
- Department of Physiology and Pharmacology, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Aaron J Krom
- Department of Physiology and Pharmacology, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Department of Anesthesiology and Critical Care Medicine, Hadassah-Hebrew University Medical Center, Faculty of Medicine, Hebrew University of Jerusalem, Jerusalem, Israel
- Yaniv Sela
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Ariel Tankus
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Functional Neurosurgery Unit, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Ido Strauss
- Functional Neurosurgery Unit, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Firas Fahoum
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- EEG and Epilepsy Unit, Department of Neurology, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Itzhak Fried
- Functional Neurosurgery Unit, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Department of Neurosurgery, University of California Los Angeles, Los Angeles, CA, USA
- Yuval Nir
- Department of Physiology and Pharmacology, Sackler School of Medicine, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Department of Biomedical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv, Israel
- The Sieratzki-Sagol Center for Sleep Medicine, Tel-Aviv Sourasky Medical Center, Tel-Aviv, Israel

36
Samiee S, Vuvan D, Florin E, Albouy P, Peretz I, Baillet S. Cross-Frequency Brain Network Dynamics Support Pitch Change Detection. J Neurosci 2022; 42:3823-3835. PMID: 35351829; PMCID: PMC9087716; DOI: 10.1523/jneurosci.0630-21.2022.
Abstract
Processing auditory sequences involves multiple brain networks and is crucial to complex perception associated with music appreciation and speech comprehension. We used time-resolved cortical imaging in a pitch change detection task to detail the underlying nature of human brain network activity at the rapid time scales of neurophysiology. In response to tone sequence presentation, we observed slow inter-regional signaling at the pace of tone presentations (2-4 Hz) that was directed from auditory cortex toward both inferior frontal and motor cortices. Symmetrically, motor cortex exerted directed influence onto auditory and inferior frontal cortices via bursts of faster (15-35 Hz) activity. These bursts occurred precisely at the expected latencies of each tone in a sequence. This interdependency between slow and fast neurophysiological activity yielded a form of local cross-frequency phase-amplitude coupling in auditory cortex, whose strength varied dynamically and peaked when pitch changes were anticipated. We clarified the mechanistic relevance of these observations to behavior by including a group of individuals with congenital amusia, as a model of altered function in processing sound sequences. In amusia, we found a depression of inter-regional slow signaling toward motor and inferior frontal cortices, and a chronic overexpression of slow/fast phase-amplitude coupling in auditory cortex. These observations are compatible with a misalignment between the respective neurophysiological mechanisms of stimulus encoding and internal predictive signaling, which was absent in controls. In summary, our study provides a functional and mechanistic account of neurophysiological activity for predictive, sequential timing of auditory inputs. SIGNIFICANCE STATEMENT: Auditory sequences are processed by extensive brain networks involving multiple systems. In particular, fronto-temporal brain connections participate in the encoding of sequential auditory events, but their study has so far been limited to static depictions. This study details the nature of oscillatory brain activity involved in these inter-regional interactions in human participants. It demonstrates how directed, polyrhythmic oscillatory interactions between auditory and motor cortical regions provide a functional account of predictive timing of incoming items in an auditory sequence. In addition, we show the functional relevance of these observations to behavior, with data from both normal-hearing participants and a rare cohort of individuals with congenital amusia, considered here as a model of altered function in processing sound sequences.
Affiliation(s)
- Soheila Samiee
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A2B4, Canada
- Mila, Quebec AI Institute, Montreal, Quebec H2S 3H1, Canada
- Dominique Vuvan
- International Laboratory for Brain, Music, and Sound Research, University of Montreal, Montreal, Quebec H3C 3J7, Canada
- Psychology Department, Skidmore College, Saratoga Springs, New York 12866
- Esther Florin
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A2B4, Canada
- Institute of Clinical Neuroscience and Medical Psychology, Medical Faculty, Heinrich Heine University, Düsseldorf 40225, Germany
- Philippe Albouy
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A2B4, Canada
- International Laboratory for Brain, Music, and Sound Research, University of Montreal, Montreal, Quebec H3C 3J7, Canada
- Psychology Department, CERVO Brain Research Center, Laval University, Montreal, Quebec G1V 0A6, Canada
- Isabelle Peretz
- International Laboratory for Brain, Music, and Sound Research, University of Montreal, Montreal, Quebec H3C 3J7, Canada
- Sylvain Baillet
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montreal, Quebec H3A2B4, Canada

37
Munoz Musat E, Rohaut B, Sangare A, Benhaiem JM, Naccache L. Hypnotic Induction of Deafness to Elementary Sounds: An Electroencephalography Case-Study and a Proposed Cognitive and Neural Scenario. Front Neurosci 2022; 16:756651. PMID: 35368254; PMCID: PMC8969744; DOI: 10.3389/fnins.2022.756651.
Abstract
Hypnosis can be conceived as a unique opportunity to explore how top-down effects influence various conscious and non-conscious processes. In the field of perception, such modulatory effects have been described in distinct sensory modalities. In the present study, we focused on the auditory channel and aimed to create a radical deafness to elementary sounds through a specific hypnotic suggestion. We report a single-case study in a highly suggestible healthy volunteer who reported total hypnotically suggested deafness. We recorded high-density scalp EEG during an auditory oddball paradigm before and after the hypnotic deafness suggestion. While both the early auditory event-related potential to sounds (P1) and the mismatch negativity component were unaffected by hypnotic deafness, we observed a total disappearance of the late P3-complex component when the subject reported being deaf. Moreover, a centro-mesial positivity was present exclusively during the hypnotic condition, prior to the P3 complex. Interestingly, source localization suggested an anterior cingulate cortex (ACC) origin of this neural event. Multivariate decoding analyses confirmed and specified these findings. Resting-state analyses confirmed a similar level of conscious state in both conditions and suggested a functional disconnection between auditory areas and other cortical areas. Taken together, these results suggest the following plausible scenario: (i) preserved early processing of auditory information, unaffected by hypnotic suggestion; (ii) conscious setting of an inhibitory process (ACC) preventing conscious access to sounds; (iii) functional disconnection between the modular, unconscious representations of sounds and the global neuronal workspace. This single-subject study has several limitations, which are discussed, and remains open to alternative interpretations. This original proof of concept paves the way for a larger study that will test the predictions stemming from our theoretical model and from this first report.
Affiliation(s)
- Esteban Munoz Musat
- INSERM U1127, CNRS 7225, Paris Brain Institute, Paris, France
- Sorbonne Université, Paris, France
- Benjamin Rohaut
- INSERM U1127, CNRS 7225, Paris Brain Institute, Paris, France
- Sorbonne Université, Paris, France
- Department of Neurology, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique–Hôpitaux de Paris, Paris, France
- Aude Sangare
- INSERM U1127, CNRS 7225, Paris Brain Institute, Paris, France
- Sorbonne Université, Paris, France
- Lionel Naccache
- INSERM U1127, CNRS 7225, Paris Brain Institute, Paris, France
- Sorbonne Université, Paris, France
- Department of Neurophysiology, Groupe Hospitalier Pitié-Salpêtrière, Assistance Publique–Hôpitaux de Paris, Paris, France

38
Corcoran AW, Perera R, Koroma M, Kouider S, Hohwy J, Andrillon T. Expectations boost the reconstruction of auditory features from electrophysiological responses to noisy speech. Cereb Cortex 2022; 33:691-708. PMID: 35253871; PMCID: PMC9890472; DOI: 10.1093/cercor/bhac094.
Abstract
Online speech processing imposes significant computational demands on the listening brain, the underlying mechanisms of which remain poorly understood. Here, we exploit the perceptual "pop-out" phenomenon (i.e. the dramatic improvement of speech intelligibility after receiving information about speech content) to investigate the neurophysiological effects of prior expectations on degraded speech comprehension. We recorded electroencephalography (EEG) and pupillometry from 21 adults while they rated the clarity of noise-vocoded and sine-wave synthesized sentences. Pop-out was reliably elicited following visual presentation of the corresponding written sentence, but not following incongruent or neutral text. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that improvements in perceptual clarity were mediated via top-down signals that enhanced the quality of cortical speech representations. Spectral analysis further revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta-band power, alpha-band power, and pupil diameter were all increased following the provision of any written sentence information, irrespective of content. Together, these findings reveal distinctive profiles of neurophysiological activity that differentiate the content-specific processes associated with degraded speech comprehension from the context-specific processes invoked under adverse listening conditions.
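Envelope "reconstruction" of the kind reported above is typically a regularized linear backward model mapping lagged multichannel EEG onto the stimulus envelope. The following sketch on simulated data is illustrative only; the study's actual lags, regularization, and validation scheme will differ.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_ch, lags = 5000, 8, [0, 2, 4, 6]   # samples, channels, time lags (in samples)

# Simulated slow stimulus envelope, and EEG channels carrying a delayed copy plus noise.
env = 5 * np.convolve(rng.standard_normal(n), np.ones(50) / 50, mode="same")
mix = rng.standard_normal(n_ch)
eeg = np.stack([m * np.roll(env, 3) + 0.5 * rng.standard_normal(n) for m in mix], axis=1)

# Backward-model design matrix: every channel at several time lags.
X = np.hstack([np.roll(eeg, -L, axis=0) for L in lags])
y = env

# Ridge regression in closed form: w = (X'X + aI)^-1 X'y, trained on the first half.
half = n // 2
Xtr, ytr, Xte, yte = X[:half], y[:half], X[half:], y[half:]
a = 1.0
w = np.linalg.solve(Xtr.T @ Xtr + a * np.eye(X.shape[1]), Xtr.T @ ytr)

recon = Xte @ w
r = np.corrcoef(recon, yte)[0, 1]   # reconstruction accuracy on held-out data
```

The held-out correlation `r` is the usual "reconstruction accuracy" summary; condition differences in this quantity are what underlie claims like the pop-out effect above.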
Affiliation(s)
- Andrew W Corcoran
- Corresponding author: Room E672, 20 Chancellors Walk, Clayton, VIC 3800, Australia
- Ricardo Perera
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia
- Matthieu Koroma
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d'Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Sid Kouider
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d'Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Jakob Hohwy
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
- Thomas Andrillon
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
- Paris Brain Institute, Sorbonne Université, Inserm-CNRS, Paris 75013, France

39
Monahan PJ, Schertz J, Fu Z, Pérez A. Unified Coding of Spectral and Temporal Phonetic Cues: Electrophysiological Evidence for Abstract Phonological Features. J Cogn Neurosci 2022; 34:618-638. DOI: 10.1162/jocn_a_01817.
Abstract
Spoken word recognition models and phonological theory propose that abstract features play a central role in speech processing. It remains unknown, however, whether auditory cortex encodes linguistic features in a manner beyond the phonetic properties of the speech sounds themselves. We took advantage of the fact that English phonology functionally codes stops and fricatives as voiced or voiceless with two distinct phonetic cues: fricatives use a spectral cue, whereas stops use a temporal cue. Evidence that these cues can be grouped together would indicate the disjunctive coding of distinct phonetic cues into a functionally defined abstract phonological feature. In English, the voicing feature, which distinguishes the consonants [s] and [t] from [z] and [d], respectively, is hypothesized to be specified only for voiceless consonants (e.g., [s t]). Here, participants listened to syllables in a many-to-one oddball design while their EEG was recorded. In one block, both voiceless stops and fricatives were the standards; in the other block, both voiced stops and fricatives were the standards. A critical design element was the presence of intercategory variation within the standards. Therefore, a many-to-one relationship, which is necessary to elicit an MMN, existed only if the stop and fricative standards were grouped together. In addition to the ERPs, event-related spectral power was also analyzed. Results showed an MMN effect in the voiceless standards block (an asymmetric MMN) in a time window consistent with processing in auditory cortex, as well as increased prestimulus beta-band oscillatory power to voiceless standards. These findings suggest that (i) there is an auditory memory trace of the standards based on the shared (voiceless) feature, which is only functionally defined; (ii) voiced consonants are underspecified; and (iii) features can serve as a basis for predictive processing. Taken together, these results point toward auditory cortex's ability to functionally code distinct phonetic cues together and suggest that abstract features can be used to parse the continuous acoustic signal.
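The many-to-one logic above rests on the MMN difference wave (deviant average minus standard average). A toy simulation with assumed ERP shapes, latencies, and noise levels (none of them the study's actual data) illustrates the computation:

```python
import numpy as np

rng = np.random.default_rng(3)
fs, n_trials, n_samp = 250, 200, 150      # 250 Hz sampling, 200 trials, 600 ms epochs
t = np.arange(n_samp) / fs

def simulate_epochs(mmn_amp):
    """Epochs = P1-like positivity (~80 ms) + optional MMN-like negativity (~150 ms) + noise."""
    erp = (1.0 * np.exp(-((t - 0.08) ** 2) / (2 * 0.01 ** 2))
           - mmn_amp * np.exp(-((t - 0.15) ** 2) / (2 * 0.02 ** 2)))
    return erp + 2.0 * rng.standard_normal((n_trials, n_samp))

standards = simulate_epochs(mmn_amp=0.0)
deviants = simulate_epochs(mmn_amp=1.5)

# Difference wave: deviant average minus standard average; the MMN is its
# negative deflection in roughly the 100-250 ms window.
diff = deviants.mean(axis=0) - standards.mean(axis=0)
win = (t >= 0.1) & (t <= 0.25)
mmn_peak = diff[win].min()
```

Because the MMN lives in the difference wave, any condition asymmetry (as in the voiceless-standards block above) shows up directly in this subtraction.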
Affiliation(s)
- Zhanao Fu
- Cambridge University, United Kingdom
- Alejandro Pérez
- University of Toronto Scarborough, Ontario, Canada
- Cambridge University, United Kingdom

40
Proix T, Delgado Saa J, Christen A, Martin S, Pasley BN, Knight RT, Tian X, Poeppel D, Doyle WK, Devinsky O, Arnal LH, Mégevand P, Giraud AL. Imagined speech can be decoded from low- and cross-frequency intracranial EEG features. Nat Commun 2022; 13:48. PMID: 35013268; PMCID: PMC8748882; DOI: 10.1038/s41467-021-27725-3.
Abstract
Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met with limited success, mainly because the associated neural signals are weak and variable compared to overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency coupling contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e., perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
Affiliation(s)
- Timothée Proix
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Jaime Delgado Saa
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Andy Christen
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Stephanie Martin
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Brian N Pasley
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
- Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, Berkeley, USA
- Department of Psychology, University of California, Berkeley, Berkeley, USA
- Xing Tian
- Division of Arts and Sciences, New York University Shanghai, Shanghai, China
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- NYU-ECNU Institute of Brain and Cognitive Science at NYU Shanghai, Shanghai, China
- David Poeppel
- Department of Psychology, New York University, New York, NY, USA
- Ernst Strüngmann Institute for Neuroscience, Frankfurt, Germany
- Werner K Doyle
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Orrin Devinsky
- Department of Neurology, New York University Grossman School of Medicine, New York, NY, USA
- Luc H Arnal
- Institut de l'Audition, Institut Pasteur, INSERM, F-75012, Paris, France
- Pierre Mégevand
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland
- Division of Neurology, Geneva University Hospitals, Geneva, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, Faculty of Medicine, University of Geneva, Geneva, Switzerland

41
Ongoing neural oscillations influence behavior and sensory representations by suppressing neuronal excitability. Neuroimage 2021; 247:118746. [PMID: 34875382] [DOI: 10.1016/j.neuroimage.2021.118746]
Abstract
The ability to process and respond to external input is critical for adaptive behavior. Why, then, do neural and behavioral responses vary across repeated presentations of the same sensory input? Ongoing fluctuations of neuronal excitability are currently hypothesized to underlie the trial-by-trial variability in sensory processing. To test this, we capitalized on intracranial electrophysiology in neurosurgical patients performing an auditory discrimination task with visual cues: specifically, we examined the interaction between prestimulus alpha oscillations, excitability, task performance, and decoded neural stimulus representations. We found that strong prestimulus oscillations in the alpha+ band (i.e., alpha and neighboring frequencies), rather than the aperiodic signal, correlated with a low excitability state, indexed by reduced broadband high-frequency activity. This state was related to slower reaction times and reduced neural stimulus encoding strength. We propose that the alpha+ rhythm modulates excitability, thereby resulting in variability in behavior and sensory representations despite identical input.
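The prestimulus analysis summarized above has a simple computational core: estimate single-trial alpha power in a window before stimulus onset and relate it to behavior. The sketch below is illustrative only, not the authors' pipeline (channel selection, epoching, and the aperiodic/periodic decomposition are omitted, and all data are synthetic): it band-passes prestimulus epochs, takes the Hilbert envelope, and correlates trial-wise alpha power with reaction times.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from scipy.stats import spearmanr

def prestim_alpha_power(trials, fs, band=(8.0, 13.0)):
    """Mean alpha-band power per trial.
    trials: array (n_trials, n_samples) of prestimulus voltage traces."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, trials, axis=1)      # zero-phase band-pass
    envelope = np.abs(hilbert(filtered, axis=1))   # analytic amplitude
    return (envelope ** 2).mean(axis=1)            # power per trial

# Synthetic demo: trials with stronger 10 Hz alpha are given slower responses.
rng = np.random.default_rng(0)
fs, n_trials, n_samples = 500, 200, 500
t = np.arange(n_samples) / fs
amps = rng.uniform(0.5, 3.0, n_trials)
trials = amps[:, None] * np.sin(2 * np.pi * 10 * t) \
    + rng.normal(0, 0.5, (n_trials, n_samples))
rts = 0.3 + 0.02 * amps + rng.normal(0, 0.005, n_trials)

power = prestim_alpha_power(trials, fs)
rho, p = spearmanr(power, rts)  # positive rho: high alpha, slower responses
```

A rank correlation is used here because single-trial power is heavily skewed; the study's key extra step, separating oscillatory alpha from the aperiodic signal, is deliberately left out of this sketch.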
42
Tarasi L, Trajkovic J, Diciotti S, di Pellegrino G, Ferri F, Ursino M, Romei V. Predictive waves in the autism-schizophrenia continuum: A novel biobehavioral model. Neurosci Biobehav Rev 2021; 132:1-22. [PMID: 34774901] [DOI: 10.1016/j.neubiorev.2021.11.006]
Abstract
The brain is a predictive machine. Converging data suggest a diametric predictive strategy from autism spectrum disorders (ASD) to schizophrenic spectrum disorders (SSD). Whereas perceptual inference in ASD is rigidly shaped by incoming sensory information, the SSD population is prone to overestimate the precision of its prior models. Growing evidence considers brain oscillations pivotal biomarkers for understanding how top-down predictions integrate bottom-up input. Starting from the conceptualization of ASD and SSD as oscillopathies, we introduce an integrated perspective that ascribes the maladjustments of the predictive mechanism to dysregulation of neural synchronization. According to this proposal, disturbances in the oscillatory profile do not allow the appropriate trade-off between descending predictive signals, overweighted in SSD, and ascending prediction errors, overweighted in ASD. These opposing imbalances both result in an ill-adapted reaction to external challenges. This approach offers a neuro-computational model capable of linking predictive coding theories with electrophysiological findings, aiming to increase knowledge of the neuronal foundations of the two spectra and to stimulate hypothesis-driven rehabilitation and research perspectives.
Affiliation(s)
- Luca Tarasi
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy
- Jelena Trajkovic
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy
- Stefano Diciotti
- Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena, Italy; Alma Mater Research Institute for Human-Centered Artificial Intelligence, University of Bologna, Bologna, Italy
- Giuseppe di Pellegrino
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy
- Francesca Ferri
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Mauro Ursino
- Department of Electrical, Electronic, and Information Engineering "Guglielmo Marconi", University of Bologna, Cesena, Italy
- Vincenzo Romei
- Centro Studi e Ricerche in Neuroscienze Cognitive, Dipartimento di Psicologia, Alma Mater Studiorum - Università di Bologna, Campus di Cesena, 47521 Cesena, Italy; IRCCS Fondazione Santa Lucia, 00179 Rome, Italy

43
Leicht G, Björklund J, Vauth S, Mußmann M, Haaf M, Steinmann S, Rauh J, Mulert C. Gamma-band synchronisation in a frontotemporal auditory information processing network. Neuroimage 2021; 239:118307. [PMID: 34174389] [DOI: 10.1016/j.neuroimage.2021.118307]
Abstract
Neural oscillations are fundamental mechanisms of the human brain that enable coordinated activity of different brain regions during perceptual and cognitive processes. A frontotemporal network generated by means of gamma oscillations and comprising the auditory cortex (AC) and the anterior cingulate cortex (ACC) has been shown to be involved in cognitively demanding auditory information processing. This study aims to reveal patterns of functional and effective connectivity within this network in healthy subjects by means of simultaneously recorded electroencephalography (EEG) and functional magnetic resonance imaging (fMRI). We simultaneously recorded EEG and fMRI in 28 healthy subjects during the performance of a cognitively demanding auditory choice reaction task. Connectivity between the ACC and AC was analysed employing EEG and fMRI connectivity measures. We found a significant BOLD signal correlation between the ACC and AC, a significant task-dependent increase in fMRI connectivity (gPPI), and a significant increase in functional coupling in the gamma frequency range between these regions (LPS), which was increased in the top-down direction (Granger analysis). EEG and fMRI connectivity measures were positively correlated. The results of this study point to a top-down influence of the ACC on the AC exerted by means of gamma synchronisation. The replication of fMRI connectivity patterns in simultaneously recorded EEG data, and the correlation between connectivity measures from both domains, show that brain connectivity based on the synchronisation of gamma oscillations is mirrored in fMRI connectivity patterns.
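The directionality claim above rests on Granger causality: the past of signal B "Granger-causes" signal A if adding B's past to an autoregressive model of A reduces the prediction error. A minimal time-domain bivariate sketch on simulated data (illustrative only; real analyses such as this study's use model-order selection, often spectral formulations, and significance testing):

```python
import numpy as np

def granger_gain(x, y, order=2):
    """Log ratio of residual variances: restricted AR model of x (own past
    only) versus full model (own past + past of y). Values > 0 suggest
    that y Granger-causes x."""
    rows_r, rows_f, target = [], [], []
    for t in range(order, len(x)):
        rows_r.append(x[t - order:t])
        rows_f.append(np.concatenate([x[t - order:t], y[t - order:t]]))
        target.append(x[t])
    target = np.asarray(target)

    def resid_var(design):
        X = np.asarray(design)
        beta, *_ = np.linalg.lstsq(X, target, rcond=None)
        return np.var(target - X @ beta)

    return np.log(resid_var(rows_r) / resid_var(rows_f))

# Simulate a unidirectional coupling: y drives x, not the reverse.
rng = np.random.default_rng(3)
n = 4000
x, y = np.zeros(n), np.zeros(n)
for t in range(1, n):
    y[t] = 0.6 * y[t - 1] + rng.normal()
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.normal()

gain_y_to_x = granger_gain(x, y)   # clearly positive: y helps predict x
gain_x_to_y = granger_gain(y, x)   # near zero: x does not help predict y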
Affiliation(s)
- Gregor Leicht
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany
- Jonas Björklund
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany
- Sebastian Vauth
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany
- Marius Mußmann
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany
- Moritz Haaf
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany
- Saskia Steinmann
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany
- Jonas Rauh
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany
- Christoph Mulert
- Department of Psychiatry and Psychotherapy, Psychiatry Neuroimaging Branch (PNB), University Medical Center Hamburg-Eppendorf, Martinistr. 52, Hamburg D-20246, Germany; Center of Psychiatry, Justus-Liebig University, Giessen, Germany

44
Yuan P, Hu R, Zhang X, Wang Y, Jiang Y. Cortical entrainment to hierarchical contextual rhythms recomposes dynamic attending in visual perception. eLife 2021; 10:e65118. [PMID: 34086558] [PMCID: PMC8177885] [DOI: 10.7554/eLife.65118]
Abstract
Temporal regularity is ubiquitous and essential to guiding attention and coordinating behavior within a dynamic environment. Previous researchers have modeled attention as an internal rhythm that may entrain to first-order regularity from rhythmic events to prioritize information selection at specific time points. Using the attentional blink paradigm, here we show that higher-order regularity based on rhythmic organization of contextual features (pitch, color, or motion) may serve as a temporal frame to recompose the dynamic profile of visual temporal attention. Critically, this attentional reframing effect is well predicted by cortical entrainment to the higher-order contextual structure at the delta band, as well as by its coupling with the stimulus-driven alpha power. These results suggest that the human brain involuntarily exploits multiscale regularities in rhythmic contexts to recompose dynamic attending in visual perception, and they highlight neural entrainment as a central mechanism for optimizing our conscious experience of the world in the time dimension.
Affiliation(s)
- Peijun Yuan
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Ruichen Hu
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Xue Zhang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Ying Wang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Yi Jiang
- State Key Laboratory of Brain and Cognitive Science, CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Chinese Institute for Brain Research, Beijing, China
- Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei, China

45
Representational Content of Oscillatory Brain Activity during Object Recognition: Contrasting Cortical and Deep Neural Network Hierarchies. eNeuro 2021; 8:ENEURO.0362-20.2021. [PMID: 33903182] [PMCID: PMC8152371] [DOI: 10.1523/eneuro.0362-20.2021]
Abstract
Numerous theories propose a key role for brain oscillations in visual perception. Most of these theories postulate that sensory information is encoded in specific oscillatory components (e.g., power or phase) of specific frequency bands. These theories are often tested with whole-brain recording methods of low spatial resolution (EEG or MEG), or with depth recordings that provide a local, incomplete view of the brain. Opportunities to bridge the gap between local neural populations and whole-brain signals are rare. Here, using representational similarity analysis (RSA) in human participants, we explore which MEG oscillatory components (power and phase, across various frequency bands) correspond to low- or high-level visual object representations, using brain representations from fMRI, or layer-wise representations in seven recent deep neural networks (DNNs), as templates for low- and high-level object representations. The results showed that around stimulus onset and offset, most transient oscillatory signals correlated with low-level brain patterns (V1). During stimulus presentation, sustained β (∼20 Hz) and γ (>60 Hz) power best correlated with V1, while oscillatory phase components correlated with IT representations. Surprisingly, this pattern of results did not always correspond to low- or high-level DNN layer activity. In particular, sustained β-band oscillatory power reflected high-level DNN layers, suggestive of a feedback component. These results begin to bridge the gap between whole-brain oscillatory signals and object representations supported by local neuronal activations.
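The core of RSA, as used above, is a two-step comparison: build a representational dissimilarity matrix (RDM) over stimuli for each measurement space (MEG power/phase, fMRI, DNN layers), then correlate the RDMs. A minimal sketch with synthetic data (the pattern sizes, noise level, and seed are illustrative assumptions, not values from the study):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

def rdm(patterns):
    """Representational dissimilarity matrix in condensed form:
    1 - Pearson correlation between activity patterns for each stimulus pair.
    patterns: array (n_stimuli, n_features)."""
    return pdist(patterns, metric="correlation")

def rsa_score(rdm_a, rdm_b):
    """Second-order similarity: Spearman correlation of two RDMs."""
    return spearmanr(rdm_a, rdm_b).correlation

rng = np.random.default_rng(1)
n_stim = 12
latent = rng.normal(size=(n_stim, 5))       # shared stimulus geometry
meg_power = latent @ rng.normal(size=(5, 50)) \
    + 0.1 * rng.normal(size=(n_stim, 50))   # "MEG" patterns, same geometry
fmri_v1 = latent @ rng.normal(size=(5, 80)) \
    + 0.1 * rng.normal(size=(n_stim, 80))   # "fMRI" patterns, same geometry
unrelated = rng.normal(size=(n_stim, 80))   # control: no shared geometry

# MEG patterns should match the fMRI geometry far better than the control.
match = rsa_score(rdm(meg_power), rdm(fmri_v1))
baseline = rsa_score(rdm(meg_power), rdm(unrelated))
```

Spearman correlation is the conventional choice at the second level because only the rank order of dissimilarities, not their scale, is assumed comparable across measurement modalities.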
46
Jelinčić V, Van Diest I, Torta DM, von Leupoldt A. The breathing brain: The potential of neural oscillations for the understanding of respiratory perception in health and disease. Psychophysiology 2021; 59:e13844. [PMID: 34009644] [DOI: 10.1111/psyp.13844]
Abstract
Dyspnea, or breathlessness, is a symptom occurring in multiple acute and chronic illnesses; however, the understanding of the neural mechanisms underlying its subjective experience is limited. In this topical review, we propose neural oscillatory dynamics and cross-frequency coupling as viable candidates for a neural mechanism underlying respiratory perception, and as a technique warranting more attention in respiration research. With the evidence for the potential of neural oscillations in the study of normal and disordered breathing coming from disparate research fields with a limited history of interdisciplinary collaboration, the main objective of this review is to bring the existing research together and suggest future directions. The existing findings show that distinct limbic and cortical activations, as measured by hemodynamic responses, underlie dyspnea; however, the time scale of these activations is not well understood. The recent findings of oscillatory neural activity coupled with the respiratory rhythm could provide a solution to this problem, but more research with a focus on dyspnea is needed. We also touch on findings of distinct spectral patterns underlying changes in breathing due to experimental manipulations, meditation, and disease. Subsequently, we suggest general research directions and specific research designs to supplement the current knowledge using neural oscillation techniques. We argue for the benefits of interdisciplinary collaboration and of converging neuroimaging and behavioral methods to better explain the emergence of the subjective and aversive individual experience of dyspnea.
Affiliation(s)
- Valentina Jelinčić
- Research Group Health Psychology, Department of Psychology, KU Leuven, Leuven, Belgium
- Ilse Van Diest
- Research Group Health Psychology, Department of Psychology, KU Leuven, Leuven, Belgium
- Diana M Torta
- Research Group Health Psychology, Department of Psychology, KU Leuven, Leuven, Belgium
- Andreas von Leupoldt
- Research Group Health Psychology, Department of Psychology, KU Leuven, Leuven, Belgium

47
Preisig BC, Riecke L, Sjerps MJ, Kösem A, Kop BR, Bramson B, Hagoort P, Hervais-Adelman A. Selective modulation of interhemispheric connectivity by transcranial alternating current stimulation influences binaural integration. Proc Natl Acad Sci U S A 2021; 118:e2015488118. [PMID: 33568530] [PMCID: PMC7896308] [DOI: 10.1073/pnas.2015488118]
Abstract
Brain connectivity plays a major role in the encoding, transfer, and integration of sensory information. Interregional synchronization of neural oscillations in the γ-frequency band has been suggested as a key mechanism underlying perceptual integration. In a recent study, we found evidence for this hypothesis by showing that the modulation of interhemispheric oscillatory synchrony by means of bihemispheric high-density transcranial alternating current stimulation (HD-tACS) affects binaural integration of dichotic acoustic features. Here, we aimed to establish a direct link between oscillatory synchrony, effective brain connectivity, and binaural integration. We experimentally manipulated oscillatory synchrony (using bihemispheric γ-tACS with different interhemispheric phase lags) and assessed the effect on effective brain connectivity and binaural integration (as measured with functional MRI and a dichotic listening task, respectively). We found that tACS reduced intrahemispheric connectivity within the auditory cortices and that antiphase (interhemispheric phase lag 180°) tACS modulated connectivity between the two auditory cortices. Importantly, the changes in intra- and interhemispheric connectivity induced by tACS were correlated with changes in perceptual integration. Our results indicate that γ-band synchronization between the two auditory cortices plays a functional role in binaural integration, supporting the proposed role of interregional oscillatory synchrony in perceptual integration.
Affiliation(s)
- Basil C Preisig
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Department of Psychology, Neurolinguistics, University of Zurich, 8050 Zurich, Switzerland
- Lars Riecke
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6229 GT Maastricht, The Netherlands
- Matthias J Sjerps
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Anne Kösem
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Lyon Neuroscience Research Center, Cognition Computation and Neurophysiology Team, Université Claude Bernard Lyon 1, 69500 Bron, France
- Benjamin R Kop
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Bob Bramson
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Peter Hagoort
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, 6500 HB Nijmegen, The Netherlands
- Max Planck Institute for Psycholinguistics, 6525 XD Nijmegen, The Netherlands
- Alexis Hervais-Adelman
- Department of Psychology, Neurolinguistics, University of Zurich, 8050 Zurich, Switzerland
- Neuroscience Center Zurich, University of Zurich and Eidgenössische Technische Hochschule Zurich, 8057 Zurich, Switzerland

48
Tabbal J, Kabbara A, Khalil M, Benquet P, Hassan M. Dynamics of task-related electrophysiological networks: a benchmarking study. Neuroimage 2021; 231:117829. [PMID: 33549758] [DOI: 10.1016/j.neuroimage.2021.117829]
Abstract
Motor, sensory, and cognitive functions rely on dynamic reshaping of functional brain networks. Tracking these rapid changes is crucial for understanding information processing in the brain, but challenging due to the great variety of dimensionality reduction methods used at the network level and the limited number of evaluation studies. Using magnetoencephalography (MEG) combined with source separation (SS) methods, we present an integrated framework to track fast dynamics of electrophysiological brain networks. We evaluate nine SS methods applied to three independent MEG databases (N = 95) during motor and memory tasks. We report differences between these methods at the group and subject levels. We seek to help researchers choose the appropriate SS method objectively when tracking fast reconfiguration of functional brain networks, given its enormous benefits in cognitive and clinical neuroscience.
Affiliation(s)
- Judie Tabbal
- Univ Rennes, LTSI - U1099, F-35000 Rennes, France; Azm Center for Research in Biotechnology and Its Applications, EDST, Lebanese University, Beirut, Lebanon
- Aya Kabbara
- Univ Rennes, LTSI - U1099, F-35000 Rennes, France
- Mohamad Khalil
- Azm Center for Research in Biotechnology and Its Applications, EDST, Lebanese University, Beirut, Lebanon; CRSI Lab, Engineering Faculty, Lebanese University, Beirut, Lebanon

49
Neural Correlates of Vocal Auditory Feedback Processing: Unique Insights from Electrocorticography Recordings in a Human Cochlear Implant User. eNeuro 2021; 8:ENEURO.0181-20.2020. [PMID: 33419861] [PMCID: PMC7877459] [DOI: 10.1523/eneuro.0181-20.2020]
Abstract
There is considerable interest in understanding cortical processing and the function of top-down and bottom-up human neural circuits that control speech production. Research efforts to investigate these circuits are aided by analysis of spectro-temporal response characteristics of neural activity recorded by electrocorticography (ECoG). Further, cortical processing may be altered in the case of hearing-impaired cochlear implant (CI) users, as electric excitation of the auditory nerve creates a markedly different neural code for speech compared with that of the functionally intact hearing system. Studies of cortical activity in CI users typically record scalp potentials and are hampered by stimulus artifact contamination and by spatiotemporal filtering imposed by the skull. We present a unique case of a CI user who required direct recordings from the cortical surface using subdural electrodes implanted for epilepsy assessment. Using experimental conditions where the subject vocalized in the presence (CIs ON) or absence (CIs OFF) of auditory feedback, or listened to playback of self-vocalizations without production, we observed ECoG activity primarily in γ (32–70 Hz) and high γ (70–150 Hz) bands at focal regions on the lateral surface of the superior temporal gyrus (STG). High γ band responses differed in their amplitudes across conditions and cortical sites, possibly reflecting different rates of stimulus presentation and differing levels of neural adaptation. STG γ responses to playback and vocalization with auditory feedback were not different from responses to vocalization without feedback, indicating this activity reflects not only auditory, but also attentional, efference-copy, and sensorimotor processing during speech production.
50
Yusuf PA, Hubka P, Tillein J, Vinck M, Kral A. Deafness Weakens Interareal Couplings in the Auditory Cortex. Front Neurosci 2021; 14:625721. [PMID: 33551733] [PMCID: PMC7858676] [DOI: 10.3389/fnins.2020.625721]
Abstract
The function of the cerebral cortex essentially depends on the ability to form functional assemblies across different cortical areas serving different functions. Here we investigated how developmental hearing experience affects functional and effective interareal connectivity in the auditory cortex in an animal model with years-long and complete auditory deprivation (deafness) from birth, the congenitally deaf cat (CDC). Using intracortical multielectrode arrays, neuronal activity of adult hearing controls and CDCs was recorded in the primary auditory cortex and the secondary posterior auditory field (PAF). Ongoing activity as well as responses to acoustic stimulation (in adult hearing controls) and electric stimulation applied via cochlear implants (in adult hearing controls and CDCs) were analyzed. Pairwise phase consistency and Granger causality were used as functional connectivity measures. While the number of coupled sites was nearly identical between controls and CDCs, a reduced coupling strength between the primary and the higher-order field was found in CDCs under auditory stimulation. Such stimulus-related decoupling was particularly pronounced in the alpha band and in the top-down direction. Ongoing connectivity did not show such a decoupling. These findings suggest that developmental experience is essential for functional interareal interactions during sensory processing. The outcomes demonstrate that corticocortical couplings, particularly top-down connectivity, are compromised following congenital sensory deprivation.
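Pairwise phase consistency (PPC), one of the connectivity measures named in this abstract, is the mean cosine of the phase difference over all pairs of observations; unlike the phase-locking value it has no sample-size bias (Vinck et al., 2010). A compact sketch on synthetic phases (the von Mises concentration and sample size are illustrative, not the study's data):

```python
import numpy as np

def pairwise_phase_consistency(phases):
    """PPC: mean cos(theta_i - theta_j) over all pairs, computed in closed
    form from the resultant vector of the phase distribution.
    phases: 1-D array of phase angles in radians (one per trial/epoch)."""
    n = len(phases)
    z = np.exp(1j * phases).sum()
    # sum over all ordered pairs of cos differences equals |z|^2 - n
    return (np.abs(z) ** 2 - n) / (n * (n - 1))

rng = np.random.default_rng(2)
locked = rng.vonmises(mu=0.0, kappa=5.0, size=400)    # phase-locked trials
uniform = rng.uniform(-np.pi, np.pi, size=400)        # no phase preference

ppc_locked = pairwise_phase_consistency(locked)    # high: consistent phases
ppc_uniform = pairwise_phase_consistency(uniform)  # near zero: random phases
```

The closed form avoids the O(n²) pair loop; for interareal coupling, the same statistic is applied to phase differences between two sites rather than raw phases.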
Affiliation(s)
- Prasandhya Astagiri Yusuf
- Department of Medical Physics/Medical Technology Core Cluster IMERI, Faculty of Medicine, University of Indonesia, Jakarta, Indonesia
- Institute of AudioNeuroTechnology, Hannover Medical School, Hanover, Germany
- Department of Experimental Otology of the ENT Clinics, Hannover Medical School, Hanover, Germany
- Peter Hubka
- Institute of AudioNeuroTechnology, Hannover Medical School, Hanover, Germany
- Department of Experimental Otology of the ENT Clinics, Hannover Medical School, Hanover, Germany
- Jochen Tillein
- Institute of AudioNeuroTechnology, Hannover Medical School, Hanover, Germany
- Department of Experimental Otology of the ENT Clinics, Hannover Medical School, Hanover, Germany
- Department of Otorhinolaryngology, Goethe University, Frankfurt am Main, Germany
- MedEL Company, Innsbruck, Austria
- Martin Vinck
- Ernst Strüngmann Institut for Neuroscience in Cooperation with Max Planck Society, Frankfurt, Germany
- Donders Centre for Neuroscience, Radboud University, Department of Neuroinformatics, Nijmegen, Netherlands
- Andrej Kral
- Institute of AudioNeuroTechnology, Hannover Medical School, Hanover, Germany
- Department of Experimental Otology of the ENT Clinics, Hannover Medical School, Hanover, Germany
- Department of Biomedical Sciences, School of Medicine and Health Sciences, Macquarie University, Sydney, NSW, Australia