1.
Kamada C, Enatsu R, Imataka S, Kanno A, Ochi S, Mikuni N. Functional Brain Mapping Using Depth Electrodes. World Neurosurg 2024;188:e288-e296. PMID: 38796150. DOI: 10.1016/j.wneu.2024.05.098.
Abstract
OBJECTIVE: This study investigated the neurologic symptoms and stimulus intensities elicited by stimulation of deep structures and subcortical fibers with depth electrodes. METHODS: Seventeen patients with drug-refractory epilepsy who underwent functional brain mapping with depth electrodes were enrolled. Electrical stimulation at 50 Hz was applied, and diffusion tensor imaging was used to identify subcortical fibers. The structures responsible for the induced neurologic symptoms and the corresponding stimulus intensities were evaluated. RESULTS: Neurologic symptoms were induced in 11 of 17 patients. Opercular stimulation elicited neurologic symptoms in 6 patients at a median threshold of 4.0 mA (visceral/face/hand sensory, hand/throat motor, negative motor, and auditory symptoms). Insular stimulation induced neurologic symptoms in 4 patients at a median threshold of 4.0 mA (auditory, negative motor, and sensory symptoms). Stimulation of subcortical fibers induced symptoms in 5 of 9 patients at a median threshold of 4.5 mA. In the 8 patients who were mapped with both subdural and depth electrodes and showed symptoms with both, the thresholds of the depth electrodes were significantly lower than those of the subdural electrodes. CONCLUSIONS: Stimulation through depth electrodes can identify the function of deep structures and subcortical fibers at lower intensities than subdural electrodes.
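The depth-versus-subdural threshold comparison above is a paired, within-patient design (8 patients contributed both measurements), for which a nonparametric paired test such as the Wilcoxon signed-rank test is a natural fit. The abstract does not name the test used, and the threshold values below are invented for illustration, not the study's data:

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-patient stimulation thresholds (mA) for the 8 patients
# mapped with both electrode types; NOT the study's data.
depth_mA    = np.array([3.0, 4.0, 3.5, 4.5, 4.0, 3.0, 5.0, 4.0])
subdural_mA = np.array([6.0, 7.5, 6.0, 8.0, 7.0, 5.5, 9.0, 7.0])

# Paired nonparametric test of whether within-patient thresholds differ.
stat, p = wilcoxon(depth_mA, subdural_mA)
print(f"W = {stat}, two-sided p = {p:.4f}")
```

Because every invented difference here has the same sign, the signed-rank statistic is 0 and the test rejects at the usual 0.05 level.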
Affiliation(s)
- Chie Kamada
- Department of Neurosurgery, Sapporo Medical University, Sapporo, Japan
- Rei Enatsu
- Department of Neurosurgery, Sapporo Medical University, Sapporo, Japan
- Seiichiro Imataka
- Department of Neurosurgery, Sapporo Medical University, Sapporo, Japan
- Aya Kanno
- Department of Neurosurgery, Sapporo Medical University, Sapporo, Japan
- Satoko Ochi
- Department of Neurosurgery, Sapporo Medical University, Sapporo, Japan
- Nobuhiro Mikuni
- Department of Neurosurgery, Sapporo Medical University, Sapporo, Japan
2.
Kim H, Kim JS, Chung CK. Visual Mental Imagery and Neural Dynamics of Sensory Substitution in the Blindfolded Subjects. Neuroimage 2024;295:120621. PMID: 38797383. DOI: 10.1016/j.neuroimage.2024.120621.
Abstract
Although one can recognize the environment through a soundscape that substitutes an auditory signal for vision, whether subjects perceive the soundscape as a visual or visual-like sensation has remained an open question. In this study, we investigated the hierarchical processes underlying the recruitment of visual areas by soundscape stimuli in blindfolded subjects. Twenty-two healthy subjects were repeatedly trained to recognize soundscape stimuli converted from the visual shape information of letters. An effective connectivity method, dynamic causal modeling (DCM), was employed to reveal how the brain is hierarchically organized to recognize soundscape stimuli. The visual mental imagery model generated the cortical source signals of five regions of interest better than the auditory bottom-up, cross-modal perception, and mixed models. Spectral couplings between brain areas in the visual mental imagery model were analyzed. Whereas within-frequency coupling was apparent in bottom-up processing, where sensory information is transmitted, cross-frequency coupling was prominent in top-down processing, corresponding to the expectation and interpretation of information. Sensory substitution in the brains of blindfolded subjects thus gave rise to visual mental imagery by combining bottom-up and top-down processing.
Affiliation(s)
- HongJune Kim
- Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea; Clinical Research Institute, Konkuk University Medical Center Seoul, Republic of Korea
| | - June Sic Kim
- Clinical Research Institute, Konkuk University Medical Center Seoul, Republic of Korea; Research Institute of Biomedical Science & Technology, Konkuk University, Seoul, Republic of Korea.
| | - Chun Kee Chung
- Dept. of Brain and Cognitive Sciences, Seoul National University, Seoul, Republic of Korea; Interdisciplinary Program in Neuroscience, Seoul National University, Seoul, Republic of Korea; Dept. of Neurosurgery, Seoul National University Hospital, Seoul, Republic of Korea; Neuroscience Research Institute, Seoul National University Medical Research Center, Seoul, Republic of Korea
3.
Chang YN, Chang TJ, Lin WF, Kuo CE, Shi YT, Lee HW. Modelling individual differences in reading using an optimised MikeNet simulator: the impact of reading instruction. Front Hum Neurosci 2024;18:1356483. PMID: 38974479. PMCID: PMC11224532. DOI: 10.3389/fnhum.2024.1356483.
Abstract
Reading is vital for acquiring knowledge, and studies have demonstrated that, among children with reading disabilities in English, phonology-focused interventions generally yield greater improvements than meaning-focused interventions. However, the effectiveness of reading instruction can vary among individuals. Among the various factors that shape reading skill, such as reading exposure and oral language skills, reading instruction is critical in facilitating children's development into skilled readers; it can significantly influence reading strategies and contribute to individual differences in reading. To investigate this assumption, we developed a computational model of reading with an optimised MikeNet simulator. In keeping with educational practices, the model underwent training with three different instructional methods: phonology-focused training, meaning-focused training, and balanced phonology-meaning training. We used semantic reliance (SR), a measure of the model's relative reliance on print-to-sound (orthography-to-phonology, OP) and print-to-meaning (orthography-to-semantics, OS) mappings under the different training conditions, as an indicator of individual differences in reading. The simulation results demonstrated a direct link between SR levels and the type of reading instruction. Additionally, SR scores predicted model performance in reading-aloud tasks: higher SR scores were correlated with more phonological errors and reduced phonological activation. These findings are consistent with data from both behavioral and neuroimaging studies and offer insights into the impact of instructional methods on reading behaviors, while revealing individual differences in reading and the importance of integrating OP and OS instruction approaches for beginning readers.
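The SR idea, how much of the model's reading-aloud output is driven via the print-to-meaning pathway relative to the print-to-sound pathway, can be conveyed with a toy ratio. The paper's actual SR computation over MikeNet activations is more involved; this sketch (with invented contribution values) only illustrates the concept:

```python
def semantic_reliance(sem_contrib, phon_contrib):
    """Toy SR score: share of the output driven via the print-to-meaning
    (OS) pathway relative to both pathways combined. A hypothetical
    simplification of the measure described in the abstract."""
    return sem_contrib / (sem_contrib + phon_contrib)

# A meaning-focused-trained model leans on the semantic pathway ...
print(semantic_reliance(0.8, 0.2))  # 0.8
# ... while a phonology-focused-trained model does not.
print(semantic_reliance(0.2, 0.8))  # 0.2
```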
Affiliation(s)
- Ya-Ning Chang
- Miin Wu School of Computing, National Cheng Kung University, Tainan, Taiwan
- Ting-Jung Chang
- Department of Computer Science, National Yang-Ming Chiao-Tung University, Hsinchu, Taiwan
- Wei-Fen Lin
- Miin Wu School of Computing, National Cheng Kung University, Tainan, Taiwan
- Ching-En Kuo
- Miin Wu School of Computing, National Cheng Kung University, Tainan, Taiwan
- Yu-Ting Shi
- Miin Wu School of Computing, National Cheng Kung University, Tainan, Taiwan
- Hung-Wei Lee
- Miin Wu School of Computing, National Cheng Kung University, Tainan, Taiwan
4.
Bellmann OT, Asano R. Neural correlates of musical timbre: an ALE meta-analysis of neuroimaging data. Front Neurosci 2024;18:1373232. PMID: 38952924. PMCID: PMC11215185. DOI: 10.3389/fnins.2024.1373232.
Abstract
Timbre is a central aspect of music that allows listeners to identify musical sounds and conveys musical emotion, but it also allows for the recognition of actions and is an important structuring property of music. The former functions are known to be implemented in a ventral auditory stream during musical timbre processing. The latter functions are commonly attributed to areas in a dorsal auditory processing stream in other musical domains, but the dorsal stream's involvement in musical timbre processing is so far unknown. To investigate whether musical timbre processing involves both dorsal and ventral auditory pathways, we carried out an activation likelihood estimation (ALE) meta-analysis of 18 experiments from 17 published neuroimaging studies of musical timbre perception. We identified consistent activations in Brodmann areas (BA) 41, 42, and 22 in the bilateral transverse temporal gyri, posterior superior temporal gyri, and planum temporale; in BA 40 of the bilateral inferior parietal lobe; in BA 13 in the bilateral posterior insula; and in BA 13 and 22 in the right anterior insula and superior temporal gyrus. The vast majority of the identified regions are associated with the dorsal and ventral auditory processing streams. We therefore propose to frame the processing of musical timbre in a dual-stream model. Moreover, the regions activated in timbre processing show similarities to the brain regions involved in processing several other fundamental aspects of music, indicating possible shared neural bases of musical timbre and other musical domains.
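The core of an ALE analysis can be sketched in a few lines: each reported activation focus is blurred into a Gaussian "modeled activation" map, and the ALE score at each voxel is the probabilistic union of those maps. Below is a 1-D toy version with invented foci; real ALE works in 3-D MNI space with sample-size-dependent kernels and permutation-based thresholding:

```python
import numpy as np

def modeled_activation(grid, focus, fwhm):
    """Gaussian 'modeled activation' map for one reported focus
    (1-D toy grid; real ALE uses 3-D brain space)."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))
    return np.exp(-((grid - focus) ** 2) / (2 * sigma ** 2))

def ale_map(grid, foci, fwhm=12.0):
    """ALE score = probabilistic union of per-focus maps:
    1 - prod_i(1 - MA_i)."""
    mas = [modeled_activation(grid, f, fwhm) for f in foci]
    return 1 - np.prod([1 - ma for ma in mas], axis=0)

# Two nearby (invented) foci and one isolated focus:
grid = np.linspace(-60, 60, 121)
scores = ale_map(grid, foci=[-10.0, -8.0, 35.0])
peak = grid[np.argmax(scores)]
print(peak)  # the peak lies in the cluster of nearby foci
```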
Affiliation(s)
- Rie Asano
- Systematic Musicology, Institute for Musicology, University of Cologne, Cologne, Germany
5.
Roswandowitz C, Kathiresan T, Pellegrino E, Dellwo V, Frühholz S. Cortical-striatal brain network distinguishes deepfake from real speaker identity. Commun Biol 2024;7:711. PMID: 38862808. PMCID: PMC11166919. DOI: 10.1038/s42003-024-06372-6.
Abstract
Deepfakes are viral ingredients of digital environments, and they can trick human cognition into misperceiving the fake as real. Here, we test how sensitively 25 participants accept or reject person identities as recreated in audio deepfakes. We generate high-quality voice identity clones from natural speakers using advanced deepfake technologies. During an identity-matching task, participants show intermediate performance with deepfake voices, indicating partial deception by, and partial resistance to, deepfake identity spoofing. On the brain level, univariate and multivariate analyses consistently reveal a central cortico-striatal network that decodes the vocal acoustic pattern and deepfake level (auditory cortex), as well as natural speaker identities (nucleus accumbens), which are valued for their social relevance. This network is embedded in a broader neural identity- and object-recognition network. Humans can thus be partly tricked by deepfakes, but the neurocognitive mechanisms identified during deepfake processing open windows for strengthening human resilience to fake information.
Affiliation(s)
- Claudia Roswandowitz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Thayabaran Kathiresan
- Centre for Neuroscience of Speech, University of Melbourne, Melbourne, Australia
- Redenlab, Melbourne, Australia
- Elisa Pellegrino
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Volker Dellwo
- Phonetics and Speech Sciences Group, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, Department of Psychology, University of Zurich, Zurich, Switzerland
- Neuroscience Centre Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland
- Department of Psychology, University of Oslo, Oslo, Norway
6.
Beach SD, Tang DL, Kiran S, Niziolek CA. Pars Opercularis Underlies Efferent Predictions and Successful Auditory Feedback Processing in Speech: Evidence From Left-Hemisphere Stroke. Neurobiol Lang 2024;5:454-483. PMID: 38911464. PMCID: PMC11192514. DOI: 10.1162/nol_a_00139.
Abstract
Hearing one's own speech allows for acoustic self-monitoring in real time. Left-hemisphere motor planning regions are thought to give rise to efferent predictions that can be compared to true feedback in sensory cortices, resulting in neural suppression commensurate with the degree of overlap between predicted and actual sensations. Sensory prediction errors thus serve as a possible mechanism of detection of deviant speech sounds, which can then feed back into corrective action, allowing for online control of speech acoustics. The goal of this study was to assess the integrity of this detection-correction circuit in persons with aphasia (PWA) whose left-hemisphere lesions may limit their ability to control variability in speech output. We recorded magnetoencephalography (MEG) while 15 PWA and age-matched controls spoke monosyllabic words and listened to playback of their utterances. From this, we measured speaking-induced suppression of the M100 neural response and related it to lesion profiles and speech behavior. Both speaking-induced suppression and cortical sensitivity to deviance were preserved at the group level in PWA. PWA with more spared tissue in pars opercularis had greater left-hemisphere neural suppression and greater behavioral correction of acoustically deviant pronunciations, whereas sparing of superior temporal gyrus was not related to neural suppression or acoustic behavior. In turn, PWA who made greater corrections had fewer overt speech errors in the MEG task. Thus, the motor planning regions that generate the efferent prediction are integral to performing corrections when that prediction is violated.
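At its core, speaking-induced suppression is a difference between the auditory response to playback and to self-produced speech. A minimal sketch of the index as a plain difference score; the amplitudes are invented, and the study's exact M100 measure and any normalization may differ:

```python
import numpy as np

def speaking_induced_suppression(m100_listen, m100_speak):
    """Suppression index: reduction of the M100 response when hearing
    one's own speech live versus hearing its playback. A simple
    difference score; the paper's normalization may differ."""
    return float(m100_listen - m100_speak)

# Illustrative M100 amplitudes (arbitrary units, not study data):
listen_amp = np.mean([52.0, 48.0, 50.0])  # playback trials
speak_amp = np.mean([41.0, 39.0, 40.0])   # speaking trials

sis = speaking_induced_suppression(listen_amp, speak_amp)
print(sis)  # 10.0 -> positive means suppression while speaking
```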
Affiliation(s)
- Ding-lan Tang
- Waisman Center, The University of Wisconsin–Madison
- Academic Unit of Human Communication, Development, and Information Sciences, University of Hong Kong, Hong Kong SAR, China
- Swathi Kiran
- Department of Speech, Language & Hearing Sciences, Boston University
- Caroline A. Niziolek
- Waisman Center, The University of Wisconsin–Madison
- Department of Communication Sciences and Disorders, The University of Wisconsin–Madison
7.
Rupp KM, Hect JL, Harford EE, Holt LL, Ghuman AS, Abel TJ. A hierarchy of processing complexity and timescales for natural sounds in human auditory cortex. bioRxiv [Preprint] 2024:2024.05.24.595822. PMID: 38826304. PMCID: PMC11142240. DOI: 10.1101/2024.05.24.595822.
Abstract
Efficient behavior is supported by humans' ability to rapidly recognize acoustically distinct sounds as members of a common category. Within auditory cortex, there are critical unanswered questions regarding the organization and dynamics of sound categorization. Here, we performed intracerebral recordings in the context of epilepsy surgery as 20 patient-participants listened to natural sounds. We built encoding models to predict neural responses using features of these sounds extracted from different layers within a sound-categorization deep neural network (DNN). This approach yielded highly accurate models of neural responses throughout auditory cortex. The complexity of a cortical site's representation (measured by the depth of the DNN layer that produced the best model) was closely related to its anatomical location, with shallow, middle, and deep layers of the DNN associated with core (primary auditory cortex), lateral belt, and parabelt regions, respectively. Smoothly varying gradients of representational complexity also existed within these regions, with complexity increasing along a posteromedial-to-anterolateral direction in core and lateral belt, and along posterior-to-anterior and dorsal-to-ventral dimensions in parabelt. When we estimated the time window over which each recording site integrates information, we found shorter integration windows in core relative to lateral belt and parabelt. Lastly, we found a relationship between the length of the integration window and the complexity of information processing within core (but not lateral belt or parabelt). These findings suggest that hierarchies of timescales and processing complexity, and their interrelationship, represent a functional organizational principle of the auditory stream that underlies our perception of complex, abstract auditory information.
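The layer-matching logic of this encoding-model analysis can be sketched with synthetic data: fit one regularized linear model per DNN layer and take the depth of the best-predicting layer as the site's representational complexity. Everything below (feature sizes, ridge penalty, train/test split, the layer names) is an invented stand-in for the paper's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for per-sound features from three DNN layers and one
# recording site's response (real features would come from a trained
# sound-categorization network).
n_sounds, n_feat = 200, 50
layers = {d: rng.normal(size=(n_sounds, n_feat))
          for d in ("shallow", "middle", "deep")}
true_w = rng.normal(size=n_feat)
# Make this simulated site's response depend on the "deep" layer.
response = layers["deep"] @ true_w + rng.normal(scale=0.5, size=n_sounds)

def ridge_fit_predict(X_tr, y_tr, X_te, alpha=1.0):
    # Closed-form ridge regression: w = (X'X + aI)^-1 X'y
    w = np.linalg.solve(X_tr.T @ X_tr + alpha * np.eye(X_tr.shape[1]),
                        X_tr.T @ y_tr)
    return X_te @ w

# Fit one encoding model per layer; the layer whose features best predict
# held-out responses indexes the site's representational complexity.
train, test = slice(0, 150), slice(150, 200)
scores = {}
for depth, X in layers.items():
    pred = ridge_fit_predict(X[train], response[train], X[test])
    scores[depth] = np.corrcoef(pred, response[test])[0, 1]

best = max(scores, key=scores.get)
print(best)  # "deep" for this simulated site
```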
Affiliation(s)
- Kyle M. Rupp
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Jasmine L. Hect
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Emily E. Harford
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Lori L. Holt
- Department of Psychology, The University of Texas at Austin, Austin, Texas, United States of America
- Avniel Singh Ghuman
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Taylor J. Abel
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, Pennsylvania, United States of America
8.
Clarke S, Da Costa S, Crottaz-Herbette S. Dual Representation of the Auditory Space. Brain Sci 2024;14:535. PMID: 38928534. PMCID: PMC11201621. DOI: 10.3390/brainsci14060535.
Abstract
Auditory spatial cues contribute to two distinct functions: one leads to explicit localization of sound sources, and the other provides a location-linked representation of sound objects. Behavioral and imaging studies have demonstrated right-hemispheric dominance for explicit sound localization. An early clinical case study documented the dissociation between explicit sound localization, which was heavily impaired, and the fully preserved use of spatial cues for sound object segregation. The latter involves location-linked encoding of sound objects. We review here the evidence pertaining to brain regions involved in the location-linked representation of sound objects. Auditory evoked potential (AEP) and functional magnetic resonance imaging (fMRI) studies investigated this aspect by comparing the encoding of individual sound objects that changed their locations or remained stationary. A systematic search identified 1 AEP and 12 fMRI studies. Together with studies of the anatomical correlates of impaired spatial-cue-based sound object segregation after focal brain lesions, the present evidence indicates that the location-linked representation of sound objects strongly involves the left hemisphere and, to a lesser degree, the right hemisphere. Location-linked encoding of sound objects is present in several early-stage auditory areas and in the specialized temporal voice area. In these regions, emotional valence benefits from location-linked encoding as well.
Affiliation(s)
- Stephanie Clarke
- Neuropsychology and Neurorehabilitation Service, Centre Hospitalier Universitaire Vaudois (CHUV), University of Lausanne, Av. Pierre-Decker 5, 1011 Lausanne, Switzerland
9.
Yu L, Dugan P, Doyle W, Devinsky O, Friedman D, Flinker A. A left-lateralized dorsolateral prefrontal network for naming. bioRxiv [Preprint] 2024:2024.05.15.594403. PMID: 38798614. PMCID: PMC11118423. DOI: 10.1101/2024.05.15.594403.
Abstract
The ability to connect the form and meaning of a concept, known as word retrieval, is fundamental to human communication. While various input modalities can lead to identical word retrieval, the exact neural dynamics supporting this convergence during everyday auditory discourse remain poorly understood. Here, we leveraged neurosurgical electrocorticographic (ECoG) recordings from 48 patients and dissociated two key language networks integral to word retrieval that highly overlap in time and space. Using unsupervised temporal clustering techniques, we found a semantic processing network located in the middle and inferior frontal gyri. This network was distinct from an articulatory planning network in the inferior frontal and precentral gyri, which was agnostic to input modality. Functionally, we confirmed that the semantic processing network encodes word surprisal during sentence perception. Our findings characterize how humans integrate ongoing auditory semantic information over time, a critical linguistic function from passive comprehension to daily discourse.
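The idea behind the temporal clustering step, grouping recording sites by the shape of their response time courses rather than by location, can be sketched with a minimal 2-means on synthetic electrodes. The profiles, noise level, and initialization below are invented, and the paper's actual clustering method may differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic electrode-by-time matrix: 10 "early" and 10 "late" sites,
# toy stand-ins for distinct temporal dynamics (e.g., semantic vs.
# articulatory-planning responses).
t = np.linspace(0, 1, 100)
early = np.exp(-((t - 0.3) ** 2) / 0.005)
late = np.exp(-((t - 0.7) ** 2) / 0.005)
X = np.vstack([early + 0.1 * rng.normal(size=t.size) for _ in range(10)]
              + [late + 0.1 * rng.normal(size=t.size) for _ in range(10)])

def two_means(X, iters=50):
    """Minimal 2-means over time courses, farthest-point initialization."""
    c0 = X[0]
    c1 = X[np.argmax(((X - c0) ** 2).sum(axis=1))]
    centers = np.stack([c0, c1])
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None]) ** 2).sum(axis=-1)
        labels = dists.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):  # guard against an empty cluster
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = two_means(X)
print(labels[:10], labels[10:])  # the two temporal profiles separate
```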
Affiliation(s)
- Leyao Yu
- Department of Biomedical Engineering, New York University, New York, NY 10016, United States
- Department of Neurology, School of Medicine, New York University, New York, NY 10016, United States
- Patricia Dugan
- Department of Neurology, School of Medicine, New York University, New York, NY 10016, United States
- Werner Doyle
- Department of Neurosurgery, School of Medicine, New York University, New York, NY 10016, United States
- Orrin Devinsky
- Department of Neurology, School of Medicine, New York University, New York, NY 10016, United States
- Daniel Friedman
- Department of Neurology, School of Medicine, New York University, New York, NY 10016, United States
- Adeen Flinker
- Department of Biomedical Engineering, New York University, New York, NY 10016, United States
- Department of Neurology, School of Medicine, New York University, New York, NY 10016, United States
10.
van der Heijden K, Patel P, Bickel S, Herrero JL, Mehta AD, Mesgarani N. Joint population coding and temporal coherence link an attended talker's voice and location features in naturalistic multi-talker scenes. bioRxiv [Preprint] 2024:2024.05.13.593814. PMID: 38798551. PMCID: PMC11118436. DOI: 10.1101/2024.05.13.593814.
Abstract
Listeners readily extract multi-dimensional auditory objects such as a 'localized talker' from complex acoustic scenes with multiple talkers. Yet, the neural mechanisms underlying the simultaneous encoding and linking of different sound features, for example, a talker's voice and location, are poorly understood. We analyzed invasive intracranial recordings in neurosurgical patients attending to a localized talker in real-life cocktail party scenarios. We found that sensitivity to an individual talker's voice and location features was distributed throughout auditory cortex and that neural sites exhibited a gradient from sensitivity to a single feature to joint sensitivity to both features. On a population level, cortical response patterns of both dual-feature sensitive sites and single-feature sensitive sites revealed simultaneous encoding of an attended talker's voice and location features. However, for single-feature sensitive sites, the representation of the primary feature was more precise. Further, sites that selectively tracked an attended speech stream concurrently encoded the attended talker's voice and location features, indicating that such sites combine selective tracking of an attended auditory object with encoding of the object's features. Finally, we found that attending to a localized talker selectively enhanced temporal coherence between single-feature voice-sensitive sites and single-feature location-sensitive sites, providing an additional mechanism for linking voice and location in multi-talker scenes. These results demonstrate that a talker's voice and location features are linked during multi-dimensional object formation in naturalistic multi-talker scenes by joint population coding as well as by temporal coherence between neural sites.
SIGNIFICANCE STATEMENT: Listeners effortlessly extract auditory objects from complex acoustic scenes consisting of multiple sound sources in naturalistic, spatial sound scenes. Yet, how the brain links different sound features to form a multi-dimensional auditory object is poorly understood. We investigated how neural responses encode and integrate an attended talker's voice and location features in spatial multi-talker sound scenes to elucidate which neural mechanisms underlie the simultaneous encoding and linking of different auditory features. Our results show that joint population coding as well as temporal coherence mechanisms contribute to distributed multi-dimensional auditory object encoding. These findings shed new light on cortical functional specialization and multi-dimensional auditory object formation in complex, naturalistic listening scenes.
HIGHLIGHTS:
- Cortical responses to a single talker exhibit a distributed gradient, ranging from sites sensitive to both a talker's voice and location (dual-feature sensitive sites) to sites sensitive to either voice or location (single-feature sensitive sites).
- Population response patterns of dual-feature sensitive sites encode the voice and location features of the attended talker in multi-talker scenes jointly and with equal precision.
- Although individual single-feature sensitive sites respond to only one feature, their population response patterns also encode a talker's location and voice features jointly, with higher precision for the feature they are primarily sensitive to.
- Neural sites that selectively track an attended speech stream concurrently encode the attended talker's voice and location features.
- Attention selectively enhances temporal coherence between voice-selective and location-selective sites over time.
- Joint population coding and temporal coherence mechanisms together underlie distributed multi-dimensional auditory object encoding in auditory cortex.
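Computationally, "temporal coherence between sites" is spectral coherence between two response time series. A toy sketch using two noisy signals that share a common rhythmic component; all frequencies, durations, and noise levels here are invented:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 1000.0                      # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)

# Toy signals for a voice-sensitive and a location-sensitive site that
# share a 7 Hz component (a stand-in for attention-enhanced coupling).
shared = np.sin(2 * np.pi * 7 * t)
voice_site = shared + rng.normal(size=t.size)
location_site = shared + rng.normal(size=t.size)

# Magnitude-squared coherence as a function of frequency (0..1).
f, Cxy = coherence(voice_site, location_site, fs=fs, nperseg=1024)
peak_freq = f[np.argmax(Cxy)]
print(round(peak_freq, 1))  # high coherence near the shared 7 Hz rhythm
```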
11.
Baciu M, Roger E. Finding the Words: How Does the Aging Brain Process Language? A Focused Review of Brain Connectivity and Compensatory Pathways. Top Cogn Sci 2024. PMID: 38734967. DOI: 10.1111/tops.12736.
Abstract
As people age, there is a natural decline in cognitive functioning and brain structure. However, the relationship between brain function and cognition in older adults is neither straightforward nor uniform. Instead, it is complex, influenced by multiple factors, and can vary considerably from one person to another. Reserve, compensation, and maintenance mechanisms may help explain why some older adults maintain high levels of performance while others struggle. These mechanisms are usually studied in relation to memory and executive functions, which are particularly sensitive to the effects of aging. However, language abilities can also be affected by age, with changes in production fluency. The impact of brain changes on language abilities needs further investigation to understand the dynamics and patterns of aging, especially successful aging. We previously modeled several compensatory profiles of language production and lexical access/retrieval in aging within the Lexical Access and Retrieval in Aging (LARA) model. In the present paper, we propose an extended version of the LARA model, called LARA-Connectivity (LARA-C), incorporating recent evidence on brain connectivity. Finally, we discuss factors that may influence the strategies implemented with aging. The LARA-C model can serve as a framework for understanding individual performance and opens avenues for possible personalized interventions.
Affiliation(s)
- Monica Baciu
- LPNC, Psychology Department, Grenoble Alps University
- Neurology Department, Grenoble Alps University Hospital
| | - Elise Roger
- LPNC, Psychology Department, Grenoble Alps University
- Communication and Aging Laboratory, Research Center of the University Institute of Geriatrics of Montreal
- Faculty of Medicine, University of Montreal
12.
Gelens F, Äijälä J, Roberts L, Komatsu M, Uran C, Jensen MA, Miller KJ, Ince RAA, Garagnani M, Vinck M, Canales-Johnson A. Distributed representations of prediction error signals across the cortical hierarchy are synergistic. Nat Commun 2024;15:3941. PMID: 38729937. PMCID: PMC11087548. DOI: 10.1038/s41467-024-48329-7.
Abstract
A relevant question concerning inter-areal communication in the cortex is whether these interactions are synergistic. Synergy refers to the complementary effect of multiple brain signals conveying more information together than the sum of each isolated signal. Redundancy, on the other hand, refers to the common information shared between brain signals. Here, we dissociated cortical interactions encoding complementary information (synergy) from those sharing common information (redundancy) during prediction error (PE) processing. We analyzed auditory and frontal electrocorticography (ECoG) signals in five awake common marmosets performing two distinct auditory oddball tasks and investigated to what extent event-related potentials (ERPs) and broadband (BB) dynamics encoded synergistic and redundant information about PE processing. The information conveyed by ERPs and BB signals was synergistic even at lower stages of the hierarchy in the auditory cortex, as well as between auditory and frontal regions. Using a brain-constrained neural network, we simulated the synergy and redundancy observed in the experimental results and demonstrated that the emergence of synergy between auditory and frontal regions requires strong long-distance feedback and feedforward connections. These results indicate that distributed representations of PE signals across the cortical hierarchy can be highly synergistic.
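The synergy-versus-redundancy contrast can be made concrete with co-information, a simple signed quantity: I(X;Z) + I(Y;Z) - I(X,Y;Z) is negative when two signals X and Y jointly carry more information about Z than the sum of their individual contributions. The paper uses a fuller information-theoretic framework; this toy discrete estimator only illustrates the sign convention, with XOR as the textbook synergy example:

```python
import numpy as np
from collections import Counter

def entropy(samples):
    """Shannon entropy (bits) of a sequence of hashable symbols."""
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

def co_information(xs, ys, zs):
    """I(X;Z) + I(Y;Z) - I(X,Y;Z): negative values indicate that X and Y
    carry synergistic information about Z."""
    return (mutual_info(xs, zs) + mutual_info(ys, zs)
            - mutual_info(list(zip(xs, ys)), zs))

# XOR: neither input alone predicts the output, but together they
# determine it exactly -> co-information close to -1 bit (pure synergy).
rng = np.random.default_rng(3)
x = rng.integers(0, 2, 4000)
y = rng.integers(0, 2, 4000)
z = x ^ y
print(round(co_information(x.tolist(), y.tolist(), z.tolist()), 2))  # ≈ -1.0
```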
Affiliation(s)
- Frank Gelens
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, 1018 WT, Amsterdam, The Netherlands
- Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Juho Äijälä
- Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Louis Roberts
- Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Department of Computing, Goldsmiths, University of London, SE14 6NW, London, UK
- Misako Komatsu
- Laboratory for Haptic Perception and Cognitive Physiology, RIKEN Brain Science Institute, Saitama, 351-0198, Japan
- Cem Uran
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525, Nijmegen, The Netherlands
- Michael A Jensen
- Department of Neurosurgery, Mayo Clinic, Rochester, MN, 55905, USA
- Kai J Miller
- Department of Neurosurgery, Mayo Clinic, Rochester, MN, 55905, USA
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, G12 8QB, Scotland, UK
- Max Garagnani
- Department of Computing, Goldsmiths, University of London, SE14 6NW, London, UK
- Brain Language Lab, Freie Universität Berlin, 14195, Berlin, Germany
- Martin Vinck
- Ernst Strüngmann Institute (ESI) for Neuroscience in Cooperation with Max Planck Society, 60528, Frankfurt am Main, Germany
- Donders Centre for Neuroscience, Department of Neuroinformatics, Radboud University Nijmegen, 6525, Nijmegen, The Netherlands
- Andres Canales-Johnson
- Department of Psychology, University of Cambridge, CB2 3EB, Cambridge, UK
- Neuropsychology and Cognitive Neurosciences Research Center, Faculty of Health Sciences, Universidad Católica del Maule, 3460000, Talca, Chile
13
Ullman MT, Clark GM, Pullman MY, Lovelett JT, Pierpont EI, Jiang X, Turkeltaub PE. The neuroanatomy of developmental language disorder: a systematic review and meta-analysis. Nat Hum Behav 2024; 8:962-975. PMID: 38491094; DOI: 10.1038/s41562-024-01843-6.
Abstract
Developmental language disorder (DLD) is a common neurodevelopmental disorder with adverse impacts that continue into adulthood. However, its neural bases remain unclear. Here we address this gap by systematically identifying and quantitatively synthesizing neuroanatomical studies of DLD using co-localization likelihood estimation, a recently developed neuroanatomical meta-analytic technique. Analyses of structural brain data (22 peer-reviewed papers, 577 participants) revealed highly consistent anomalies only in the basal ganglia (100% of participant groups in which this structure was examined, weighted by group sample sizes; 99.8% permutation-based likelihood the anomaly clustering was not due to chance). These anomalies were localized specifically to the anterior neostriatum (again 100% weighted proportion and 99.8% likelihood). As expected given the task dependence of activation, functional neuroimaging data (11 peer-reviewed papers, 414 participants) yielded less consistency, though anomalies again occurred primarily in the basal ganglia (79.0% and 95.1%). Multiple sensitivity analyses indicated that the patterns were robust. The meta-analyses elucidate the neuroanatomical signature of DLD, and implicate the basal ganglia in particular. The findings support the procedural circuit deficit hypothesis of DLD, have basic research and translational implications for the disorder, and advance our understanding of the neuroanatomy of language.
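The permutation-based chance likelihood reported above (e.g. 99.8% likelihood that the anomaly clustering was not due to chance) can be illustrated with a generic Monte-Carlo test of anomaly clustering. This is a simplified stand-in, not the co-localization likelihood estimation method itself; `region_weight` and the example counts below are hypothetical.

```python
import random

def anomaly_clustering_p(observed_count, n_anomalies, region_weight,
                         n_perm=10_000, seed=0):
    """Monte-Carlo analogue of a permutation test: if n_anomalies fell into
    a region purely by chance (each landing there with probability
    region_weight, e.g. the region's share of the candidate structures), how
    often would we see >= observed_count anomalies in that region?
    Returns a one-sided p-value with the standard +1 correction."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        count = sum(rng.random() < region_weight for _ in range(n_anomalies))
        if count >= observed_count:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Hypothetical: all 22 groups show an anomaly in a region that covers only
# ~5% of the candidate structures -- chance essentially never produces that.
p = anomaly_clustering_p(observed_count=22, n_anomalies=22, region_weight=0.05)
print(p)  # ~1e-4, i.e. clustering very unlikely under the null
```

A high "likelihood not due to chance" in the abstract corresponds to a small p-value here (likelihood ≈ 1 - p).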
Affiliation(s)
- Michael T Ullman
- Brain and Language Laboratory, Department of Neuroscience, Georgetown University, Washington DC, USA
- Gillian M Clark
- Cognitive Neuroscience Unit, School of Psychology, Deakin University, Geelong, Victoria, Australia
- Mariel Y Pullman
- Brain and Language Laboratory, Department of Neuroscience, Georgetown University, Washington DC, USA
- Mount Sinai Beth Israel, New York, NY, USA
- Jarrett T Lovelett
- Brain and Language Laboratory, Department of Neuroscience, Georgetown University, Washington DC, USA
- Department of Psychology, University of California, San Diego, La Jolla, CA, USA
- Elizabeth I Pierpont
- Department of Pediatrics, University of Minnesota Medical Center, Minneapolis, MN, USA
- Xiong Jiang
- Department of Neuroscience, Georgetown University, Washington DC, USA
- Peter E Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University, Washington DC, USA
- Research Division, MedStar National Rehabilitation Network, Washington DC, USA
14
Tomasello R, Carriere M, Pulvermüller F. The impact of early and late blindness on language and verbal working memory: A brain-constrained neural model. Neuropsychologia 2024; 196:108816. PMID: 38331022; DOI: 10.1016/j.neuropsychologia.2024.108816.
Abstract
Neural circuits related to language exhibit a remarkable ability to reorganize and adapt in response to visual deprivation. In particular, early and late blindness induce distinct neuroplastic changes in the visual cortex, repurposing it for language and semantic processing. Interestingly, these functional changes produce a unique cognitive advantage: enhanced verbal working memory, particularly in early blindness. Yet the underlying neuromechanisms and the impact on language- and memory-related circuits remain not fully understood. Here, we applied a brain-constrained neural network mimicking the structural and functional features of the frontotemporal-occipital cortices to model conceptual acquisition in early and late blindness. The results revealed differential expansion of concept-related neural circuits into deprived visual areas depending on the timing of visual loss, an expansion most prominent in early blindness. This neural recruitment is fundamentally governed by the biological principles of neural circuit expansion and the absence of uncorrelated sensory input. Critically, the degree of these changes is constrained by the availability of neural matter previously allocated to visual experiences, as in the case of late blindness. Moreover, we shed light on the implications of visual deprivation for the neural underpinnings of verbal working memory, revealing longer reverberatory neural activity in 'blind' models compared with sighted ones. These findings provide a better understanding of the interplay between visual deprivation, neuroplasticity, language processing, and verbal working memory.
Affiliation(s)
- Rosario Tomasello
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, 14195, Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, 10099, Berlin, Germany
- Maxime Carriere
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, 14195, Berlin, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, WE4 Freie Universität Berlin, 14195, Berlin, Germany; Cluster of Excellence 'Matters of Activity. Image Space Material', Humboldt Universität zu Berlin, 10099, Berlin, Germany; Berlin School of Mind and Brain, Humboldt Universität zu Berlin, 10117, Berlin, Germany; Einstein Center for Neurosciences, 10117, Berlin, Germany
15
Viswanathan V, Rupp KM, Hect JL, Harford EE, Holt LL, Abel TJ. Intracranial Mapping of Response Latencies and Task Effects for Spoken Syllable Processing in the Human Brain. bioRxiv [Preprint] 2024:2024.04.05.588349. PMID: 38617227; PMCID: PMC11014624; DOI: 10.1101/2024.04.05.588349.
Abstract
Prior lesion, noninvasive-imaging, and intracranial-electroencephalography (iEEG) studies have documented hierarchical, parallel, and distributed characteristics of human speech processing. Yet, there have not been direct, intracranial observations of the latency with which regions outside the temporal lobe respond to speech, or how these responses are impacted by task demands. We leveraged human intracranial recordings via stereo-EEG to measure responses from diverse forebrain sites during (i) passive listening to /bi/ and /pi/ syllables, and (ii) active listening requiring /bi/-versus-/pi/ categorization. We find that neural response latency increases from a few tens of ms in Heschl's gyrus (HG) to several tens of ms in superior temporal gyrus (STG), superior temporal sulcus (STS), and early parietal areas, and hundreds of ms in later parietal areas, insula, frontal cortex, hippocampus, and amygdala. These data also suggest parallel flow of speech information dorsally and ventrally, from HG to parietal areas and from HG to STG and STS, respectively. Latency data also reveal areas in parietal cortex, frontal cortex, hippocampus, and amygdala that are not responsive to the stimuli during passive listening but are responsive during categorization. Furthermore, multiple regions-spanning auditory, parietal, frontal, and insular cortices, and hippocampus and amygdala-show greater neural response amplitudes during active versus passive listening (a task-related effect). Overall, these results are consistent with hierarchical processing of speech at a macro level and parallel streams of information flow in temporal and parietal regions. These data also reveal regions where the speech code is stimulus-faithful and those that encode task-relevant representations.
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Kyle M. Rupp
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Jasmine L. Hect
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Emily E. Harford
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Lori L. Holt
- Department of Psychology, The University of Texas at Austin, Austin, TX 78712
- Taylor J. Abel
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA 15260
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA 15238
16
Whittaker HT, Khayyat L, Fortier-Lavallée J, Laverdière M, Bélanger C, Zatorre RJ, Albouy P. Information-based rhythmic transcranial magnetic stimulation to accelerate learning during auditory working memory training: a proof-of-concept study. Front Neurosci 2024; 18:1355565. PMID: 38638697; PMCID: PMC11024337; DOI: 10.3389/fnins.2024.1355565.
Abstract
Introduction: Rhythmic transcranial magnetic stimulation (rhTMS) has been shown to enhance auditory working memory manipulation, specifically by boosting theta oscillatory power in the dorsal auditory pathway during task performance. It remains unclear whether these enhancements (i) persist beyond the period of stimulation, (ii) can accelerate learning, and (iii) accumulate over several days of stimulation. In the present study, we investigated the lasting behavioral and electrophysiological effects of applying rhTMS over the left intraparietal sulcus (IPS) throughout the course of seven sessions of cognitive training on an auditory working memory task. Methods: A limited sample of 14 neurologically healthy participants took part in the training protocol with an auditory working memory task while being stimulated with either theta (5 Hz) rhTMS or sham TMS. Electroencephalography (EEG) was recorded before, throughout five training sessions, and after the end of training to assess the effects of rhTMS on behavioral performance and on oscillatory entrainment of the dorsal auditory network. Results: We show that this combined approach enhances theta oscillatory activity within the fronto-parietal network and improves auditory working memory performance. Compared to individuals who received sham stimulation, cognitive training can be accelerated when combined with optimized rhTMS, and task performance benefits can outlast the training period by ∼3 days. Furthermore, theta oscillatory power within the recruited dorsal auditory network increases during training, and sustained EEG changes can be observed ∼3 days following stimulation. Discussion: The present study, while underpowered for definitive statistical analyses, serves to improve our understanding of the causal dynamic interactions supporting auditory working memory.
Our results constitute an important proof of concept for the potential translational impact of non-invasive brain stimulation protocols and provide preliminary data for developing optimized rhTMS and training protocols that could be implemented in clinical populations.
Affiliation(s)
- Heather T. Whittaker
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - Centre for Research on Brain Language and Music (CRBLM), Montreal, QC, Canada
- Lina Khayyat
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- Megan Laverdière
- CERVO Brain Research Centre, School of Psychology, Université Laval, Québec City, QC, Canada
- Carole Bélanger
- CERVO Brain Research Centre, School of Psychology, Université Laval, Québec City, QC, Canada
- Robert J. Zatorre
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - Centre for Research on Brain Language and Music (CRBLM), Montreal, QC, Canada
- Philippe Albouy
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montréal, QC, Canada
- International Laboratory for Brain, Music and Sound Research (BRAMS) - Centre for Research on Brain Language and Music (CRBLM), Montreal, QC, Canada
- CERVO Brain Research Centre, School of Psychology, Université Laval, Québec City, QC, Canada
17
Kim SG, De Martino F, Overath T. Linguistic modulation of the neural encoding of phonemes. Cereb Cortex 2024; 34:bhae155. PMID: 38687241; PMCID: PMC11059272; DOI: 10.1093/cercor/bhae155.
Abstract
Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show that (i) the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) for this modulation, both acoustic and phonetic information need to be incorporated. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
Affiliation(s)
- Seung-Goo Kim
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Federico De Martino
- Faculty of Psychology and Neuroscience, University of Maastricht, Universiteitssingel 40, 6229 ER Maastricht, Netherlands
- Tobias Overath
- Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Duke Institute for Brain Sciences, Duke University, 308 Research Dr, Durham, NC 27708, United States
- Center for Cognitive Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
18
McMullin MA, Kumar R, Higgins NC, Gygi B, Elhilali M, Snyder JS. Preliminary Evidence for Global Properties in Human Listeners During Natural Auditory Scene Perception. Open Mind (Camb) 2024; 8:333-365. PMID: 38571530; PMCID: PMC10990578; DOI: 10.1162/opmi_a_00131.
Abstract
Theories of auditory and visual scene analysis suggest the perception of scenes relies on the identification and segregation of objects within them, resembling a detail-oriented processing style. However, a more global process may occur while analyzing scenes, as has been evidenced in the visual domain. To our knowledge, a similar line of research has not been explored in the auditory domain; therefore, we evaluated the contributions of high-level global and low-level acoustic information to auditory scene perception. An additional aim was to increase the field's ecological validity by using and making available a new collection of high-quality auditory scenes. Participants rated scenes on 8 global properties (e.g., open vs. enclosed), and an acoustic analysis evaluated which low-level features predicted the ratings. We submitted the acoustic measures and average ratings of the global properties to separate exploratory factor analyses (EFAs). The EFA of the acoustic measures revealed a seven-factor structure explaining 57% of the variance in the data, while the EFA of the global property measures revealed a two-factor structure explaining 64% of the variance in the data. Regression analyses revealed each global property was predicted by at least one acoustic variable (R2 = 0.33-0.87). These findings were extended using deep neural network models, where we examined correlations between human ratings of global properties and deep embeddings of two computational models: an object-based model and a scene-based model. The results support that participants' ratings are more strongly explained by a global analysis of the scene setting, though the relationship between scene perception and auditory perception is multifaceted, with differing correlation patterns evident between the two models. Taken together, our results provide evidence for the ability to perceive auditory scenes from a global perspective.
Some of the acoustic measures predicted ratings of global scene perception, suggesting representations of auditory objects may be transformed through many stages of processing in the ventral auditory stream, similar to what has been proposed in the ventral visual stream. These findings and the open availability of our scene collection will make future studies on perception, attention, and memory for natural auditory scenes possible.
Affiliation(s)
- Rohit Kumar
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Nathan C. Higgins
- Department of Communication Sciences & Disorders, University of South Florida, Tampa, FL, USA
- Brian Gygi
- East Bay Institute for Research and Education, Martinez, CA, USA
- Mounya Elhilali
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA
- Joel S. Snyder
- Department of Psychology, University of Nevada, Las Vegas, Las Vegas, NV, USA
19
Hullett PW, Leonard MK, Gorno-Tempini ML, Mandelli ML, Chang EF. Parallel Encoding of Speech in Human Frontal and Temporal Lobes. bioRxiv [Preprint] 2024:2024.03.19.585648. PMID: 38562883; PMCID: PMC10983886; DOI: 10.1101/2024.03.19.585648.
Abstract
Models of speech perception are centered around a hierarchy in which auditory representations in the thalamus propagate to primary auditory cortex, then to the lateral temporal cortex, and finally through dorsal and ventral pathways to sites in the frontal lobe. However, evidence for short latency speech responses and low-level spectrotemporal representations in frontal cortex raises the question of whether speech-evoked activity in frontal cortex strictly reflects downstream processing from lateral temporal cortex or whether there are direct parallel pathways from the thalamus or primary auditory cortex to the frontal lobe that supplement the traditional hierarchical architecture. Here, we used high-density direct cortical recordings, high-resolution diffusion tractography, and hemodynamic functional connectivity to evaluate for evidence of direct parallel inputs to frontal cortex from low-level areas. We found that neural populations in the frontal lobe show speech-evoked responses that are synchronous or occur earlier than responses in the lateral temporal cortex. These short latency frontal lobe neural populations encode spectrotemporal speech content indistinguishable from spectrotemporal encoding patterns observed in the lateral temporal lobe, suggesting parallel auditory speech representations reaching temporal and frontal cortex simultaneously. This is further supported by white matter tractography and functional connectivity patterns that connect the auditory nucleus of the thalamus (medial geniculate body) and the primary auditory cortex to the frontal lobe. Together, these results support the existence of a robust pathway of parallel inputs from low-level auditory areas to frontal lobe targets and illustrate long-range parallel architecture that works alongside the classical hierarchical speech network model.
20
Nourski KV, Steinschneider M, Rhone AE, Dappen ER, Kawasaki H, Howard MA. Processing of auditory novelty in human cortex during a semantic categorization task. Hear Res 2024; 444:108972. PMID: 38359485; PMCID: PMC10984345; DOI: 10.1016/j.heares.2024.108972.
Abstract
Auditory semantic novelty - a new meaningful sound in the context of a predictable acoustical environment - can probe neural circuits involved in language processing. Aberrant novelty detection is a feature of many neuropsychiatric disorders. This large-scale human intracranial electrophysiology study examined the spatial distribution of gamma and alpha power and auditory evoked potentials (AEP) associated with responses to unexpected words during performance of semantic categorization tasks. Participants were neurosurgical patients undergoing monitoring for medically intractable epilepsy. Each task included repeatedly presented monosyllabic words from different talkers ("common") and ten words presented only once ("novel"). Targets were words belonging to a specific semantic category. Novelty effects were defined as differences between neural responses to novel and common words. Novelty increased task difficulty and was associated with augmented gamma, suppressed alpha power, and AEP differences broadly distributed across the cortex. Gamma novelty effect had the highest prevalence in planum temporale, posterior superior temporal gyrus (STG) and pars triangularis of the inferior frontal gyrus; alpha in anterolateral Heschl's gyrus (HG), anterior STG and middle anterior cingulate cortex; AEP in posteromedial HG, lower bank of the superior temporal sulcus, and planum polare. Gamma novelty effect had a higher prevalence in dorsal than ventral auditory-related areas. Novelty effects were more pronounced in the left hemisphere. Better novel target detection was associated with reduced gamma novelty effect within auditory cortex and enhanced gamma effect within prefrontal and sensorimotor cortex. Alpha and AEP novelty effects were generally more prevalent in better performing participants. Multiple areas, including auditory cortex on the superior temporal plane, featured AEP novelty effect within the time frame of P3a and N400 scalp-recorded novelty-related potentials. 
This work provides a detailed account of auditory novelty in a paradigm that directly examined brain regions associated with semantic processing. Future studies may aid in the development of objective measures to assess the integrity of semantic novelty processing in clinical populations.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Mitchell Steinschneider
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Departments of Neurology, Neuroscience, and Pediatrics, Albert Einstein College of Medicine, Bronx, NY 10461, United States
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Emily R Dappen
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA 52242, United States; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA 52242, United States
21
Wikman P, Salmela V, Sjöblom E, Leminen M, Laine M, Alho K. Attention to audiovisual speech shapes neural processing through feedback-feedforward loops between different nodes of the speech network. PLoS Biol 2024; 22:e3002534. PMID: 38466713; PMCID: PMC10957087; DOI: 10.1371/journal.pbio.3002534.
Abstract
Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet, the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome such limitations, we collected both electroencephalography (EEG) (high temporal resolution) and functional magnetic resonance imaging (fMRI) (high spatial resolution), while human participants selectively attended to speakers in audiovisual scenes containing overlapping cocktail party speech. To utilise the advantages of the respective techniques, we analysed neural tracking of speech using the EEG data and performed representational dissimilarity-based EEG-fMRI fusion. We observed that attention enhanced neural tracking and modulated EEG correlates throughout the latencies studied. Further, attention-related enhancement of neural tracking fluctuated in predictable temporal profiles. We discuss how such temporal dynamics could arise from a combination of interactions between attention and prediction as well as plastic properties of the auditory cortex. EEG-fMRI fusion revealed attention-related iterative feedforward-feedback loops between hierarchically organised nodes of the ventral auditory object related processing stream. Our findings support models where attention facilitates dynamic neural changes in the auditory cortex, ultimately aiding discrimination of relevant sounds from irrelevant ones while conserving neural resources.
Affiliation(s)
- Patrik Wikman
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Viljami Salmela
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Eetu Sjöblom
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Miika Leminen
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- AI and Analytics Unit, Helsinki University Hospital, Helsinki, Finland
- Matti Laine
- Department of Psychology, Åbo Akademi University, Turku, Finland
- Kimmo Alho
- Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
22
Shi Y, Li Y. The effective connectivity analysis of fMRI based on asymmetric detection of transfer brain entropy. Cereb Cortex 2024; 34:bhae070. PMID: 38466114; DOI: 10.1093/cercor/bhae070.
Abstract
It is important to explore causal relationships in functional magnetic resonance imaging studies. However, traditional effective connectivity analysis methods are prone to producing spurious causality, and their detection accuracy needs to be improved. In this paper, we introduce a novel functional magnetic resonance imaging effective connectivity method based on the asymmetric detection of transfer entropy, which quantifies the disparity in predictive information between forward and backward time and normalizes this disparity, establishing a more precise criterion for detecting causal relationships while reducing computational complexity. We then evaluate the effectiveness of this method on simulated data with different levels of nonlinearity; the results demonstrate that the proposed method outperforms other methods in detecting both linear and nonlinear causal relationships, including Granger Causality, Partial Granger Causality, Kernel Granger Causality, Copula Granger Causality, and traditional transfer entropy. Furthermore, we applied it to study the effective connectivity of brain functional activities in seafarers. The results showed significantly different causal relationships between brain regions in seafarers compared with non-seafarers, including the temporal lobe (sound and auditory information processing), the hippocampus (spatial navigation), the precuneus (emotion processing), and the supplementary motor area (Supp_Motor_Area; motor control and coordination), reflecting the occupational specificity of seafarers' brain function.
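The forward-versus-backward predictive-information idea can be sketched with a plug-in transfer entropy estimator on discretized series. This is a minimal illustration with history length 1, and the normalization in `te_asymmetry` is an assumed stand-in for exposition, not necessarily the normalization used in the paper.

```python
from collections import Counter
from math import log2

def _entropy(samples):
    """Plug-in Shannon entropy (bits) of a sequence of hashable outcomes."""
    n = len(samples)
    return -sum((c / n) * log2(c / n) for c in Counter(samples).values())

def transfer_entropy(src, dst):
    """TE(src -> dst) with history length 1:
    H(dst_t | dst_{t-1}) - H(dst_t | dst_{t-1}, src_{t-1})."""
    d_now, d_past, s_past = dst[1:], dst[:-1], src[:-1]
    return (_entropy(list(zip(d_now, d_past))) - _entropy(d_past)
            - _entropy(list(zip(d_now, d_past, s_past)))
            + _entropy(list(zip(d_past, s_past))))

def te_asymmetry(x, y):
    """Normalized forward/backward disparity in [-1, 1]; positive values
    suggest x drives y. (Hypothetical normalization, for illustration.)"""
    fwd, bwd = transfer_entropy(x, y), transfer_entropy(y, x)
    total = fwd + bwd
    return (fwd - bwd) / total if total else 0.0

# y is a one-step-delayed copy of x, so information flows x -> y:
x = [1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0]
y = x[-1:] + x[:-1]
```

On this toy pair, `transfer_entropy(x, y)` exceeds `transfer_entropy(y, x)`, so the asymmetry measure correctly points from the driving signal to the driven one; applying binning first would extend the same sketch to continuous fMRI time series.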
Affiliation(s)
- Yuhu Shi
- College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
- Yidan Li
- College of Information Engineering, Shanghai Maritime University, Shanghai 201306, China
23
Tolkacheva V, Brownsett SLE, McMahon KL, de Zubicaray GI. Perceiving and misperceiving speech: lexical and sublexical processing in the superior temporal lobes. Cereb Cortex 2024; 34:bhae087. [PMID: 38494418 PMCID: PMC10944697 DOI: 10.1093/cercor/bhae087] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/04/2023] [Revised: 02/15/2024] [Accepted: 02/16/2024] [Indexed: 03/19/2024] Open
Abstract
Listeners can use prior knowledge to predict the content of noisy speech signals, enhancing perception. However, this process can also elicit misperceptions. For the first time, we employed a prime-probe paradigm and transcranial magnetic stimulation to investigate causal roles for the left and right posterior superior temporal gyri (pSTG) in the perception and misperception of degraded speech. Listeners were presented with spectrotemporally degraded probe sentences preceded by a clear prime. To produce misperceptions, we created partially mismatched pseudo-sentence probes via homophonic nonword transformations (e.g. The little girl was excited to lose her first tooth-Tha fittle girmn wam expited du roos har derst cooth). Compared to a control site (vertex), inhibitory stimulation of the left pSTG selectively disrupted priming of real but not pseudo-sentences. Conversely, inhibitory stimulation of the right pSTG enhanced priming of misperceptions with pseudo-sentences, but did not influence perception of real sentences. These results indicate qualitatively different causal roles for the left and right pSTG in perceiving degraded speech, supporting bilateral models that propose engagement of the right pSTG in sublexical processing.
Affiliation(s)
- Valeriya Tolkacheva
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
- Sonia L E Brownsett
- Queensland Aphasia Research Centre, School of Health and Rehabilitation Sciences, University of Queensland, Surgical Treatment and Rehabilitation Services, Herston, Queensland, 4006, Australia
- Centre of Research Excellence in Aphasia Recovery and Rehabilitation, La Trobe University, Melbourne, Health Sciences Building 1, 1 Kingsbury Drive, Bundoora, Victoria, 3086, Australia
- Katie L McMahon
- Herston Imaging Research Facility, Royal Brisbane & Women’s Hospital, Building 71/918, Royal Brisbane & Women’s Hospital, Herston, Queensland, 4006, Australia
- Queensland University of Technology, School of Clinical Sciences and Centre for Biomedical Technologies, 60 Musk Avenue, Kelvin Grove, Queensland, 4059, Australia
- Greig I de Zubicaray
- Queensland University of Technology, School of Psychology and Counselling, O Block, Kelvin Grove, Queensland, 4059, Australia
24
Nguyen PTU, Henningsen-Schomers MR, Pulvermüller F. Causal Influence of Linguistic Learning on Perceptual and Conceptual Processing: A Brain-Constrained Deep Neural Network Study of Proper Names and Category Terms. J Neurosci 2024; 44:e1048232023. [PMID: 38253531 PMCID: PMC10904026 DOI: 10.1523/jneurosci.1048-23.2023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2023] [Revised: 12/01/2023] [Accepted: 12/06/2023] [Indexed: 01/24/2024] Open
Abstract
Language influences cognitive and conceptual processing, but the mechanisms through which such causal effects are realized in the human brain remain unknown. Here, we use a brain-constrained deep neural network model of category formation and symbol learning and analyze the emergent model's internal mechanisms at the neural circuit level. In one set of simulations, the network was presented with similar patterns of neural activity indexing instances of objects and actions belonging to the same categories. Biologically realistic Hebbian learning led to the formation of instance-specific neurons distributed across multiple areas of the network and, in addition, to cell assembly circuits of "shared" neurons responding to all category instances, the network correlates of conceptual categories. In two separate sets of simulations, the network learned the same patterns together with symbols for individual instances ["proper names" (PN)] or symbols related to classes of instances sharing common features ["category terms" (CT)]. Learning CT markedly increased the number of shared neurons in the network, thereby making category representations more robust, while reducing the number of instance-specific neurons. In contrast, PN learning prevented a substantial reduction of instance-specific neurons and blocked the overgrowth of category-general cells. Representational similarity analysis further confirmed that the neural activity patterns of category instances became more similar to each other after category-term learning, relative to both learning with PN and learning without any symbols. These network-based mechanisms for concepts, PN, and CT explain why and how symbol learning changes object perception and memory, as revealed by experimental studies.
Affiliation(s)
- Phuc T U Nguyen
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin 14195, Germany
- Malte R Henningsen-Schomers
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin 14195, Germany
- Cluster of Excellence "Matters of Activity Image Space Material", Humboldt-Universität zu Berlin, Berlin 10099, Germany
- Friedemann Pulvermüller
- Brain Language Laboratory, Department of Philosophy and Humanities, Freie Universität Berlin, Berlin 14195, Germany
- Cluster of Excellence "Matters of Activity Image Space Material", Humboldt-Universität zu Berlin, Berlin 10099, Germany
- Berlin School of Mind and Brain, Berlin 10099, Germany
- Einstein Center for Neurosciences, Berlin D-10117, Germany
25
Zhang Y, Shen SX, Bibic A, Wang X. Evolutionary continuity and divergence of auditory dorsal and ventral pathways in primates revealed by ultra-high field diffusion MRI. Proc Natl Acad Sci U S A 2024; 121:e2313831121. [PMID: 38377216 PMCID: PMC10907247 DOI: 10.1073/pnas.2313831121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2023] [Accepted: 01/22/2024] [Indexed: 02/22/2024] Open
Abstract
Auditory dorsal and ventral pathways in the human brain play important roles in supporting speech and language processing. However, the evolutionary root of the dual auditory pathways in the primate brain is unclear. By parcellating the auditory cortex of marmosets (a New World monkey species), macaques (an Old World monkey species), and humans using the same individual-based analysis method, and by tracking the pathways from the auditory cortex with multi-shell diffusion-weighted MRI (dMRI), homologous auditory dorsal and ventral fiber tracts were identified in these primate species. The ventral pathway was found to be well conserved in all three primate species analyzed but to extend to more anterior temporal regions in humans. In contrast, the dorsal pathway showed a divergence between monkey and human brains. First, frontal regions in the human brain have stronger connections to the higher-level auditory regions than to the lower-level auditory regions along the dorsal pathway, whereas frontal regions in the monkey brain show the opposite connection pattern. Second, left lateralization of the dorsal pathway is found only in humans. Moreover, the connectivity strength of the dorsal pathway in marmosets is more similar to that of humans than to that of macaques. These results demonstrate the continuity and divergence of the dual auditory pathways in primate brains along the evolutionary path, suggesting that the putative neural networks supporting human speech and language processing might have emerged early in primate evolution.
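The lateralization claim in this entry can be made concrete with the conventional lateralization index over left- and right-hemisphere connectivity strengths. The formula below is the standard definition; the paper's exact metric and the example strengths are assumptions for illustration.

```python
def lateralization_index(left_strength, right_strength):
    """Conventional LI = (L - R) / (L + R); LI > 0 means left-lateralized,
    LI = 0 means symmetric, LI < 0 means right-lateralized."""
    return (left_strength - right_strength) / (left_strength + right_strength)

# Hypothetical dorsal-pathway connectivity strengths
li_human = lateralization_index(0.8, 0.5)   # left-lateralized
li_monkey = lateralization_index(0.6, 0.6)  # symmetric
```

With these made-up numbers the human pathway yields a positive LI of about 0.23 while the symmetric case yields exactly zero, mirroring the qualitative pattern the abstract reports.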
Affiliation(s)
- Yang Zhang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Sherry Xinyi Shen
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Adnan Bibic
- Department of Radiology, Johns Hopkins University School of Medicine, Baltimore, MD 21205
- Kirby Research Center for Functional Brain Imaging, Kennedy Krieger Institute, F. M. Kirby Center, Baltimore, MD 21205
- Xiaoqin Wang
- Laboratory of Auditory Neurophysiology, Department of Biomedical Engineering, Johns Hopkins University School of Medicine, Baltimore, MD 21205
26
Fiorin G, Delfitto D. Syncopation as structure bootstrapping: the role of asymmetry in rhythm and language. Front Psychol 2024; 15:1304485. [PMID: 38440243 PMCID: PMC10911290 DOI: 10.3389/fpsyg.2024.1304485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2023] [Accepted: 01/22/2024] [Indexed: 03/06/2024] Open
Abstract
Syncopation - the occurrence of a musical event on a metrically weak position preceding a rest on a metrically strong position - represents an important challenge in the study of the mapping between rhythm and meter. In this contribution, we present the hypothesis that syncopation is an effective strategy to elicit the bootstrapping of a multi-layered, hierarchically organized metric structure from a linear rhythmic surface. The hypothesis is inspired by a parallel with the problem of linearization in natural language syntax, which is the problem of how hierarchically organized phrase-structure markers are mapped onto linear sequences of words. The hypothesis has important consequences for the role of meter in music perception and cognition and, more particularly, for its role in the relationship between rhythm and bodily entrainment.
Affiliation(s)
- Gaetano Fiorin
- Department of Humanities, University of Trieste, Trieste, Italy
- Denis Delfitto
- Department of Cultures and Civilizations, University of Verona, Verona, Italy
27
Reybrouck M, Schiavio A. Music performance as knowledge acquisition: a review and preliminary conceptual framework. Front Psychol 2024; 15:1331806. [PMID: 38390412 PMCID: PMC10883160 DOI: 10.3389/fpsyg.2024.1331806] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2023] [Accepted: 01/15/2024] [Indexed: 02/24/2024] Open
Abstract
To what extent does playing a musical instrument contribute to an individual's construction of knowledge? This paper aims to address this question by examining music performance from an embodied perspective and offering a narrative-style review of the main literature on the topic. Drawing from both older theoretical frameworks on motor learning and more recent theories on sensorimotor coupling and integration, this paper seeks to challenge and juxtapose established ideas with contemporary views inspired by recent work on embodied cognitive science. By doing so we advocate a centripetal approach to music performance, contrasting the prevalent centrifugal perspective: the sounds produced during performance not only originate from bodily action (centrifugal), but also cyclically return to it (centripetal). This perspective suggests that playing music involves a dynamic integration of both external and internal factors, transcending mere output-oriented actions and revealing music performance as a form of knowledge acquisition based on real-time sensorimotor experience.
Affiliation(s)
- Mark Reybrouck
- Musicology Research Unit, KU Leuven, Leuven, Belgium
- Department of Musicology, IPEM, Ghent University, Ghent, Belgium
- Andrea Schiavio
- School of Arts and Creative Technologies, University of York, York, United Kingdom
28
Karunathilake IMD, Brodbeck C, Bhattasali S, Resnik P, Simon JZ. Neural Dynamics of the Processing of Speech Features: Evidence for a Progression of Features from Acoustic to Sentential Processing. BIORXIV : THE PREPRINT SERVER FOR BIOLOGY 2024:2024.02.02.578603. [PMID: 38352332 PMCID: PMC10862830 DOI: 10.1101/2024.02.02.578603] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 02/22/2024]
Abstract
When we listen to speech, our brain's neurophysiological responses "track" its acoustic features, but it is less well understood how these auditory responses are modulated by linguistic content. Here, we recorded magnetoencephalography (MEG) responses while subjects listened to four types of continuous-speech-like passages: speech-envelope modulated noise, English-like non-words, scrambled words, and a narrative passage. Temporal response function (TRF) analysis provides strong neural evidence for the emergent features of speech processing in cortex, from acoustics to higher-level linguistics, as incremental steps in neural speech processing. Critically, we show a stepwise hierarchical progression of progressively higher order features over time, reflected in both bottom-up (early) and top-down (late) processing stages. Linguistically driven top-down mechanisms take the form of late N400-like responses, suggesting a central role of predictive coding mechanisms at multiple levels. As expected, the neural processing of lower-level acoustic feature responses is bilateral or right lateralized, with left lateralization emerging only for lexical-semantic features. Finally, our results identify potential neural markers of the computations underlying speech perception and comprehension.
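At its core, TRF estimation of the kind used in this entry is regularized deconvolution: find the filter that maps a stimulus feature onto the neural response. The sketch below uses a closed-form ridge solution on a lagged design matrix; the authors' actual pipeline, lag ranges, and regularization are not known from the abstract, so every parameter here is an assumption.

```python
import numpy as np

def estimate_trf(stimulus, response, n_lags=32, alpha=1.0):
    """Ridge-regression TRF: response[t] ~= sum_k trf[k] * stimulus[t - k]."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))          # lagged design matrix, columns = lags
    for k in range(n_lags):
        X[k:, k] = stimulus[:n - k]
    # Closed-form ridge solution (X'X + alpha I)^-1 X'y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_lags), X.T @ response)

# Simulated recording: white-noise envelope convolved with a damped oscillation
rng = np.random.default_rng(1)
stim = rng.standard_normal(2000)
true_trf = np.exp(-np.arange(32) / 5.0) * np.sin(np.arange(32) / 3.0)
resp = np.convolve(stim, true_trf)[:2000] + 0.1 * rng.standard_normal(2000)
trf_hat = estimate_trf(stim, resp)
```

Because the simulated stimulus is white, the design matrix is well conditioned and the estimated filter closely recovers the ground-truth kernel, the same logic that lets TRF analysis read out acoustic and linguistic feature responses from MEG.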
Affiliation(s)
- Christian Brodbeck
- Department of Computing and Software, McMaster University, Hamilton, ON, Canada
- Shohini Bhattasali
- Department of Language Studies, University of Toronto, Scarborough, Canada
- Philip Resnik
- Department of Linguistics and Institute for Advanced Computer Studies, University of Maryland, College Park, MD, USA
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA
- Department of Biology, University of Maryland, College Park, MD, USA
- Institute for Systems Research, University of Maryland, College Park, MD, USA
29
Guérineau C, Broseghini A, Lõoke M, Dehesh G, Mongillo P, Marinelli L. Determining Hearing Thresholds in Dogs Using the Staircase Method. Vet Sci 2024; 11:67. [PMID: 38393085 PMCID: PMC10892234 DOI: 10.3390/vetsci11020067] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2023] [Revised: 12/13/2023] [Accepted: 01/29/2024] [Indexed: 02/25/2024] Open
Abstract
There is a growing interest in performing playback experiments to understand which acoustical cues trigger specific behavioral/emotional responses in dogs. However, very few studies have focused on more basic aspects of hearing, such as sensitivity, i.e., the identification of minimal intensity thresholds across different frequencies. Most previous studies relied on electrophysiological methods to obtain audiograms in dogs, but these methods are considered less accurate than assessments based on behavioral responses. To our knowledge, only one study has established hearing thresholds using a behavioral assessment, on four dogs, but with a method that did not allow potential improvement throughout the sessions. In the present study, we devised an assessment procedure based on a staircase method. Because the assessed intensity adapts to the dog's performance, this approach grants several assessments around the actual hearing threshold of the animal, thereby increasing the reliability of the result. We used this method to determine hearing thresholds at three frequencies (0.5, 4.0, and 20.0 kHz). Five dogs were tested at each frequency. The hearing thresholds were found to be 19.5 ± 2.8 dB SPL at 0.5 kHz, 14.0 ± 4.5 dB SPL at 4.0 kHz, and 8.5 ± 12.8 dB SPL at 20.0 kHz. No improvement in performance was visible across the procedure. While the thresholds at 0.5 and 4.0 kHz were in line with the previous literature, the threshold at 20 kHz was remarkably lower than expected. Dogs' ability to produce vocalizations beyond 20 kHz, potentially used in short-range communication, and the selective pressure linked to intraspecific communication in social canids are discussed as potential explanations for the sensitivity to higher frequencies.
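The adaptive logic of a staircase procedure can be sketched in a few lines: lower the level after each detection, raise it after each miss, and average the levels at which the direction reverses. The one-up/one-down rule, 2 dB step, reversal count, and logistic simulated listener below are all assumptions for illustration, not the study's exact protocol.

```python
import numpy as np

def staircase_threshold(respond, start_db=40.0, step_db=2.0, n_reversals=8):
    """One-up/one-down staircase converging on the 50%-detection level.
    `respond(level)` returns True if the subject detects the stimulus."""
    level, direction = start_db, 0
    reversals = []
    while len(reversals) < n_reversals:
        heard = respond(level)
        new_dir = -1 if heard else +1          # heard -> go quieter, miss -> louder
        if direction != 0 and new_dir != direction:
            reversals.append(level)            # direction flipped: record a reversal
        direction = new_dir
        level += new_dir * step_db
    return float(np.mean(reversals))

# Simulated listener with a true 20 dB SPL threshold (logistic psychometric function)
rng = np.random.default_rng(2)
def listener(level, true_threshold=20.0, slope=1.5):
    p = 1.0 / (1.0 + np.exp(-(level - true_threshold) / slope))
    return rng.random() < p

est = staircase_threshold(listener)
```

Because the tested level keeps bouncing around the point where detection is at chance, the reversal levels cluster near the true threshold, which is exactly why the abstract argues this design yields more reliable estimates than a fixed descending series.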
Affiliation(s)
- Cécile Guérineau
- Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, Italy
- Anna Broseghini
- Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, Italy
- Miina Lõoke
- Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, Italy
- Giulio Dehesh
- Independent Researcher, Via Chiesanuova 139, 35136 Padova, Italy
- Paolo Mongillo
- Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, Italy
- Lieta Marinelli
- Dipartimento di Biomedicina Comparata e Alimentazione, University of Padova, Viale dell’Università 16, 35020 Legnaro, Italy
30
Zoefel B, Kösem A. Neural tracking of continuous acoustics: properties, speech-specificity and open questions. Eur J Neurosci 2024; 59:394-414. [PMID: 38151889 DOI: 10.1111/ejn.16221] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2023] [Revised: 11/17/2023] [Accepted: 11/22/2023] [Indexed: 12/29/2023]
Abstract
Human speech is a particularly relevant acoustic stimulus for our species, due to its role in information transmission during communication. Speech is inherently a dynamic signal, and a recent line of research has focused on neural activity following the temporal structure of speech. We review findings that characterise neural dynamics in the processing of continuous acoustics and that allow us to compare these dynamics with temporal aspects of human speech. We highlight properties and constraints that neural and speech dynamics share, suggesting that auditory neural systems are optimised to process human speech. We then discuss the speech-specificity of neural dynamics and their potential mechanistic origins, and summarise open questions in the field.
Affiliation(s)
- Benedikt Zoefel
- Centre de Recherche Cerveau et Cognition (CerCo), CNRS UMR 5549, Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
- Anne Kösem
- Lyon Neuroscience Research Center (CRNL), INSERM U1028, Bron, France
31
Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024; 36:303-326. [PMID: 38010315 DOI: 10.1162/jocn_a_02090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2023]
Abstract
Auditory commands are often executed more efficiently than visual commands. However, empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect and found consistently enhanced RT benefits for the matched auditory cues compared with the matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: The auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang
- South China Normal University, Guangzhou, China
- Ying Fang
- South China Normal University, Guangzhou, China
- Qiang Guo
- Guangdong Sanjiu Brain Hospital, Guangzhou, China
- Lu Shen
- South China Normal University, Guangzhou, China
- Qi Chen
- South China Normal University, Guangzhou, China
32
Lei VLC, Leong TI, Leong CT, Liu L, Choi CU, Sereno MI, Li D, Huang R. Phase-encoded fMRI tracks down brainstorms of natural language processing with subsecond precision. Hum Brain Mapp 2024; 45:e26617. [PMID: 38339788 PMCID: PMC10858339 DOI: 10.1002/hbm.26617] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2023] [Revised: 12/04/2023] [Accepted: 01/21/2024] [Indexed: 02/12/2024] Open
Abstract
Natural language processing unfolds information over time as spatially separated, multimodal, and interconnected neural processes. Existing noninvasive subtraction-based neuroimaging techniques cannot simultaneously achieve the spatial and temporal resolutions required to visualize ongoing information flows across the whole brain. Here we have developed rapid phase-encoded designs to fully exploit the temporal information latent in functional magnetic resonance imaging data, as well as overcoming scanner noise and head-motion challenges during overt language tasks. We captured real-time information flows as coherent hemodynamic waves traveling over the cortical surface during listening, reading aloud, reciting, and oral cross-language interpreting tasks. We were able to observe the timing, location, direction, and surge of traveling waves in all language tasks, which were visualized as "brainstorms" on brain "weather" maps. The paths of hemodynamic traveling waves provide direct evidence for dual-stream models of the visual and auditory systems as well as logistics models for crossmodal and cross-language processing. Specifically, we have tracked the step-by-step processing of written or spoken sentences first being received and processed by the visual or auditory streams, carried across language and domain-general cognitive regions, and finally delivered as overt speech monitored through the auditory cortex, which gives a complete picture of information flows across the brain during natural language functioning. PRACTITIONER POINTS: Phase-encoded fMRI enables simultaneous imaging of high spatial and temporal resolution, capturing continuous spatiotemporal dynamics of the entire brain during real-time overt natural language tasks. Spatiotemporal traveling wave patterns provide direct evidence for constructing comprehensive and explicit models of human information processing.
This study unlocks the potential of applying rapid phase-encoded fMRI to indirectly track the underlying neural information flows of sequential sensory, motor, and high-order cognitive processes.
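The phase-encoded analysis described above rests on a simple computation: the phase of the Fourier component at the task frequency encodes each voxel's response latency within the stimulation cycle. A minimal single-voxel sketch, with entirely hypothetical run parameters (256 volumes, TR = 2 s, 8 cycles):

```python
import numpy as np

def response_phase_delay(ts, n_cycles, tr):
    """Delay (s) of a periodic response within one stimulation cycle, read off
    the phase of the Fourier component at the task frequency."""
    n = len(ts)
    comp = np.fft.rfft(ts - ts.mean())[n_cycles]  # component at n_cycles per run
    phase_lag = (-np.angle(comp)) % (2 * np.pi)   # lag of the response waveform
    cycle_len = n * tr / n_cycles                 # seconds per stimulation cycle
    return phase_lag / (2 * np.pi) * cycle_len

# Simulated voxel responding 6 s into each 64 s cycle
tr, n, n_cycles = 2.0, 256, 8
t = np.arange(n) * tr
cycle = n * tr / n_cycles
ts = np.cos(2 * np.pi * (t - 6.0) / cycle)
est_delay = response_phase_delay(ts, n_cycles, tr)
```

Mapping this delay vertex-by-vertex over the cortical surface is what turns a phase-encoded run into the traveling-wave "weather maps" the entry describes.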
Affiliation(s)
- Victoria Lai Cheng Lei
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Teng Ieng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Cheok Teng Leong
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
- Lili Liu
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
- Chi Un Choi
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Martin I. Sereno
- Department of Psychology, San Diego State University, San Diego, California, USA
- Defeng Li
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Arts and Humanities, University of Macau, Taipa, China
- Ruey-Song Huang
- Centre for Cognitive and Brain Sciences, University of Macau, Taipa, China
- Faculty of Science and Technology, University of Macau, Taipa, China
33
Liu H, Bai Y, Xu Z, Liu J, Ni G, Ming D. The scalp time-varying network of auditory spatial attention in "cocktail-party" situations. Hear Res 2024; 442:108946. [PMID: 38150794 DOI: 10.1016/j.heares.2023.108946] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/09/2023] [Revised: 12/21/2023] [Accepted: 12/22/2023] [Indexed: 12/29/2023]
Abstract
Sound source localization in "cocktail-party" situations is a remarkable ability of the human auditory system. However, the neural mechanisms underlying auditory spatial attention are still largely unknown. In this study, "cocktail-party" situations were simulated with multiple sound sources and presented through head-related transfer functions and headphones. The scalp time-varying network of auditory spatial attention was then constructed from high-temporal-resolution electroencephalography, and its network properties were measured quantitatively using graph theory analysis. The results show that the time-varying network of auditory spatial attention in "cocktail-party" situations is more complex than, and partially different from, that in simple acoustic situations, especially in the early- and middle-latency periods. The network coupling strength increases continuously over time, and the network hub shifts from the posterior temporal lobe to the parietal lobe and then to the frontal lobe region. In addition, the right hemisphere has a stronger network strength for processing auditory spatial information in "cocktail-party" situations, i.e., the right hemisphere has higher clustering levels, higher transmission efficiency, and more node degrees during the early- and middle-latency periods, whereas this asymmetry disappears and the network becomes symmetrical during the late-latency period. These findings reveal different network patterns and properties of auditory spatial attention in "cocktail-party" situations during different periods and demonstrate the dominance of the right hemisphere in the dynamic processing of auditory spatial information.
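Two of the graph-theoretical measures this entry compares across hemispheres, node degree and clustering coefficient, can be computed directly from a binary adjacency matrix. The toy six-electrode network below (a "right-hemisphere" triangle versus a "left-hemisphere" chain) is purely illustrative of the asymmetry being quantified, not the study's data.

```python
import numpy as np

def node_degree(adj):
    """Degree of each node in a binary undirected graph."""
    return adj.sum(axis=1)

def local_clustering(adj):
    """Per-node clustering coefficient: fraction of neighbor pairs linked."""
    n = adj.shape[0]
    c = np.zeros(n)
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k >= 2:
            # Submatrix sum counts each neighbor-neighbor edge twice,
            # and k*(k-1) is twice the number of neighbor pairs.
            c[i] = adj[np.ix_(nbrs, nbrs)].sum() / (k * (k - 1))
    return c

# Toy scalp network: nodes 0-2 form a triangle, nodes 3-5 form a chain
adj = np.zeros((6, 6), dtype=int)
for a, b in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5)]:
    adj[a, b] = adj[b, a] = 1

deg = node_degree(adj)
clust = local_clustering(adj)
```

The triangle's nodes have clustering 1.0 while the chain's have 0.0, the kind of contrast that, computed window-by-window on EEG connectivity matrices, yields the hemispheric differences the abstract reports.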
Affiliation(s)
- Hongxing Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Yanru Bai
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Zihao Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Jihan Liu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China
- Guangjian Ni
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, China; Tianjin Key Laboratory of Brain Science and Neuroengineering, Tianjin 300072, China; Haihe Laboratory of Brain-Computer Interaction and Human-Machine Integration, Tianjin 300392, China
34
Luo J, Qin P, Bi Q, Wu K, Gong G. Individual variability in functional connectivity of human auditory cortex. Cereb Cortex 2024; 34:bhae007. [PMID: 38282455 DOI: 10.1093/cercor/bhae007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2023] [Revised: 01/04/2024] [Accepted: 01/05/2024] [Indexed: 01/30/2024] Open
Abstract
Individual variability in functional connectivity underlies individual differences in cognition and behavior, yet its association with functional specialization in the auditory cortex remains elusive. Using resting-state functional magnetic resonance imaging data from the Human Connectome Project, this study investigated the spatial distribution of individual variability of the auditory cortex in its whole-brain functional network architecture. An inherent hierarchical axis of the variability was discerned, radiating from medial to lateral, with the left auditory cortex demonstrating more pronounced variations than the right. This variability exhibited a significant correlation with the variations in structural and functional metrics in the auditory cortex. Four auditory cortex subregions, identified from a clustering analysis based on this variability, exhibited unique connectional fingerprints and cognitive maps, with certain subregions showing specificity to speech perception functional activation. Moreover, the lateralization of the connectional fingerprint exhibited a U-shaped trajectory across the subregions. These findings emphasize the role of individual variability in functional connectivity in understanding cortical functional organization, as well as in revealing its association with functional specialization from the activation, connectome, and cognition perspectives.
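Deriving subregions by clustering vertex-wise connectivity profiles, as this entry does, can be sketched with a small hand-rolled k-means. The abstract does not specify the clustering algorithm, so treat this as a generic stand-in; the "fingerprints" below are synthetic vectors with two obvious profiles.

```python
import numpy as np

def kmeans(X, k, n_iter=25):
    """Plain Lloyd k-means with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        # Next seed: the row farthest from all current centers
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers, dtype=float)
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2),
                           axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy "connectional fingerprints": 80 vertices with two distinct profiles
rng = np.random.default_rng(7)
X = np.vstack([rng.standard_normal((40, 10)) * 0.1,
               rng.standard_normal((40, 10)) * 0.1 + 3.0])
labels, centers = kmeans(X, 2)
```

The two synthetic profiles are recovered as two clean clusters; on real data the rows would be vertices of the auditory cortex and the columns whole-brain connectivity targets.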
Affiliation(s)
- Junhao Luo: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Peipei Qin: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Qiuhui Bi: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; School of Artificial Intelligence, Beijing Normal University, Beijing 100875, China
- Ke Wu: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
- Gaolang Gong: State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Beijing Key Laboratory of Brain Imaging and Connectomics, Beijing Normal University, Beijing 100875, China; Chinese Institute for Brain Research, Beijing 102206, China
35
Nourski KV, Steinschneider M, Rhone AE, Berger JI, Dappen ER, Kawasaki H, Howard III MA. Intracranial electrophysiology of spectrally degraded speech in the human cortex. Front Hum Neurosci 2024; 17:1334742. [PMID: 38318272] [PMCID: PMC10839784] [DOI: 10.3389/fnhum.2023.1334742]
Abstract
Introduction Cochlear implants (CIs) are the treatment of choice for severe to profound hearing loss. Variability in CI outcomes remains despite advances in technology and is attributed in part to differences in cortical processing. Studying these differences in CI users is technically challenging. Spectrally degraded stimuli presented to normal-hearing individuals approximate the input to the central auditory system in CI users. This study used intracranial electroencephalography (iEEG) to investigate cortical processing of spectrally degraded speech. Methods Participants were adult neurosurgical epilepsy patients. Stimuli were the utterances /aba/ and /ada/, spectrally degraded using a noise vocoder (1-4 bands) or presented without vocoding. The stimuli were presented in a two-alternative forced choice task. Cortical activity was recorded using depth and subdural iEEG electrodes. Electrode coverage included the auditory core in posteromedial Heschl's gyrus (HGPM), superior temporal gyrus (STG), ventral and dorsal auditory-related areas, and prefrontal and sensorimotor cortex. Analysis focused on high gamma (70-150 Hz) power augmentation and alpha (8-14 Hz) suppression. Results Task performance was at chance with 1-2 spectral bands and near ceiling for clear stimuli. Performance was variable with 3-4 bands, permitting identification of good and poor performers. There was no relationship between task performance and participants' demographic, audiometric, neuropsychological, or clinical profiles. Several response patterns were identified based on magnitude and differences between stimulus conditions. HGPM responded strongly to all stimuli. A preference for clear speech emerged within non-core auditory cortex. Good performers typically had strong responses to all stimuli along the dorsal stream, including posterior STG, supramarginal gyrus, and precentral gyrus; a minority of sites in STG and supramarginal gyrus preferred vocoded stimuli. In poor performers, responses were typically restricted to clear speech. Alpha suppression was more pronounced in good performers. In contrast, poor performers exhibited greater involvement of posterior middle temporal gyrus when listening to clear speech. Discussion Responses to noise-vocoded speech provide insights into potential factors underlying CI outcome variability. The results emphasize differences in the balance of neural processing along the dorsal and ventral streams between good and poor performers, identify specific cortical regions that may have diagnostic and prognostic utility, and suggest potential targets for neuromodulation-based CI rehabilitation strategies.
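The spectral degradation described here, a noise vocoder, replaces the fine structure within each frequency band with noise while preserving the band's temporal envelope. A minimal FFT-based sketch of the idea follows; the band edges, envelope cutoff, and test signal are illustrative assumptions, not the study's stimulus-generation code:

```python
import numpy as np

def noise_vocode(x, fs, n_bands, lo=80.0, hi=6000.0, env_cut=30.0):
    """Spectrally degrade a signal: split it into n_bands log-spaced bands,
    extract each band's envelope, and use the envelope to modulate
    band-limited noise. A crude FFT-based sketch of the classic vocoder."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(lo, hi, n_bands + 1)
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    X = np.fft.rfft(x)
    out = np.zeros_like(x)
    for b in range(n_bands):
        band = (freqs >= edges[b]) & (freqs < edges[b + 1])
        sig = np.fft.irfft(np.where(band, X, 0), n=len(x))
        # Envelope: rectify, then low-pass by zeroing spectrum above env_cut Hz
        E = np.fft.rfft(np.abs(sig))
        env = np.fft.irfft(np.where(freqs <= env_cut, E, 0), n=len(x))
        # Carrier: noise restricted to the same band
        N = np.fft.rfft(rng.standard_normal(len(x)))
        carrier = np.fft.irfft(np.where(band, N, 0), n=len(x))
        out += np.clip(env, 0, None) * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# A speech-like test tone: 200 Hz carrier with a slow amplitude modulation
speechlike = np.sin(2 * np.pi * 200 * t) * (1 + np.sin(2 * np.pi * 4 * t))
degraded = noise_vocode(speechlike, fs, n_bands=4)
print(degraded.shape)
```

With fewer bands, less spectral detail survives, which is why identification falls to chance at 1-2 bands in the task described above.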
Affiliation(s)
- Kirill V. Nourski: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Mitchell Steinschneider: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States; Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Ariane E. Rhone: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Joel I. Berger: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Emily R. Dappen: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Hiroto Kawasaki: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Matthew A. Howard III: Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, United States
36
Bianco R, Zuk NJ, Bigand F, Quarta E, Grasso S, Arnese F, Ravignani A, Battaglia-Mayer A, Novembre G. Neural encoding of musical expectations in a non-human primate. Curr Biol 2024; 34:444-450.e5. [PMID: 38176416] [DOI: 10.1016/j.cub.2023.12.019]
Abstract
The appreciation of music is a universal trait of humankind.1,2,3 Evidence supporting this notion includes the ubiquity of music across cultures4,5,6,7 and the natural predisposition toward music that humans display early in development.8,9,10 Are we musical animals because of species-specific predispositions? This question cannot be answered by relying on cross-cultural or developmental studies alone, as these cannot rule out enculturation.11 Instead, it calls for cross-species experiments testing whether homologous neural mechanisms underlying music perception are present in non-human primates. We present music to two rhesus monkeys, reared without musical exposure, while recording electroencephalography (EEG) and pupillometry. Monkeys exhibit higher engagement and neural encoding of expectations based on the previously seeded musical context when passively listening to real music as opposed to shuffled controls. We then compare human and monkey neural responses to the same stimuli and find a species-dependent contribution of two fundamental musical features-pitch and timing12-in generating expectations: while timing- and pitch-based expectations13 are similarly weighted in humans, monkeys rely on timing rather than pitch. Together, these results shed light on the phylogeny of music perception. They highlight monkeys' capacity for processing temporal structures beyond plain acoustic processing, and they identify a species-dependent contribution of time- and pitch-related features to the neural encoding of musical expectations.
Affiliation(s)
- Roberta Bianco: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Nathaniel J Zuk: Department of Psychology, Nottingham Trent University, 50 Shakespeare Street, Nottingham NG1 4FQ, UK
- Félix Bigand: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Eros Quarta: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Stefano Grasso: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Flavia Arnese: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
- Andrea Ravignani: Comparative Bioacoustics Group, Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, the Netherlands; Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music, Universitetsbyen 3, 8000 Aarhus, Denmark; Department of Human Neurosciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Alexandra Battaglia-Mayer: Department of Physiology and Pharmacology, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Giacomo Novembre: Neuroscience of Perception & Action Lab, Italian Institute of Technology, Viale Regina Elena 291, 00161 Rome, Italy
37
Gjini K, Casey C, Kunkel D, Her M, Banks MI, Pearce RA, Lennertz R, Sanders RD. Delirium is associated with loss of feedback cortical connectivity. Alzheimers Dement 2024; 20:511-524. [PMID: 37695013] [PMCID: PMC10840828] [DOI: 10.1002/alz.13471]
Abstract
INTRODUCTION Post-operative delirium (POD) is associated with increased morbidity and mortality but is bereft of treatments, largely due to our limited understanding of the underlying pathophysiology. We hypothesized that delirium reflects a disturbance in cortical connectivity that leads to altered predictions of the sensory environment. METHODS High-density electroencephalogram recordings during an oddball auditory roving paradigm were collected from 131 patients. Dynamic causal modeling (DCM) analysis facilitated inference about the neuronal connectivity and inhibition-excitation dynamics underlying auditory-evoked responses. RESULTS Mismatch negativity amplitudes were smaller in patients with POD. DCM showed that delirium was associated with decreased left-sided superior temporal gyrus (l-STG) to auditory cortex feedback connectivity. Feedback connectivity also negatively correlated with delirium severity and systemic inflammation. Increased inhibition of l-STG, with consequent decreases in feed-forward and feed-back connectivity, occurred for oddball tones during delirium. DISCUSSION Delirium is associated with decreased feedback cortical connectivity, possibly resulting from increased intrinsic inhibitory tone. HIGHLIGHTS Mismatch negativity amplitude was reduced in patients with delirium. Patients with postoperative delirium had increased feedforward connectivity before surgery. Feedback connectivity was diminished from left-side superior temporal gyrus to left primary auditory sensory area during delirium. Feedback connectivity inversely correlated with inflammation and delirium severity.
Affiliation(s)
- Klevest Gjini: Department of Neurology, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Cameron Casey: Department of Anesthesiology, University of Wisconsin–Madison, Madison, Wisconsin, USA; Pediatric Neuromodulation Laboratory, Waisman Center, University of Wisconsin–Madison, Madison, Wisconsin, USA
- David Kunkel: Department of Anesthesiology, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Maihlee Her: Department of Anesthesiology, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Matthew I. Banks: Department of Anesthesiology, University of Wisconsin–Madison, Madison, Wisconsin, USA; Department of Neuroscience, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Robert A. Pearce: Department of Anesthesiology, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Richard Lennertz: Department of Anesthesiology, University of Wisconsin–Madison, Madison, Wisconsin, USA
- Robert D. Sanders: Department of Anaesthetics & Institute of Academic Surgery, Royal Prince Alfred Hospital, Camperdown, New South Wales, Australia; NHMRC Clinical Trials Centre and Central Clinical School, University of Sydney, Camperdown, New South Wales, Australia
38
Wang S, Chen Y, Liu Y, Yang L, Wang Y, Fu X, Hu J, Pugh E, Wang S. Aging effects on dual-route speech processing networks during speech perception in noise. Hum Brain Mapp 2024; 45:e26577. [PMID: 38224542] [PMCID: PMC10789214] [DOI: 10.1002/hbm.26577]
Abstract
Healthy aging leads to complex changes in the functional network supporting speech processing in noisy environments. The dual-route neural architecture has been applied to the study of speech processing. Although evidence suggests that aging increases activity across dorsal and ventral stream regions to offset reduced peripheral input, the regulatory mechanism of the dual-route functional networks underlying such compensation remains largely unknown. Here, utilizing functional near-infrared spectroscopy (fNIRS), we investigated the compensatory mechanism of dual-route functional connectivity and its relationship with healthy aging, using a speech perception task at varying signal-to-noise ratios (SNRs) in healthy individuals (young, middle-aged, and older adults). Speech perception scores showed a significant age-related decrease as the SNR was reduced. Analysis of the dual-route speech processing networks showed age-related increases in the functional connectivity of Wernicke's area and the homolog of Wernicke's area. To further clarify the age-related characteristics of these networks, graph-theoretical network analysis revealed an age-related increase in network efficiency, and age-related differences in nodal characteristics were found in both Wernicke's area and its homolog under noisy conditions. Thus, Wernicke's area might be a key network hub for maintaining efficient information transfer across the speech processing network in healthy aging. Moreover, older adults appear to recruit more resources from the homolog of Wernicke's area in noisy environments, which might provide a means of compensation for decoding speech in adverse listening conditions. Together, our results characterize the dual-route speech processing networks under varying noise conditions and provide new insight into compensatory theories of how aging modulates these networks.
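The network efficiency invoked above is a standard graph-theoretical measure: the mean inverse shortest-path length over all node pairs. A small sketch on a thresholded toy connectivity matrix follows; the threshold and synthetic data are illustrative, not the study's fNIRS pipeline:

```python
import numpy as np
from collections import deque

def global_efficiency(adj):
    """Global efficiency of a binary undirected graph: mean of 1/d(i, j)
    over all ordered node pairs, with d(i, j) the BFS shortest-path length
    (disconnected pairs contribute 0)."""
    n = len(adj)
    inv_sum = 0.0
    for s in range(n):
        dist = [-1] * n
        dist[s] = 0
        q = deque([s])
        while q:                      # breadth-first search from s
            u = q.popleft()
            for v in range(n):
                if adj[u][v] and dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        inv_sum += sum(1.0 / d for d in dist if d > 0)
    return inv_sum / (n * (n - 1))

# Threshold a toy functional-connectivity matrix into a binary graph
rng = np.random.default_rng(1)
fc = np.abs(np.corrcoef(rng.standard_normal((10, 200))))
adj = (fc > 0.1).astype(int)
np.fill_diagonal(adj, 0)
print(round(global_efficiency(adj), 3))
```

A fully connected graph has efficiency 1.0; sparser or more fragmented graphs score lower, so an age-related increase in efficiency implies shorter effective paths through the network.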
Affiliation(s)
- Songjian Wang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Younuo Chen: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yi Liu: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Liu Yang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Yuan Wang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Xinxing Fu: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jiong Hu: Department of Audiology, University of the Pacific, San Francisco, California, USA
- Shuo Wang: Beijing Institute of Otolaryngology, Otolaryngology-Head and Neck Surgery, Key Laboratory of Otolaryngology Head and Neck Surgery (Capital Medical University), Ministry of Education, Beijing Tongren Hospital, Capital Medical University, Beijing, China
39
Tuckute G, Feather J, Boebinger D, McDermott JH. Many but not all deep neural network audio models capture brain responses and exhibit correspondence between model stages and brain regions. PLoS Biol 2023; 21:e3002366. [PMID: 38091351] [PMCID: PMC10718467] [DOI: 10.1371/journal.pbio.3002366]
Abstract
Models that predict brain responses to stimuli provide one measure of understanding of a sensory system and have many potential applications in science and engineering. Deep artificial neural networks have emerged as the leading such predictive models of the visual system but are less explored in audition. Prior work provided examples of audio-trained neural networks that produced good predictions of auditory cortical fMRI responses and exhibited correspondence between model stages and brain regions, but left it unclear whether these results generalize to other neural network models and, thus, how to further improve models in this domain. We evaluated model-brain correspondence for publicly available audio neural network models along with in-house models trained on 4 different tasks. Most tested models outpredicted standard spectrotemporal filter-bank models of auditory cortex and exhibited systematic model-brain correspondence: middle stages best predicted primary auditory cortex, while deep stages best predicted non-primary cortex. However, some state-of-the-art models produced substantially worse brain predictions. Models trained to recognize speech in background noise produced better brain predictions than models trained to recognize speech in quiet, potentially because hearing in noise imposes constraints on biological auditory representations. The training task influenced the prediction quality for specific cortical tuning properties, with best overall predictions resulting from models trained on multiple tasks. The results generally support the promise of deep neural networks as models of audition, though they also indicate that current models do not explain auditory cortical responses in their entirety.
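Model-brain correspondence of this kind is typically quantified by regressing measured responses on a model stage's activations and scoring held-out prediction accuracy per stage. A minimal sketch with synthetic data follows; the closed-form ridge estimator, the stage names, and the train/test split are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def ridge_fit_predict(X_train, y_train, X_test, alpha=1.0):
    """Closed-form ridge regression: w = (X'X + alpha*I)^-1 X'y."""
    d = X_train.shape[1]
    w = np.linalg.solve(X_train.T @ X_train + alpha * np.eye(d),
                        X_train.T @ y_train)
    return X_test @ w

rng = np.random.default_rng(0)
n_stim, n_vox = 120, 30
voxels = rng.standard_normal((n_stim, n_vox))   # synthetic "measured" responses
# Hypothetical activations from three model stages (names are illustrative)
stages = {f"stage_{i}": rng.standard_normal((n_stim, 50)) for i in range(3)}

train, test = slice(0, 80), slice(80, None)
for name, acts in stages.items():
    pred = ridge_fit_predict(acts[train], voxels[train], acts[test])
    # Median voxelwise correlation between predicted and measured responses
    r = [np.corrcoef(pred[:, v], voxels[test][:, v])[0, 1] for v in range(n_vox)]
    print(name, round(float(np.median(r)), 3))
```

The stage whose activations best predict a region's held-out responses defines the stage-to-region correspondence summarized in the abstract (middle stages for primary, deep stages for non-primary cortex).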
Affiliation(s)
- Greta Tuckute: Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America; Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Jenelle Feather: Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America; Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America
- Dana Boebinger: Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America; Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America; Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America; University of Rochester Medical Center, Rochester, New York, United States of America
- Josh H. McDermott: Department of Brain and Cognitive Sciences, McGovern Institute for Brain Research, MIT, Cambridge, Massachusetts, United States of America; Center for Brains, Minds, and Machines, MIT, Cambridge, Massachusetts, United States of America; Program in Speech and Hearing Biosciences and Technology, Harvard, Cambridge, Massachusetts, United States of America
40
Hallett M. Medial-lateral organization of primary auditory cortex and the question of sound localization. J Comp Neurol 2023; 531:1893-1896. [PMID: 37357573] [PMCID: PMC10749981] [DOI: 10.1002/cne.25516]
Abstract
Pandya made many important contributions to the understanding of the anatomy of the cortical auditory pathways beginning with his publication in 1969. This review focuses on the observation in that article on the transcallosal connections of the primary auditory cortex. The medial part of the cortex has such connections, but the lateral part does not. Pandya and colleagues speculated that this might have something to do with spatial localization of sound. Review of the subsequent literature shows that the primary auditory cortex anatomy is complex, but the original observation is likely correct. However, the physiological speculation was not.
Affiliation(s)
- Mark Hallett: National Institute of Neurological Disorders and Stroke, NIH, Bethesda
41
Rauschecker JP, Afsahi RK. Anatomy of the auditory cortex then and now. J Comp Neurol 2023; 531:1883-1892. [PMID: 38010215] [PMCID: PMC10872810] [DOI: 10.1002/cne.25560]
Abstract
Using neuroanatomical investigations in the macaque, Deepak Pandya and his colleagues have established the framework for auditory cortex organization, with subdivisions into core and belt areas. This has aided subsequent neurophysiological and imaging studies in monkeys and humans, and a nomenclature building on Pandya's work has also been adopted by the Human Connectome Project. The foundational work by Pandya and his colleagues is highlighted here in the context of subsequent and ongoing studies on the functional anatomy and physiology of auditory cortex in primates, including humans, and their relevance for understanding cognitive aspects of speech and language.
Affiliation(s)
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia, USA
- Rosstin K Afsahi: Department of Neuroscience, Georgetown University Medical Center, Washington, District of Columbia, USA
42
Alho J, Samuelsson JG, Khan S, Mamashli F, Bharadwaj H, Losh A, McGuiggan NM, Graham S, Nayal Z, Perrachione TK, Joseph RM, Stoodley CJ, Hämäläinen MS, Kenet T. Both stronger and weaker cerebro-cerebellar functional connectivity patterns during processing of spoken sentences in autism spectrum disorder. Hum Brain Mapp 2023; 44:5810-5827. [PMID: 37688547] [PMCID: PMC10619366] [DOI: 10.1002/hbm.26478]
Abstract
Cerebellar differences have long been documented in autism spectrum disorder (ASD), yet the extent to which such differences might impact language processing in ASD remains unknown. To investigate this, we recorded brain activity with magnetoencephalography (MEG) while ASD and age-matched typically developing (TD) children passively processed spoken meaningful English and meaningless Jabberwocky sentences. Using a novel source localization approach that allows higher resolution MEG source localization of cerebellar activity, we found that, unlike TD children, ASD children showed no difference between evoked responses to meaningful versus meaningless sentences in right cerebellar lobule VI. ASD children also had atypically weak functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and several left-hemisphere sensorimotor and language regions in later time windows. In contrast, ASD children had atypically strong functional connectivity in the meaningful versus meaningless speech condition between right cerebellar lobule VI and primary auditory cortical areas in an earlier time window. The atypical functional connectivity patterns in ASD correlated with ASD severity and the ability to inhibit involuntary attention. These findings align with a model where cerebro-cerebellar speech processing mechanisms in ASD are impacted by aberrant stimulus-driven attention, which could result from atypical temporal information and predictions of auditory sensory events by right cerebellar lobule VI.
Affiliation(s)
- Jussi Alho: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- John G. Samuelsson: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Sheraz Khan: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Fahimeh Mamashli: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Hari Bharadwaj: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Speech, Language, and Hearing Sciences, and Weldon School of Biomedical Engineering, Purdue University, West Lafayette, Indiana, USA
- Ainsley Losh: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Nicole M. McGuiggan: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Steven Graham: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Zein Nayal: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Tyler K. Perrachione: Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts, USA
- Robert M. Joseph: Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, Massachusetts, USA
- Catherine J. Stoodley: Department of Psychology, College of Arts and Sciences, American University, Washington, DC, USA
- Matti S. Hämäläinen: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
- Tal Kenet: Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, Massachusetts, USA
43
Hovsepyan S, Olasagasti I, Giraud AL. Rhythmic modulation of prediction errors: A top-down gating role for the beta-range in speech processing. PLoS Comput Biol 2023; 19:e1011595. [PMID: 37934766] [PMCID: PMC10655987] [DOI: 10.1371/journal.pcbi.1011595]
Abstract
Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-β) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.
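The core idea, alternating bottom-up and top-down processing regimes by oscillating prediction-error precisions at a beta rate, can be caricatured in a toy one-state model. This is an illustrative sketch only, not the Precoss-β model; the update rule and all parameters are assumptions:

```python
import numpy as np

def infer(signal, fs, beta_hz=25.0, lr=0.1):
    """Toy predictive-coding loop for a single latent estimate, where the
    gain (precision) on the bottom-up prediction error oscillates in the
    beta range, alternating sensory-driven and prior-driven updating."""
    est = 0.0      # current latent estimate
    prior = 0.5    # fixed top-down prediction
    trace = []
    for n, s in enumerate(signal):
        t = n / fs
        # Precisions oscillate in antiphase at a beta rate (~25 Hz)
        pi_bu = 0.5 * (1 + np.sin(2 * np.pi * beta_hz * t))  # bottom-up gain
        pi_td = 1.0 - pi_bu                                  # top-down gain
        err_bu = s - est        # sensory prediction error
        err_td = prior - est    # prior prediction error
        est += lr * (pi_bu * err_bu + pi_td * err_td)
        trace.append(est)
    return np.array(trace)

fs = 1000
sig = np.ones(fs)  # constant sensory input over one second
trace = infer(sig, fs)
print(round(float(trace[-1]), 2))
```

The estimate settles between the prior and the input, rippling at the beta rate as the balance between bottom-up and top-down errors swings, which is the flavor of regime alternation the model study explores.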
Affiliation(s)
- Sevada Hovsepyan: Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Itsaso Olasagasti: Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Anne-Lise Giraud: Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland; Institut Pasteur, Université Paris Cité, Inserm, Institut de l'Audition, France
44
Harris I, Niven EC, Griffin A, Scott SK. Is song processing distinct and special in the auditory cortex? Nat Rev Neurosci 2023; 24:711-722. [PMID: 37783820] [DOI: 10.1038/s41583-023-00743-4]
Abstract
Is the singing voice processed distinctively in the human brain? In this Perspective, we discuss what might distinguish song processing from speech processing in light of recent work suggesting that some cortical neuronal populations respond selectively to song and we outline the implications for our understanding of auditory processing. We review the literature regarding the neural and physiological mechanisms of song production and perception and show that this provides evidence for key differences between song and speech processing. We conclude by discussing the significance of the notion that song processing is special in terms of how this might contribute to theories of the neurobiological origins of vocal communication and to our understanding of the neural circuitry underlying sound processing in the human cortex.
Affiliation(s)
- Ilana Harris
- Institute of Cognitive Neuroscience, University College London, London, UK
- Efe C Niven
- Institute of Cognitive Neuroscience, University College London, London, UK
- Alex Griffin
- Department of Psychology, University of Cambridge, Cambridge, UK
- Sophie K Scott
- Institute of Cognitive Neuroscience, University College London, London, UK.
45
Miceli G, Caccia A. The Auditory Agnosias: a Short Review of Neurofunctional Evidence. Curr Neurol Neurosci Rep 2023; 23:671-679. PMID: 37747655. PMCID: PMC10673750. DOI: 10.1007/s11910-023-01302-1.
Abstract
PURPOSE OF REVIEW To investigate the neurofunctional correlates of pure auditory agnosia and its varieties (global, verbal, and nonverbal), based on 116 anatomoclinical reports published between 1893 and 2022, with emphasis on hemispheric lateralization, intrahemispheric lesion site, and underlying cognitive impairments. RECENT FINDINGS Pure auditory agnosia is rare, and observations accumulate slowly. Recent patient reports and neuroimaging studies on neurotypical subjects offer insights into the putative mechanisms underlying auditory agnosia, while challenging traditional accounts. Global auditory agnosia frequently results from bilateral temporal damage. Verbal auditory agnosia strictly correlates with language-dominant hemisphere lesions. Damage involves the auditory pathways, but the critical lesion site is unclear. Both the auditory cortex and associative areas are reasonable candidates, but cases resulting from brainstem damage are on record. The hemispheric correlates of nonverbal auditory input disorders are less clear. They correlate with unilateral damage to either hemisphere, but evidence is scarce. Based on published cases, pure auditory agnosias are neurologically and functionally heterogeneous. Phenotypes are influenced by co-occurring cognitive impairments. Future studies should start from these facts and integrate patient data and studies in neurotypical individuals.
Affiliation(s)
- Gabriele Miceli
- Professor of Neurology, Center for Mind/Brain Studies, University of Trento, Trento, Italy.
46
Wang R, Chen X, Khalilian-Gourtani A, Yu L, Dugan P, Friedman D, Doyle W, Devinsky O, Wang Y, Flinker A. Distributed feedforward and feedback cortical processing supports human speech production. Proc Natl Acad Sci U S A 2023; 120:e2300255120. PMID: 37819985. PMCID: PMC10589651. DOI: 10.1073/pnas.2300255120.
Abstract
Speech production is a complex human function requiring continuous feedforward commands together with reafferent feedback processing. These processes are carried out by distinct frontal and temporal cortical networks, but the degree and timing of their recruitment and dynamics remain poorly understood. We present a deep learning architecture that translates neural signals recorded directly from the cortex to an interpretable representational space that can reconstruct speech. We leverage learned decoding networks to disentangle feedforward vs. feedback processing. Unlike prevailing models, we find a mixed cortical architecture in which frontal and temporal networks each process both feedforward and feedback information in tandem. We elucidate the timing of feedforward and feedback-related processing by quantifying the derived receptive fields. Our approach provides evidence for a surprisingly mixed cortical architecture of speech circuitry together with decoding advances that have important implications for neural prosthetics.
Affiliation(s)
- Ran Wang
- Electrical and Computer Engineering Department, New York University, New York, NY 11201
- Xupeng Chen
- Electrical and Computer Engineering Department, New York University, New York, NY 11201
- Leyao Yu
- Neurology Department, New York University, New York, NY 10016
- Biomedical Engineering Department, New York University, New York, NY 11201
- Patricia Dugan
- Neurology Department, New York University, New York, NY 10016
- Daniel Friedman
- Neurology Department, New York University, New York, NY 10016
- Werner Doyle
- Neurosurgery Department, New York University, New York, NY 10016
- Orrin Devinsky
- Neurology Department, New York University, New York, NY 10016
- Yao Wang
- Electrical and Computer Engineering Department, New York University, New York, NY 11201
- Biomedical Engineering Department, New York University, New York, NY 11201
- Adeen Flinker
- Neurology Department, New York University, New York, NY 10016
- Biomedical Engineering Department, New York University, New York, NY 11201
47
Papanicolaou AC. Non-Invasive Mapping of the Neuronal Networks of Language. Brain Sci 2023; 13:1457. PMID: 37891824. PMCID: PMC10605023. DOI: 10.3390/brainsci13101457.
Abstract
This review consists of three main sections. In the first, the Introduction, the main theories of the neuronal mediation of linguistic operations, derived mostly from studies of the effects of focal lesions on linguistic performance, are summarized. These models furnish the conceptual framework on which the design of subsequent functional neuroimaging investigations is based. In the second section, the methods of functional neuroimaging, especially those of functional Magnetic Resonance Imaging (fMRI) and of Magnetoencephalography (MEG), are detailed along with the specific activation tasks employed in presurgical functional mapping. The reliability of these non-invasive methods and their validity, judged against the results of the invasive methods, namely the "Wada" procedure and Cortical Stimulation Mapping (CSM), is assessed, and their use in presurgical mapping is justified. In the third and final section, the applications of fMRI and MEG in basic research are surveyed in six sub-sections, each dealing with the assessment of the neuronal networks for (1) acoustic and phonological, (2) semantic, (3) syntactic, and (4) prosodic operations, (5) sign language, and (6) the operations of reading and the mechanisms of dyslexia.
Affiliation(s)
- Andrew C Papanicolaou
- Department of Pediatrics, Division of Pediatric Neurology, College of Medicine, University of Tennessee Health Science Center, Memphis, TN 38013, USA
48
Grijseels DM, Prendergast BJ, Gorman JC, Miller CT. The neurobiology of vocal communication in marmosets. Ann N Y Acad Sci 2023; 1528:13-28. PMID: 37615212. PMCID: PMC10592205. DOI: 10.1111/nyas.15057.
Abstract
An increasingly popular animal model for studying the neural basis of social behavior, cognition, and communication is the common marmoset (Callithrix jacchus). Interest in this New World primate across neuroscience is now being driven by their proclivity for prosociality across their repertoire, high volubility, and rapid development, as well as their amenability to naturalistic testing paradigms and freely moving neural recording and imaging technologies. The complement of these characteristics sets marmosets up to be a powerful model of the primate social brain in the years to come. Here, we focus on vocal communication because it is the area that has both made the most progress and illustrates the prodigious potential of this species. We review the current state of the field with a focus on the various brain areas and networks involved in vocal perception and production, comparing the findings from marmosets to other animals, including humans.
Affiliation(s)
- Dori M Grijseels
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Brendan J Prendergast
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Julia C Gorman
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
- Cory T Miller
- Cortical Systems and Behavior Laboratory, University of California, San Diego, La Jolla, California, USA
- Neurosciences Graduate Program, University of California, San Diego, La Jolla, California, USA
49
Friederici AD. Evolutionary neuroanatomical expansion of Broca's region serving a human-specific function. Trends Neurosci 2023; 46:786-796. PMID: 37596132. DOI: 10.1016/j.tins.2023.07.004.
Abstract
The question concerning the evolution of language is directly linked to the debate on whether language and action are dependent or not, and to what extent Broca's region serves as a common neural basis. The debate has resulted in two opposing views, one arguing for and one against the dependence of language and action, based mainly on neuroscientific data. This article presents an evolutionary neuroanatomical framework that may offer a solution to this dispute. It is proposed that in humans, Broca's region houses language and action independently, in spatially separated subregions. This became possible through an evolutionary expansion of Broca's region in the human brain, which was not paralleled by a similar expansion in the chimpanzee brain and which provided the additional space needed for the neural representation of language in humans.
Affiliation(s)
- Angela D Friederici
- Max Planck Institute for Human Cognitive and Brain Sciences, Department of Neuropsychology, Stephanstraße 1A, 04103 Leipzig, Germany.
50
Reyes-Aguilar A, Licea-Haquet G, Arce BI, Giordano M. Contribution and functional connectivity between cerebrum and cerebellum on sub-lexical and lexical-semantic processing of verbs. PLoS One 2023; 18:e0291558. PMID: 37708205. PMCID: PMC10501569. DOI: 10.1371/journal.pone.0291558.
Abstract
Language comprehension involves both sub-lexical (e.g., phonological) and lexical-semantic processing. We conducted a task using functional magnetic resonance imaging (fMRI) to compare the processing of verbs in these two domains. Additionally, we examined the representation of concrete-motor and abstract-non-motor concepts by including two semantic categories of verbs: motor and mental. The findings indicate that sub-lexical processing during the reading of pseudo-verbs primarily involves the left dorsal stream of the perisylvian network, while lexical-semantic representation during the reading of verbs predominantly engages the ventral stream. According to the embodied or grounded cognition approach, modality-specific mechanisms (such as sensory-motor systems) and the well-established multimodal left perisylvian network contribute to the semantic representation of both concrete and abstract verbs. Our study identified the visual system as a preferential modality-specific system for abstract-mental verbs, which exhibited functional connectivity with the right crus I/lobule VI of the cerebellum. Taken together, these results confirm the dissociation between sub-lexical and lexical-semantic processing and provide neurobiological evidence of functional coupling between specific visual modality regions and the right cerebellum, forming a network that supports the semantic representation of abstract concepts. Further, the results shed light on the underlying mechanisms of semantic processing and contribute to our understanding of how the brain processes abstract concepts.
Affiliation(s)
- Azalea Reyes-Aguilar
- Department of Psychobiology and Neuroscience, Faculty of Psychology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Giovanna Licea-Haquet
- Department of Behavioral and Cognitive Neurobiology, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Queretaro, Mexico
- Brenda I. Arce
- Department of Psychobiology and Neuroscience, Faculty of Psychology, Universidad Nacional Autónoma de México, Mexico City, Mexico
- Magda Giordano
- Department of Behavioral and Cognitive Neurobiology, Instituto de Neurobiología, Universidad Nacional Autónoma de México, Queretaro, Mexico