1
Yi C, Li F, Wang J, Li Y, Zhang J, Chen W, Jiang L, Yao D, Xu P, He B, Dong W. Abnormal trial-to-trial variability in P300 time-varying directed EEG network of schizophrenia. Med Biol Eng Comput 2024. PMID: 38834855. DOI: 10.1007/s11517-024-03133-9. Received 2023-06-08; accepted 2024-05-18.
Abstract
Cognitive disturbances in identifying, processing, and responding to salient or novel stimuli are typical attributes of schizophrenia (SCH), and the P300 has proven to be a reliable psychosis endophenotype. The instability of neural processing across trials, i.e., trial-to-trial variability (TTV), is receiving increasing attention as a window into how the "noisy" SCH brain organizes itself during cognitive processing. Nevertheless, TTV at the level of brain networks remains unexplored, notably how it varies across task stages. In this study, using time-varying directed electroencephalogram (EEG) networks, we investigated the time-resolved TTV of the functional organization subserving the evocation of the P300. Results revealed anomalous TTV in time-varying networks across the delta, theta, alpha, beta1, and beta2 bands in SCH. The TTV of cross-band time-varying network properties efficiently recognized SCH (accuracy: 83.39%, sensitivity: 89.22%, specificity: 74.55%) and tracked psychiatric symptoms (Hamilton depression scale-24: r = 0.430, p = 0.022, RMSE = 4.891; Hamilton anxiety scale-14: r = 0.377, p = 0.048, RMSE = 4.575). Our study brings new insights into probing the time-resolved functional organization of the brain, and TTV in time-varying networks may provide a powerful tool for mining the substrates underlying SCH and for its diagnostic evaluation.
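The abstract does not spell out how TTV is computed; a common operationalization (an assumption here, not necessarily the authors' exact measure) is the across-trial coefficient of variation of some per-trial network property, such as clustering coefficient or global efficiency:

```python
import numpy as np

def trial_to_trial_variability(trial_metrics):
    """Coefficient of variation of a per-trial network metric across trials.

    trial_metrics: shape (n_trials,), e.g. the per-trial clustering
    coefficient or global efficiency of a time-varying directed network.
    """
    trial_metrics = np.asarray(trial_metrics, dtype=float)
    return trial_metrics.std(ddof=1) / trial_metrics.mean()

# Toy example: a "stable" vs. a "noisy" set of per-trial metrics.
stable = np.array([0.50, 0.51, 0.49, 0.50])
noisy = np.array([0.20, 0.80, 0.35, 0.65])
print(trial_to_trial_variability(stable) < trial_to_trial_variability(noisy))  # True
```

In this framing, the reported group difference would correspond to SCH patients showing systematically higher (or stage-dependent) values of this ratio than controls.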
Affiliation(s)
- Chanlin Yi
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Fali Li
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - Research Unit of NeuroInformation, Chinese Academy of Medical Sciences, Chengdu, 2019RU035, China
  - Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Jiuju Wang
  - Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, 100191, China
- Yuqin Li
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Jiamin Zhang
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Wanjun Chen
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Lin Jiang
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Dezhong Yao
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - Research Unit of NeuroInformation, Chinese Academy of Medical Sciences, Chengdu, 2019RU035, China
  - School of Electrical Engineering, Zhengzhou University, Zhengzhou, 450001, China
- Peng Xu
  - The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu, 611731, China
  - Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, 100191, China
  - Radiation Oncology Key Laboratory of Sichuan Province, Chengdu, 610041, China
  - Rehabilitation Center, Qilu Hospital of Shandong University, Jinan, 250012, China
- Baoming He
  - Department of Neurology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu, 610072, China
  - Chinese Academy of Sciences Sichuan Translational Medicine Research Hospital, Chengdu, 610072, China
- Wentian Dong
  - Peking University Sixth Hospital, Peking University Institute of Mental Health, NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, 100191, China
2
Kim SG, De Martino F, Overath T. Linguistic modulation of the neural encoding of phonemes. Cereb Cortex 2024; 34:bhae155. PMID: 38687241. PMCID: PMC11059272. DOI: 10.1093/cercor/bhae155. Received 2023-06-22; revised 2024-03-21; accepted 2024-03-22. Open access.
Abstract
Speech comprehension entails the neural mapping of the acoustic speech signal onto learned linguistic units. This acousto-linguistic transformation is bi-directional, whereby higher-level linguistic processes (e.g. semantics) modulate the acoustic analysis of individual linguistic units. Here, we investigated the cortical topography and linguistic modulation of the most fundamental linguistic unit, the phoneme. We presented natural speech and "phoneme quilts" (pseudo-randomly shuffled phonemes) in either a familiar (English) or unfamiliar (Korean) language to native English speakers while recording functional magnetic resonance imaging (fMRI) data. This allowed us to dissociate the contribution of acoustic vs. linguistic processes toward phoneme analysis. We show that (i) the acoustic analysis of phonemes is modulated by linguistic analysis and (ii) that for this modulation, both acoustic and phonetic information need to be incorporated. These results suggest that the linguistic modulation of cortical sensitivity to phoneme classes minimizes prediction error during natural speech perception, thereby aiding speech comprehension in challenging listening situations.
Affiliation(s)
- Seung-Goo Kim
  - Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
  - Research Group Neurocognition of Music and Language, Max Planck Institute for Empirical Aesthetics, Grüneburgweg 14, Frankfurt am Main 60322, Germany
- Federico De Martino
  - Faculty of Psychology and Neuroscience, University of Maastricht, Universiteitssingel 40, 6229 ER Maastricht, Netherlands
- Tobias Overath
  - Department of Psychology and Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
  - Duke Institute for Brain Sciences, Duke University, 308 Research Dr, Durham, NC 27708, United States
  - Center for Cognitive Neuroscience, Duke University, 308 Research Dr, Durham, NC 27708, United States
3
Hullett PW, Leonard MK, Gorno-Tempini ML, Mandelli ML, Chang EF. Parallel Encoding of Speech in Human Frontal and Temporal Lobes. bioRxiv [Preprint] 2024:2024.03.19.585648. PMID: 38562883. PMCID: PMC10983886. DOI: 10.1101/2024.03.19.585648.
Abstract
Models of speech perception are centered around a hierarchy in which auditory representations in the thalamus propagate to primary auditory cortex, then to the lateral temporal cortex, and finally through dorsal and ventral pathways to sites in the frontal lobe. However, evidence for short latency speech responses and low-level spectrotemporal representations in frontal cortex raises the question of whether speech-evoked activity in frontal cortex strictly reflects downstream processing from lateral temporal cortex or whether there are direct parallel pathways from the thalamus or primary auditory cortex to the frontal lobe that supplement the traditional hierarchical architecture. Here, we used high-density direct cortical recordings, high-resolution diffusion tractography, and hemodynamic functional connectivity to search for evidence of direct parallel inputs to frontal cortex from low-level areas. We found that neural populations in the frontal lobe show speech-evoked responses that are synchronous with or occur earlier than responses in the lateral temporal cortex. These short latency frontal lobe neural populations encode spectrotemporal speech content indistinguishable from spectrotemporal encoding patterns observed in the lateral temporal lobe, suggesting parallel auditory speech representations reaching temporal and frontal cortex simultaneously. This is further supported by white matter tractography and functional connectivity patterns that connect the auditory nucleus of the thalamus (medial geniculate body) and the primary auditory cortex to the frontal lobe. Together, these results support the existence of a robust pathway of parallel inputs from low-level auditory areas to frontal lobe targets and illustrate long-range parallel architecture that works alongside the classical hierarchical speech network model.
4
Tomassini A, Cope TE, Zhang J, Rowe JB. Parkinson's disease impairs cortical sensori-motor decision-making cascades. Brain Commun 2024; 6:fcae065. PMID: 38505233. PMCID: PMC10950052. DOI: 10.1093/braincomms/fcae065. Received 2022-12-20; revised 2023-08-21; accepted 2024-03-12. Open access.
Abstract
The transformation from perception to action requires a set of neuronal decisions about the nature of the percept, identification and selection of response options, and execution of the appropriate motor response. The unfolding of such decisions is mediated by distributed representations of the decision variables (evidence and intentions) that are expressed through oscillatory activity across the cortex. Here we combine magneto-electroencephalography and linear ballistic accumulator models of decision-making to reveal the impact of Parkinson's disease during the selection and execution of action. We used a visuomotor task in which we independently manipulated uncertainty in sensory and action domains. A generative accumulator model was optimized to single-trial neurophysiological correlates of human behaviour, mapping the cortical oscillatory signatures of decision-making, and relating these to separate processes accumulating sensory evidence and selecting a motor action. We confirmed the role of widespread beta oscillatory activity in shaping the feed-forward cascade of evidence accumulation from resolution of sensory inputs to selection of appropriate responses. By contrasting the spatiotemporal dynamics of evidence accumulation in age-matched healthy controls and people with Parkinson's disease, we identified disruption of the beta-mediated cascade of evidence accumulation as the hallmark of atypical decision-making in Parkinson's disease. In frontal cortical regions, there was inefficient processing and transfer of perceptual information. Our findings emphasize the intimate connection between abnormal visuomotor function and pathological oscillatory activity in neurodegenerative disease. We propose that disruption of the oscillatory mechanisms governing fast and precise information exchanges between the sensory and motor systems contributes to behavioural changes in people with Parkinson's disease.
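The linear ballistic accumulator referenced above is simple to simulate: each response option accumulates evidence linearly from a random start point, and the first to reach threshold wins. This is a generic textbook-style sketch; the parameter names (threshold b, start-point range A, non-decision time t0, drift noise s) and values are illustrative, not the paper's fitted estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

def lba_trial(drift_means, b=1.0, A=0.5, t0=0.2, s=0.1, rng=rng):
    """Simulate one linear ballistic accumulator trial.

    Each accumulator starts at a uniform point in [0, A] and rises linearly
    with a normally distributed drift rate; the first to reach threshold b
    determines the choice and the response time (plus non-decision time t0).
    """
    starts = rng.uniform(0.0, A, size=len(drift_means))
    drifts = np.clip(rng.normal(drift_means, s), 1e-6, None)  # keep rates positive
    times = (b - starts) / drifts        # time for each accumulator to reach b
    winner = int(np.argmin(times))
    return winner, t0 + times[winner]

# Option 0 has the stronger evidence (higher mean drift) and should win most trials.
choices = [lba_trial([1.2, 0.6])[0] for _ in range(2000)]
print(sum(c == 0 for c in choices) / len(choices))
```

Fitting such a model to joint behavioural and neurophysiological data, as the study does, then amounts to finding the drift/threshold parameters whose simulated choices and response times best match the observations.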
Affiliation(s)
- Alessandro Tomassini
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK
- Thomas E Cope
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK
  - Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK
  - Department of Neurology, Cambridge University Hospitals NHS Trust, Cambridge CB2 0QQ, UK
- Jiaxiang Zhang
  - Department of Computer Science, Swansea University, Swansea SA1 8EN, UK
- James B Rowe
  - MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK
  - Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK
  - Department of Neurology, Cambridge University Hospitals NHS Trust, Cambridge CB2 0QQ, UK
5
Mamashli F, Khan S, Hatamimajoumerd E, Jas M, Uluç I, Lankinen K, Obleser J, Friederici AD, Maess B, Ahveninen J. Characterizing directional dynamics of semantic prediction based on inter-regional temporal generalization. bioRxiv [Preprint] 2024:2024.02.13.580183. PMID: 38405823. PMCID: PMC10888763. DOI: 10.1101/2024.02.13.580183.
Abstract
The event-related potential/field component N400(m) has been widely used as a neural index for semantic prediction. It has long been hypothesized that feedback information from inferior frontal areas plays a critical role in generating the N400. However, due to limitations in causal connectivity estimation, direct testing of this hypothesis has remained difficult. Here, magnetoencephalography (MEG) data were obtained during a classic N400 paradigm in which the semantic predictability of a fixed target noun was manipulated in simple German sentences. To estimate causality, we implemented a novel approach based on machine learning and temporal generalization to estimate the effect of inferior frontal gyrus (IFG) on temporal areas. In this method, a support vector machine (SVM) classifier is trained on each time point of the neural activity in IFG to classify less predicted (LP) and highly predicted (HP) nouns and then tested on all time points of superior/middle temporal sub-region activity (and vice versa, to establish spatio-temporal evidence for or against causality). The decoding accuracy was significantly above chance level when the classifier was trained on IFG activity and tested on future activity in superior and middle temporal gyrus (STG/MTG). The results present new evidence for a model of predictive speech comprehension in which predictive IFG activity is fed back to shape subsequent activity in STG/MTG, implying a feedback mechanism in N400 generation. In combination with the also observed strong feedforward effect from left STG/MTG to IFG, our findings provide evidence of dynamic feedback and feedforward influences between IFG and temporal areas during N400 generation.
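The inter-regional temporal-generalization logic can be sketched in a few lines. Here a toy 1-D threshold classifier stands in for the SVM, and the region names, feature dimensionality, and time points are illustrative assumptions, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)

def temporal_generalization(X_train, X_test, y):
    """Cross-regional temporal generalization with a 1-D threshold classifier:
    train at each time point of X_train (e.g. IFG), test at each time point
    of X_test (e.g. STG/MTG). Returns a (T_train, T_test) accuracy matrix."""
    T1, T2 = X_train.shape[1], X_test.shape[1]
    acc = np.zeros((T1, T2))
    for t in range(T1):
        m0 = X_train[y == 0, t].mean()
        m1 = X_train[y == 1, t].mean()
        thr, sign = (m0 + m1) / 2, np.sign(m1 - m0)
        for u in range(T2):
            pred = ((X_test[:, u] - thr) * sign > 0).astype(int)
            acc[t, u] = (pred == y).mean()
    return acc

# Toy data: a condition effect appears in "IFG" at t=1 and later in "STG" at t=3.
n = 100
y = np.repeat([0, 1], n // 2)
ifg = rng.normal(0, 0.1, (n, 4)); ifg[y == 1, 1] += 1.0
stg = rng.normal(0, 0.1, (n, 4)); stg[y == 1, 3] += 1.0
acc = temporal_generalization(ifg, stg, y)
print(acc[1, 3])  # high: the earlier IFG pattern generalizes to later STG activity
```

Above-chance accuracy when training on region A's earlier time points and testing on region B's later time points is the signature the study interprets as evidence of directed influence from A to B.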
Affiliation(s)
- Fahimeh Mamashli
  - Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Sheraz Khan
  - Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Elaheh Hatamimajoumerd
  - Department of Electrical and Computer Engineering, Northeastern University, Boston, MA 02115
- Mainak Jas
  - Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Işıl Uluç
  - Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Kaisu Lankinen
  - Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
- Jonas Obleser
  - Department of Psychology, University of Lübeck, Lübeck 23562, Germany
- Angela D. Friederici
  - Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Burkhard Maess
  - MEG and Cortical Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Jyrki Ahveninen
  - Department of Radiology, Massachusetts General Hospital, Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA 02129
6
Wang K, Fang Y, Guo Q, Shen L, Chen Q. Superior Attentional Efficiency of Auditory Cue via the Ventral Auditory-thalamic Pathway. J Cogn Neurosci 2024; 36:303-326. PMID: 38010315. DOI: 10.1162/jocn_a_02090.
Abstract
Auditory commands are often executed more efficiently than visual commands. However, empirical evidence on the underlying behavioral and neural mechanisms remains scarce. In two experiments, we manipulated the delivery modality of informative cues and the prediction violation effect and found consistently enhanced reaction time (RT) benefits for the matched auditory cues compared with the matched visual cues. At the neural level, when the bottom-up perceptual input matched the prior prediction induced by the auditory cue, the auditory-thalamic pathway was significantly activated. Moreover, the stronger the auditory-thalamic connectivity, the higher the behavioral benefits of the matched auditory cue. When the bottom-up input violated the prior prediction induced by the auditory cue, the ventral auditory pathway was specifically involved. Moreover, the stronger the ventral auditory-prefrontal connectivity, the larger the behavioral costs caused by the violation of the auditory cue. In addition, the dorsal frontoparietal network showed a supramodal function in reacting to the violation of informative cues irrespective of the delivery modality of the cue. Taken together, the results reveal novel behavioral and neural evidence that the superior efficiency of the auditory cue is twofold: The auditory-thalamic pathway is associated with improvements in task performance when the bottom-up input matches the auditory cue, whereas the ventral auditory-prefrontal pathway is involved when the auditory cue is violated.
Affiliation(s)
- Ke Wang
  - South China Normal University, Guangzhou, China
- Ying Fang
  - South China Normal University, Guangzhou, China
- Qiang Guo
  - Guangdong Sanjiu Brain Hospital, Guangzhou, China
- Lu Shen
  - South China Normal University, Guangzhou, China
- Qi Chen
  - South China Normal University, Guangzhou, China
7
Belder CRS, Marshall CR, Jiang J, Mazzeo S, Chokesuwattanaskul A, Rohrer JD, Volkmer A, Hardy CJD, Warren JD. Primary progressive aphasia: six questions in search of an answer. J Neurol 2024; 271:1028-1046. PMID: 37906327. PMCID: PMC10827918. DOI: 10.1007/s00415-023-12030-4. Received 2023-09-16; accepted 2023-09-27.
Abstract
Here, we review recent progress in the diagnosis and management of primary progressive aphasia, the language-led dementias. We pose six key unanswered questions that challenge current assumptions and highlight the unresolved difficulties that surround these diseases. How many syndromes of primary progressive aphasia are there, and is syndromic diagnosis even useful? Are these truly 'language-led' dementias? How can we diagnose (and track) primary progressive aphasia better? Can brain pathology be predicted in these diseases? What is their core pathophysiology? And how can primary progressive aphasia best be treated? We propose that pathophysiological mechanisms linking proteinopathies to phenotypes may help resolve the clinical complexity of primary progressive aphasia, may suggest novel diagnostic tools and markers, and may guide the deployment of effective therapies.
Affiliation(s)
- Christopher R S Belder
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
  - UK Dementia Research Institute at UCL, UCL Queen Square Institute of Neurology, University College London, London, UK
  - Adelaide Medical School, The University of Adelaide, Adelaide, South Australia, Australia
- Charles R Marshall
  - Preventive Neurology Unit, Wolfson Institute of Population Health, Queen Mary University of London, London, UK
- Jessica Jiang
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
- Salvatore Mazzeo
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
  - Department of Neuroscience, Psychology, Drug Research and Child Health, University of Florence, Azienda Ospedaliera-Universitaria Careggi, Florence, Italy
- Anthipa Chokesuwattanaskul
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
  - Division of Neurology, Department of Internal Medicine, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok, Thailand
  - Cognitive Clinical and Computational Neuroscience Research Unit, Faculty of Medicine, Chulalongkorn University, Bangkok, Thailand
- Jonathan D Rohrer
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
- Anna Volkmer
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
- Chris J D Hardy
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
- Jason D Warren
  - Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8-11 Queen Square, London, WC1N 3BG, UK
8
Tsunada J, Eliades SJ. Frontal-Auditory Cortical Interactions and Sensory Prediction During Vocal Production in Marmoset Monkeys. bioRxiv [Preprint] 2024:2024.01.28.577656. PMID: 38352422. PMCID: PMC10862695. DOI: 10.1101/2024.01.28.577656.
Abstract
The control of speech and vocal production involves the calculation of error between the intended vocal output and the resulting auditory feedback. Consistent with this model, recent evidence has demonstrated that the auditory cortex is suppressed immediately before and during vocal production, yet is still sensitive to differences between vocal output and altered auditory feedback. This suppression has been suggested to be the result of top-down signals containing information about the intended vocal output, potentially originating from motor or other frontal cortical areas. However, whether such frontal areas are the source of suppressive and predictive signaling to the auditory cortex during vocalization is unknown. Here, we simultaneously recorded neural activity from both the auditory and frontal cortices of marmoset monkeys while they produced self-initiated vocalizations. We found increases in neural activity in both brain areas preceding the onset of vocal production, notably changes in both multi-unit activity and local field potential theta-band power. Connectivity analysis using Granger causality demonstrated that frontal cortex sends directed signaling to the auditory cortex during this pre-vocal period. Importantly, this pre-vocal activity predicted both vocalization-induced suppression of the auditory cortex as well as the acoustics of subsequent vocalizations. These results suggest that frontal cortical areas communicate with the auditory cortex preceding vocal production, with frontal-auditory signals that may reflect the transmission of sensory prediction information. This interaction between frontal and auditory cortices may contribute to mechanisms that calculate errors between intended and actual vocal outputs during vocal communication.
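The Granger-causality analysis described above asks whether one region's past activity improves prediction of another's future activity beyond that region's own past. A minimal bivariate version (a numerical sketch under assumed synthetic signals, not the authors' neural-data pipeline) is:

```python
import numpy as np

def _lagged(sig, lag, n):
    # Columns sig[t-1], ..., sig[t-lag] for t = lag .. n-1.
    return np.column_stack([sig[lag - k:n - k] for k in range(1, lag + 1)])

def granger_causality(x, y, lag=2):
    """Log ratio of residual variances: predicting y from its own past alone
    vs. its own past plus x's past. Values well above 0 suggest x -> y."""
    n = len(y)
    target = y[lag:]
    restricted = np.column_stack([np.ones(n - lag), _lagged(y, lag, n)])
    full = np.column_stack([restricted, _lagged(x, lag, n)])
    res_r = target - restricted @ np.linalg.lstsq(restricted, target, rcond=None)[0]
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    return float(np.log(res_r.var() / res_f.var()))

# Synthetic example: a "frontal" signal x drives an "auditory" signal y at lag 1.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.empty(500)
y[0] = 0.0
y[1:] = 0.8 * x[:-1] + 0.2 * rng.normal(size=499)
print(granger_causality(x, y), granger_causality(y, x))  # first clearly larger
```

The asymmetry of the two values is what licenses the directed interpretation: here x's past strongly improves prediction of y, while y's past adds essentially nothing for x.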
Affiliation(s)
- Joji Tsunada
  - Chinese Institute for Brain Research, Beijing, China
  - Department of Veterinary Medicine, Faculty of Agriculture, Iwate University, Morioka, Iwate, Japan
- Steven J. Eliades
  - Department of Head and Neck Surgery & Communication Sciences, Duke University School of Medicine, Durham, NC 27710, USA
9
Magnuson JS, Crinnion AM, Luthra S, Gaston P, Grubb S. Contra assertions, feedback improves word recognition: How feedback and lateral inhibition sharpen signals over noise. Cognition 2024; 242:105661. PMID: 37944313. PMCID: PMC11238470. DOI: 10.1016/j.cognition.2023.105661. Received 2022-07-10; revised 2023-10-17; accepted 2023-11-02.
Abstract
Whether top-down feedback modulates perception has deep implications for cognitive theories. Debate has been vigorous in the domain of spoken word recognition, where competing computational models and agreement on at least one diagnostic experimental paradigm suggest that the debate may eventually be resolvable. Norris and Cutler (2021) revisit arguments against lexical feedback in spoken word recognition models. They also incorrectly claim that recent computational demonstrations that feedback promotes accuracy and speed under noise (Magnuson et al., 2018) were due to the use of the Luce choice rule rather than adding noise to inputs (noise was in fact added directly to inputs). They also claim that feedback cannot improve word recognition because feedback cannot distinguish signal from noise. We have two goals in this paper. First, we correct the record about the simulations of Magnuson et al. (2018). Second, we explain how interactive activation models selectively sharpen signals via joint effects of feedback and lateral inhibition that boost lexically-coherent sublexical patterns over noise. We also review a growing body of behavioral and neural results consistent with feedback and inconsistent with autonomous (non-feedback) architectures, and conclude that parsimony supports feedback. We close by discussing the potential for synergy between autonomous and interactive approaches.
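The mechanism argued for above, feedback plus lateral inhibition amplifying lexically coherent patterns over noise, can be illustrated with a tiny interactive-activation sketch. The two-word lexicon, weights, and update rule here are illustrative assumptions, not the TRACE simulations of Magnuson et al. (2018):

```python
import numpy as np

# Two word units share a three-phoneme layer; words compete via lateral
# inhibition and send excitatory feedback to their constituent phonemes.
W = np.array([[1.0, 1.0, 0.0],   # word 0 is built from phonemes 0 and 1
              [0.0, 1.0, 1.0]])  # word 1 is built from phonemes 1 and 2

def run(inp, feedback=0.3, steps=60, rate=0.1, inhibition=0.5):
    """Iterate the network to (near) steady state; returns word activations."""
    phon, words = np.zeros(3), np.zeros(2)
    for _ in range(steps):
        w_new = words + rate * (W @ phon - inhibition * words[::-1] - words)
        p_new = phon + rate * (inp + feedback * (W.T @ words) - phon)
        words, phon = np.clip(w_new, 0, None), np.clip(p_new, 0, None)
    return words

noisy = np.array([0.30, 0.25, 0.05])   # degraded input weakly favoring word 0
with_fb, without_fb = run(noisy, feedback=0.3), run(noisy, feedback=0.0)
print(with_fb, without_fb)  # feedback amplifies the lexically coherent word
```

In both runs the lexically consistent word wins, but with feedback its activation is substantially higher: the word unit re-excites its own phonemes, which in turn re-excite it, while lateral inhibition suppresses the competitor, sharpening signal over noise exactly as the abstract argues.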
Affiliation(s)
- James S Magnuson
  - University of Connecticut, Storrs, CT, USA; BCBL, Basque Center on Cognition Brain and Language, Donostia-San Sebastián, Spain; Ikerbasque, Basque Foundation for Science, Bilbao, Spain
10
Yi C, Liu C, Zhang J, Zhang X, Jiang L, Si Y, He G, Ao M, Zhao Y, Yao D, Li F, Ma X, Xu P, He B. The long-term effect of modulated acoustic stimulation on alteration in EEG brain network of chronic tinnitus patients: An exploratory study. Brain Res Bull 2023; 205:110812. PMID: 37951276. DOI: 10.1016/j.brainresbull.2023.110812. Received 2023-07-10; revised 2023-11-04; accepted 2023-11-09.
Abstract
Acoustic stimulation is one of the most influential techniques for alleviating distressing tinnitus, but how it functions to reverse neural changes associated with tinnitus remains unclear. In this study, our objective was to investigate alterations in brain networks to shed light on the enigma of acoustic intervention for tinnitus. We designed a 75-day long-term acoustic intervention experiment, during which chronic tinnitus patients received daily modulated acoustic stimulation, with each session lasting 15 days. Every 15 days, professional tinnitus assessments were conducted, collecting both electroencephalogram (EEG) and tinnitus handicap inventory (THI) data from the patients. Thereafter, we investigated the changes in EEG network organization during continuous acoustic stimulation and their progressive evolution throughout long-term therapy, and explored the associations between these network alterations and THI scores. Our findings reveal reorganization in alpha/beta long-range frontal-parietal-occipital connections as well as in local frontal and parietal-occipital regions induced by acoustic stimulation. Furthermore, we observed a decrease in modulation effects as therapy sessions progressed. These alterations in brain networks reflect the reversal of tinnitus-related neural activities, particularly distress and perception, thus contributing to tinnitus rehabilitation through long-term modulation effects. This study provides unique insights into how long-term acoustic intervention affects the network organization of tinnitus patients and deepens our understanding of the pathophysiological mechanisms underlying tinnitus rehabilitation.
Affiliation(s)
- Chanlin Yi
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Chen Liu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Jiamin Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Xiabing Zhang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lin Jiang
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yajing Si
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Psychology, Xinxiang Medical University, Xinxiang 453003, China
- Gang He
- Otolaryngology Department of Sichuan Provincial People's Hospital, Chengdu 610072, China
- Min Ao
- Otolaryngology Department of Sichuan Provincial People's Hospital, Chengdu 610072, China
- Yong Zhao
- Betterlife Medical Chengdu Co., Ltd, Chengdu 610000, China
- Dezhong Yao
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Psychology, Xinxiang Medical University, Xinxiang 453003, China; School of Electrical Engineering, Zhengzhou University, Zhengzhou 450001, China
- Fali Li
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China; Research Unit of NeuroInformation, Chinese Academy of Medical Sciences, 2019RU035, Chengdu, China; Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
- Xuntai Ma
- Clinical Medical College of Chengdu Medical College, Chengdu 610500, China; The First Affiliated Hospital of Chengdu Medical College, Chengdu 610599, China
- Peng Xu
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for NeuroInformation, University of Electronic Science and Technology of China, Chengdu 611731, China; School of Life Science and Technology, Center for Information in Medicine, University of Electronic Science and Technology of China, Chengdu 611731, China; Radiation Oncology Key Laboratory of Sichuan Province, Chengdu 610041, China; Rehabilitation Center, Qilu Hospital of Shandong University, Jinan 250012, China
- Baoming He
- Department of Neurology, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 610072, China; Chinese Academy of Sciences Sichuan Translational Medicine Research Hospital, Chengdu 610072, China
11
Schroën JAM, Gunter TC, Numssen O, Kroczek LOH, Hartwigsen G, Friederici AD. Causal evidence for a coordinated temporal interplay within the language network. Proc Natl Acad Sci U S A 2023; 120:e2306279120. [PMID: 37963247 PMCID: PMC10666120 DOI: 10.1073/pnas.2306279120] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2023] [Accepted: 10/06/2023] [Indexed: 11/16/2023] Open
Abstract
Recent neurobiological models of language suggest that auditory sentence comprehension is supported by a coordinated temporal interplay within a left-dominant brain network, including the posterior inferior frontal gyrus (pIFG), posterior superior temporal gyrus and sulcus (pSTG/STS), and angular gyrus (AG). Here, we probed the timing and causal relevance of the interplay between these regions by means of concurrent transcranial magnetic stimulation and electroencephalography (TMS-EEG). Our TMS-EEG experiments reveal region- and time-specific causal evidence for a bidirectional information flow from left pSTG/STS to left pIFG and back during auditory sentence processing. Adapting a condition-and-perturb approach, our findings further suggest that the left pSTG/STS can be supported by the left AG in a state-dependent manner.
Affiliation(s)
- Joëlle A. M. Schroën
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Thomas C. Gunter
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Ole Numssen
- Methods and Development Group Brain Networks, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Leon O. H. Kroczek
- Department of Psychology, Clinical Psychology and Psychotherapy, Universität Regensburg, Regensburg 93053, Germany
- Gesa Hartwigsen
- Lise Meitner Research Group Cognition and Plasticity, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
- Cognitive and Biological Psychology, Wilhelm Wundt Institute for Psychology, Leipzig 04109, Germany
- Angela D. Friederici
- Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
12
Hovsepyan S, Olasagasti I, Giraud AL. Rhythmic modulation of prediction errors: A top-down gating role for the beta-range in speech processing. PLoS Comput Biol 2023; 19:e1011595. [PMID: 37934766 PMCID: PMC10655987 DOI: 10.1371/journal.pcbi.1011595] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2022] [Revised: 11/17/2023] [Accepted: 10/11/2023] [Indexed: 11/09/2023] Open
Abstract
Natural speech perception requires processing the ongoing acoustic input while keeping in mind the preceding one and predicting the next. This complex computational problem could be handled by a dynamic multi-timescale hierarchical inferential process that coordinates the information flow up and down the language network hierarchy. Using a predictive coding computational model (Precoss-β) that identifies online individual syllables from continuous speech, we address the advantage of a rhythmic modulation of up and down information flows, and whether beta oscillations could be optimal for this. In the model, and consistent with experimental data, theta and low-gamma neural frequency scales ensure syllable-tracking and phoneme-level speech encoding, respectively, while the beta rhythm is associated with inferential processes. We show that a rhythmic alternation of bottom-up and top-down processing regimes improves syllable recognition, and that optimal efficacy is reached when the alternation of bottom-up and top-down regimes, via oscillating prediction error precisions, is in the beta range (around 20-30 Hz). These results not only demonstrate the advantage of a rhythmic alternation of up- and down-going information, but also that the low-beta range is optimal given sensory analysis at theta and low-gamma scales. While specific to speech processing, the notion of alternating bottom-up and top-down processes with frequency multiplexing might generalize to other cognitive architectures.
Affiliation(s)
- Sevada Hovsepyan
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Itsaso Olasagasti
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Anne-Lise Giraud
- Department of Basic Neurosciences, University of Geneva, Biotech Campus, Genève, Switzerland
- Institut Pasteur, Université Paris Cité, Inserm, Institut de l’Audition, France
13
Kocsis Z, Jenison RL, Taylor PN, Calmus RM, McMurray B, Rhone AE, Sarrett ME, Deifelt Streese C, Kikuchi Y, Gander PE, Berger JI, Kovach CK, Choi I, Greenlee JD, Kawasaki H, Cope TE, Griffiths TD, Howard MA, Petkov CI. Immediate neural impact and incomplete compensation after semantic hub disconnection. Nat Commun 2023; 14:6264. [PMID: 37805497 PMCID: PMC10560235 DOI: 10.1038/s41467-023-42088-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 09/28/2023] [Indexed: 10/09/2023] Open
Abstract
The human brain extracts meaning using an extensive neural system for semantic knowledge. Whether broadly distributed systems depend on, or can compensate after losing, a highly interconnected hub is controversial. We report intracranial recordings from two patients during a speech prediction task, obtained minutes before and after neurosurgical treatment requiring disconnection of the left anterior temporal lobe (ATL), a candidate semantic knowledge hub. Informed by modern diaschisis and predictive coding frameworks, we tested hypotheses ranging from solely neural network disruption to complete compensation by the indirectly affected language-related and speech-processing sites. Immediately after ATL disconnection, we observed neurophysiological alterations in the recorded frontal and auditory sites, providing direct evidence for the importance of the ATL as a semantic hub. We also obtained evidence for rapid, albeit incomplete, attempts at neural network compensation, with the neural impact largely taking the forms stipulated by the predictive coding framework specifically, and by the modern diaschisis framework more generally. The overall results validate these frameworks and reveal both the immediate impact of losing a brain hub and the human brain's capability to adjust after such a loss.
Affiliation(s)
- Zsuzsanna Kocsis
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Rick L Jenison
- Departments of Neuroscience and Psychology, University of Wisconsin, Madison, WI, USA
- Peter N Taylor
- CNNP Lab, Interdisciplinary Computing and Complex BioSystems Group, School of Computing, Newcastle University, Newcastle upon Tyne, UK
- UCL Institute of Neurology, Queen Square, London, UK
- Ryan M Calmus
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Bob McMurray
- Department of Psychological and Brain Science, University of Iowa, Iowa City, IA, USA
- Ariane E Rhone
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Yukiko Kikuchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Phillip E Gander
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Department of Radiology, University of Iowa, Iowa City, IA, USA
- Iowa Neuroscience Institute, University of Iowa, Iowa City, IA, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Thomas E Cope
- Department of Clinical Neurosciences, Cambridge University, Cambridge, UK
- MRC Cognition and Brain Sciences Unit, Cambridge University, Cambridge, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Matthew A Howard
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Christopher I Petkov
- Department of Neurosurgery, University of Iowa, Iowa City, IA, USA
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
14
Jiang J, Johnson JCS, Requena-Komuro MC, Benhamou E, Sivasathiaseelan H, Chokesuwattanaskul A, Nelson A, Nortley R, Weil RS, Volkmer A, Marshall CR, Bamiou DE, Warren JD, Hardy CJD. Comprehension of acoustically degraded speech in Alzheimer's disease and primary progressive aphasia. Brain 2023; 146:4065-4076. [PMID: 37184986 PMCID: PMC10545509 DOI: 10.1093/brain/awad163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 04/20/2023] [Accepted: 04/27/2023] [Indexed: 05/17/2023] Open
Abstract
Successful communication in daily life depends on accurate decoding of speech signals that are acoustically degraded by challenging listening conditions. This process presents the brain with a demanding computational task that is vulnerable to neurodegenerative pathologies. However, despite recent intense interest in the link between hearing impairment and dementia, comprehension of acoustically degraded speech in these diseases has been little studied. Here we addressed this issue in a cohort of 19 patients with typical Alzheimer's disease and 30 patients representing the three canonical syndromes of primary progressive aphasia (non-fluent/agrammatic variant primary progressive aphasia; semantic variant primary progressive aphasia; logopenic variant primary progressive aphasia), compared to 25 healthy age-matched controls. As a paradigm for the acoustically degraded speech signals of daily life, we used noise-vocoding: synthetic division of the speech signal into frequency channels constituted from amplitude-modulated white noise, such that fewer channels convey less spectrotemporal detail, thereby reducing intelligibility. We investigated the impact of noise-vocoding on recognition of spoken three-digit numbers and used psychometric modelling to ascertain the threshold number of noise-vocoding channels required for 50% intelligibility by each participant. Associations of noise-vocoded speech intelligibility threshold with general demographic, clinical and neuropsychological characteristics and regional grey matter volume (defined by voxel-based morphometry of patients' brain images) were also assessed. Mean noise-vocoded speech intelligibility threshold was significantly higher in all patient groups than healthy controls, and significantly higher in Alzheimer's disease and logopenic variant primary progressive aphasia than semantic variant primary progressive aphasia (all P < 0.05).
In a receiver operating characteristic analysis, vocoded intelligibility threshold discriminated Alzheimer's disease, non-fluent variant and logopenic variant primary progressive aphasia patients very well from healthy controls. Further, this central hearing measure correlated with overall disease severity but not with peripheral hearing or clear speech perception. Neuroanatomically, after correcting for multiple voxel-wise comparisons in predefined regions of interest, impaired noise-vocoded speech comprehension across syndromes was significantly associated (P < 0.05) with atrophy of left planum temporale, angular gyrus and anterior cingulate gyrus: a cortical network that has previously been widely implicated in processing degraded speech signals. Our findings suggest that the comprehension of acoustically altered speech captures an auditory brain process relevant to daily hearing and communication in major dementia syndromes, with novel diagnostic and therapeutic implications.
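The noise-vocoding procedure described in the abstract above (band-splitting, envelope extraction, and envelope-modulated noise carriers) can be illustrated with a minimal sketch. This is not the authors' implementation; the channel count, log-spaced band edges, filter order, and Hilbert-envelope choice are all assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_channels=4, f_lo=100.0, f_hi=4000.0):
    """Noise-vocode `speech`: keep each band's amplitude envelope but replace
    its fine structure with band-limited white noise. Fewer channels preserve
    less spectrotemporal detail, reducing intelligibility."""
    rng = np.random.default_rng(0)
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced band edges
    noise = rng.standard_normal(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, speech)      # speech restricted to this band
        env = np.abs(hilbert(band))          # its amplitude envelope
        carrier = sosfiltfilt(sos, noise)    # band-limited noise carrier
        out += env * carrier                 # envelope-modulated noise
    return out
```

Varying `n_channels` is what sweeps intelligibility in such paradigms: a psychometric function fitted to recognition scores across channel counts yields the 50%-intelligibility threshold used in the study.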
Affiliation(s)
- Jessica Jiang
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Jeremy C S Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Maï-Carmen Requena-Komuro
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Kidney Cancer Program, UT Southwestern Medical Centre, Dallas, TX 75390, USA
- Elia Benhamou
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Harri Sivasathiaseelan
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Anthipa Chokesuwattanaskul
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Division of Neurology, Department of Internal Medicine, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok 10330, Thailand
- Annabel Nelson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Ross Nortley
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Wexham Park Hospital, Frimley Health NHS Foundation Trust, Slough SL2 4HL, UK
- Rimona S Weil
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Anna Volkmer
- Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Charles R Marshall
- Preventive Neurology Unit, Wolfson Institute of Population Health, Queen Mary University of London, London EC1M 6BQ, UK
- Doris-Eva Bamiou
- UCL Ear Institute and UCL/UCLH Biomedical Research Centre, National Institute of Health Research, University College London, London WC1X 8EE, UK
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Chris J D Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
15
Wang Y, Jiang M, Zhu Y, Xue L, Shu W, Li X, Chen H, Li Y, Chen Y, Chai Y, Zhang Y, Chu Y, Song Y, Tao X, Wang Z, Wu H. Impact of inner ear malformation and cochlear nerve deficiency on the development of auditory-language network in children with profound sensorineural hearing loss. eLife 2023; 12:e85983. [PMID: 37697742 PMCID: PMC10497283 DOI: 10.7554/elife.85983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Accepted: 08/09/2023] [Indexed: 09/13/2023] Open
Abstract
Profound congenital sensorineural hearing loss (SNHL) prevents children from developing spoken language. Cochlear implantation and auditory brainstem implantation can provide partial hearing sensation, but language development outcomes vary, particularly for patients with inner ear malformations and/or cochlear nerve deficiency (IEM&CND). Currently, the peripheral auditory structure is evaluated through visual inspection of clinical imaging, but this method is insufficient for surgical planning and prognosis. The central auditory pathway is also challenging to examine in vivo because of its delicate subcortical structures. Previous attempts to locate subcortical auditory nuclei using fMRI responses to sounds are not applicable to patients with profound hearing loss, as no auditory brainstem responses can be detected in these individuals, making it impossible to capture the corresponding blood-oxygen signals in fMRI. In this study, we developed a new pipeline for mapping the auditory pathway using structural and diffusion MRI. We used a fixel-based approach to investigate the structural development of the auditory-language network in children under 6 years of age with profound SNHL, comparing those with normal peripheral structure and those with IEM&CND. Our findings indicate that the language pathway is more sensitive to the peripheral auditory condition than the central auditory pathway, highlighting the importance of early intervention for children with profound SNHL to provide timely speech inputs. We also propose a comprehensive pre-surgical evaluation extending from the cochlea to the auditory-language network, showing significant correlations of age, gender, Cn.VIII median contrast value, and language-network measures with post-implant qualitative outcomes.
Affiliation(s)
- Yaoxuan Wang
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Mengda Jiang
- Department of Radiology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Yuting Zhu
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Lu Xue
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Wenying Shu
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Xiang Li
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Hongsai Chen
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Yun Li
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Ying Chen
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Yongchuan Chai
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Yu Zhang
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Yinghua Chu
- MR Collaboration, Siemens Healthineers Ltd, Shanghai, China
- Yang Song
- MR Scientific Marketing, Siemens Healthineers Ltd, Shanghai, China
- Xiaofeng Tao
- Department of Radiology, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhaoyan Wang
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
- Hao Wu
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai, China
16
Brændholt M, Kluger DS, Varga S, Heck DH, Gross J, Allen MG. Breathing in waves: Understanding respiratory-brain coupling as a gradient of predictive oscillations. Neurosci Biobehav Rev 2023; 152:105262. [PMID: 37271298 DOI: 10.1016/j.neubiorev.2023.105262] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 05/03/2023] [Accepted: 05/24/2023] [Indexed: 06/06/2023]
Abstract
Breathing plays a crucial role in shaping perceptual and cognitive processes by regulating the strength and synchronisation of neural oscillations. Numerous studies have demonstrated that respiratory rhythms govern a wide range of behavioural effects across cognitive, affective, and perceptual domains. Additionally, respiratory-modulated brain oscillations have been observed in various mammalian models and across diverse frequency spectra. However, a comprehensive framework to elucidate these disparate phenomena remains elusive. In this review, we synthesise existing findings to propose a neural gradient of respiratory-modulated brain oscillations and examine recent computational models of neural oscillations to map this gradient onto a hierarchical cascade of precision-weighted prediction errors. By deciphering the computational mechanisms underlying respiratory control of these processes, we can potentially uncover new pathways for understanding the link between respiratory-brain coupling and psychiatric disorders.
Affiliation(s)
- Malthe Brændholt
- Center of Functionally Integrative Neuroscience, Aarhus University, Denmark
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Germany
- Somogy Varga
- School of Culture and Society, Aarhus University, Denmark; The Centre for Philosophy of Epidemiology, Medicine and Public Health, University of Johannesburg, South Africa
- Detlef H Heck
- Department of Biomedical Sciences, University of Minnesota Medical School, Duluth, MN, USA
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, Germany
- Micah G Allen
- Center of Functionally Integrative Neuroscience, Aarhus University, Denmark; Cambridge Psychiatry, University of Cambridge, UK
17
Daikoku T, Kamermans K, Minatoya M. Exploring cognitive individuality and the underlying creativity in statistical learning and phase entrainment. EXCLI JOURNAL 2023; 22:828-846. [PMID: 37720236 PMCID: PMC10502202 DOI: 10.17179/excli2023-6135] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Subscribe] [Scholar Register] [Received: 04/27/2023] [Accepted: 08/02/2023] [Indexed: 09/19/2023]
Abstract
Statistical learning starts at an early age and is intimately linked to brain development and the emergence of individuality. Through such a long period of statistical learning, the brain updates and constructs statistical models, with the model's individuality changing based on the type and degree of stimulation received. However, the detailed mechanisms underlying this process are unknown. This paper addresses three main points of statistical learning: 1) cognitive individuality based on the "reliability" of prediction, 2) the construction of an information "hierarchy" through chunking, and 3) the acquisition of the "1-3 Hz rhythm" that is essential for early language and music learning. We developed a Hierarchical Bayesian Statistical Learning (HBSL) model that takes into account both reliability and hierarchy, mimicking the statistical learning processes of the brain. Using this model, we conducted a simulation experiment to visualize the temporal dynamics of perception and production processes through statistical learning. By modulating the sensitivity to sound stimuli, we simulated three cognitive models with different reliability on bottom-up sensory stimuli relative to top-down prior prediction: hypo-sensitive, normal-sensitive, and hyper-sensitive models. The results suggest that statistical learning plays a crucial role in the acquisition of the 1-3 Hz rhythm. Moreover, the hyper-sensitive model quickly learned the sensory statistics but became fixated on its internal model, making it difficult to generate new information, whereas the hypo-sensitive model had lower learning efficiency but may be more likely to generate new information. Various individual characteristics may not necessarily confer an overall advantage over others, as there may be a trade-off between learning efficiency and the ease of generating new information.
This study has the potential to shed light on the heterogeneous nature of statistical learning, as well as the paradoxical phenomenon in which individuals with certain cognitive traits that impede specific types of perceptual abilities exhibit superior performance in creative contexts.
Collapse
Affiliation(s)
- Tatsuya Daikoku
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Centre for Neuroscience in Education, University of Cambridge, Cambridge, UK
- Center for Brain, Mind and KANSEI Sciences Research, Hiroshima University, Hiroshima, Japan
| | - Kevin Kamermans
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
- Maiko Minatoya
- Graduate School of Information Science and Technology, The University of Tokyo, Tokyo, Japan
18
Abbasi O, Steingräber N, Chalas N, Kluger DS, Gross J. Spatiotemporal dynamics characterise spectral connectivity profiles of continuous speaking and listening. PLoS Biol 2023; 21:e3002178. [PMID: 37478152 DOI: 10.1371/journal.pbio.3002178] [Received: 02/06/2023] [Accepted: 05/31/2023] [Indexed: 07/23/2023]
Abstract
Speech production and perception are fundamental processes of human cognition that both rely on intricate processing mechanisms that are still poorly understood. Here, we study these processes using magnetoencephalography (MEG) to comprehensively map the connectivity of regional brain activity, both within the brain and to the speech envelope, during continuous speaking and listening. Our results reveal not only a partly shared neural substrate for both processes but also a dissociation in space, delay, and frequency. Neural activity in motor and frontal areas is coupled to upcoming speech in the delta band (1 to 3 Hz), whereas theta-range coupling in temporal areas follows speech during speaking. Neural connectivity results showed a separation of bottom-up and top-down signalling into distinct frequency bands during speaking. Overall, we show that frequency-specific connectivity channels for bottom-up and top-down signalling support continuous speaking and listening. These findings further shed light on the complex interplay between different brain regions involved in speech production and perception.
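Frequency-resolved coupling of this kind is commonly quantified as spectral coherence between a neural time series and the speech amplitude envelope. A minimal sketch on synthetic signals sharing a 2 Hz (delta) rhythm; the sampling rate, band limits, and toy signals are illustrative assumptions, not the authors' MEG pipeline:

```python
import numpy as np
from scipy.signal import coherence

fs = 200                                      # Hz, common resampled rate (assumed)
t = np.arange(0, 60, 1 / fs)
envelope = 1 + np.sin(2 * np.pi * 2.0 * t)    # toy speech envelope with a 2 Hz rhythm
neural = (0.5 * np.sin(2 * np.pi * 2.0 * t + 0.6)   # phase-lagged delta tracking
          + np.random.default_rng(1).normal(0, 1, t.size))

f, coh = coherence(neural, envelope, fs=fs, nperseg=4 * fs)
delta = coh[(f >= 1) & (f <= 3)].mean()       # delta-band (1-3 Hz) speech tracking
theta = coh[(f >= 4) & (f <= 8)].mean()       # theta-band (4-8 Hz) baseline
```

Because the toy neural signal tracks the envelope only at 2 Hz, band-averaged coherence is elevated in delta relative to theta, the kind of band-specific coupling profile the abstract reports.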
Affiliation(s)
- Omid Abbasi
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nadine Steingräber
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Nikos Chalas
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Daniel S Kluger
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
- Joachim Gross
- Institute for Biomagnetism and Biosignal Analysis, University of Münster, Münster, Germany
- Otto-Creutzfeldt-Center for Cognitive and Behavioral Neuroscience, University of Münster, Münster, Germany
19
Viswanathan V, Bharadwaj HM, Heinz MG, Shinn-Cunningham BG. Induced alpha and beta electroencephalographic rhythms covary with single-trial speech intelligibility in competition. Sci Rep 2023; 13:10216. [PMID: 37353552 PMCID: PMC10290148 DOI: 10.1038/s41598-023-37173-2] [Received: 01/06/2023] [Accepted: 06/17/2023] [Indexed: 06/25/2023]
Abstract
Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping of and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring the intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index the different neural processes that together support complex listening.
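The single-trial logic above can be sketched as: estimate induced alpha (7-15 Hz) and beta (13-30 Hz) power per trial, then relate both to the trial outcome. The simulated trials and the logistic-regression link below are illustrative assumptions, not the authors' pipeline:

```python
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
fs, n_trials = 256, 200
X, y = [], []
for _ in range(n_trials):
    correct = int(rng.integers(0, 2))              # toy trial outcome
    alpha_amp = 1.5 if correct else 0.5            # assumed link: outcome modulates alpha
    trial = (alpha_amp * np.sin(2 * np.pi * 10 * np.arange(fs) / fs)
             + rng.normal(0, 1, fs))               # 1-s single-channel "EEG" trial
    f, pxx = welch(trial, fs=fs, nperseg=fs)
    alpha = np.log(pxx[(f >= 7) & (f <= 15)].mean())    # induced alpha power
    beta = np.log(pxx[(f >= 13) & (f <= 30)].mean())    # induced beta power
    X.append([alpha, beta])
    y.append(correct)

clf = LogisticRegression().fit(X, y)               # band power -> trial outcome link
```

Fitting separate and joint models with each band, and comparing fits, is one simple way to probe the "independent contributions" claim on real data.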
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA.
- Hari M Bharadwaj
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA
- Michael G Heinz
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, 47907, USA
20
Cope TE, Sohoglu E, Peterson KA, Jones PS, Rua C, Passamonti L, Sedley W, Post B, Coebergh J, Butler CR, Garrard P, Abdel-Aziz K, Husain M, Griffiths TD, Patterson K, Davis MH, Rowe JB. Temporal lobe perceptual predictions for speech are instantiated in motor cortex and reconciled by inferior frontal cortex. Cell Rep 2023; 42:112422. [PMID: 37099422 DOI: 10.1016/j.celrep.2023.112422] [Received: 08/04/2022] [Revised: 12/23/2022] [Accepted: 04/05/2023] [Indexed: 04/27/2023]
Abstract
Humans use predictions to improve speech perception, especially in noisy environments. Here we use 7-T functional MRI (fMRI) to decode brain representations of written phonological predictions and degraded speech signals in healthy humans and people with selective frontal neurodegeneration (non-fluent variant primary progressive aphasia [nfvPPA]). Multivariate analyses of item-specific patterns of neural activation indicate dissimilar representations of verified and violated predictions in left inferior frontal gyrus, suggestive of processing by distinct neural populations. In contrast, precentral gyrus represents a combination of phonological information and weighted prediction error. In the presence of intact temporal cortex, frontal neurodegeneration results in inflexible predictions. This manifests neurally as a failure to suppress incorrect predictions in anterior superior temporal gyrus and reduced stability of phonological representations in precentral gyrus. We propose a tripartite speech perception network in which inferior frontal gyrus supports prediction reconciliation in echoic memory, and precentral gyrus invokes a motor model to instantiate and refine perceptual predictions for speech.
Affiliation(s)
- Thomas E Cope
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK; Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK; Cambridge University Hospitals NHS Trust, Cambridge CB2 0QQ, UK.
- Ediz Sohoglu
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK; School of Psychology, University of Sussex, Brighton BN1 9RH, UK
- Katie A Peterson
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK; Department of Radiology, University of Cambridge, Cambridge CB2 0QQ, UK
- P Simon Jones
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK
- Catarina Rua
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK
- Luca Passamonti
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK
- William Sedley
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Brechtje Post
- Theoretical and Applied Linguistics, Faculty of Modern & Medieval Languages & Linguistics, University of Cambridge, Cambridge CB3 9DA, UK
- Jan Coebergh
- Ashford and St Peter's Hospital, Ashford TW15 3AA, UK; St George's Hospital, London SW17 0QT, UK
- Christopher R Butler
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, UK; Faculty of Medicine, Department of Brain Sciences, Imperial College London, London W12 0NN, UK
- Peter Garrard
- St George's Hospital, London SW17 0QT, UK; Molecular and Clinical Sciences Research Institute, St. George's, University of London, London SW17 0RE, UK
- Khaled Abdel-Aziz
- Ashford and St Peter's Hospital, Ashford TW15 3AA, UK; St George's Hospital, London SW17 0QT, UK
- Masud Husain
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford OX3 9DU, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE2 4HH, UK
- Karalyn Patterson
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK; Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK
- Matthew H Davis
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK
- James B Rowe
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, UK; Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge CB2 7EF, UK; Cambridge University Hospitals NHS Trust, Cambridge CB2 0QQ, UK
21
Voola M, Wedekind A, Nguyen AT, Marinovic W, Rajan G, Tavora-Vieira D. Event-Related Potentials of Single-Sided Deaf Cochlear Implant Users: Using a Semantic Oddball Paradigm in Noise. Audiol Neurootol 2023; 28:280-293. [PMID: 36940674 PMCID: PMC10413801 DOI: 10.1159/000529485] [Received: 06/22/2022] [Accepted: 01/31/2023] [Indexed: 03/23/2023]
Abstract
INTRODUCTION In individuals with single-sided deafness (SSD), who are characterised by profound hearing loss in one ear and normal hearing in the contralateral ear, binaural input is no longer present. A cochlear implant (CI) can restore functional hearing in the profoundly deaf ear, with previous literature demonstrating improvements in speech-in-noise intelligibility with the CI. However, we currently have limited understanding of the neural processes involved (e.g., how the brain integrates the electrical signal produced by the CI with the acoustic signal produced by the normal-hearing ear) and how modulation of these processes with a CI contributes to improved speech-in-noise intelligibility. Using a semantic oddball paradigm presented in background noise, this study aims to investigate how the provision of a CI affects the speech-in-noise perception of SSD-CI users. METHOD Task performance (reaction time, reaction time variability, target accuracy, subjective listening effort) and high-density electroencephalography were recorded from twelve SSD-CI participants while they completed a semantic acoustic oddball task. Reaction time (RT) was defined as the time taken for a participant to press the response button after stimulus onset. All participants completed the oddball task in three free-field conditions, with the speech and noise presented from separate loudspeakers: (1) CI-On in background noise, (2) CI-Off in background noise, and (3) CI-On without background noise (Control). Task performance and electroencephalography data (N2N4 and P3b) were recorded for each condition. Speech-in-noise perception and sound localisation ability were also measured. RESULTS Reaction times differed significantly between all conditions: the Control condition was fastest (M [SE] = 785 [39.9] ms), followed by CI-On (809 [39.9] ms) and CI-Off (845 [39.9] ms). The Control condition also exhibited significantly shorter N2N4 and P3b area latencies than the other two conditions. However, despite these differences in RTs and area latency, the N2N4 and P3b difference areas were similar across all three conditions. CONCLUSION The inconsistency between the behavioural and neural results suggests that EEG may not be a reliable measure of cognitive effort. This rationale is further supported by the different explanations used in past studies to account for N2N4 and P3b effects. Future studies should look to alternative measures of auditory processing (e.g., pupillometry) to gain a deeper understanding of the underlying auditory processes that facilitate speech-in-noise intelligibility.
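Area and area-latency measures like the N2N4 and P3b statistics above are typically derived from the target-minus-standard difference wave. A minimal sketch on synthetic epochs; the analysis window, component shape, and amplitudes are assumed for illustration and are not taken from the study:

```python
import numpy as np

fs = 250                                            # Hz
t = np.arange(-0.1, 0.8, 1 / fs)                    # epoch time axis (s)
rng = np.random.default_rng(3)

def make_epochs(n, amp):
    """Toy epochs: a P3b-like Gaussian component (peak ~450 ms) plus noise."""
    component = amp * np.exp(-((t - 0.45) ** 2) / (2 * 0.05 ** 2))
    return component + rng.normal(0, 2, (n, t.size))

target, standard = make_epochs(40, 6.0), make_epochs(160, 1.0)
diff = target.mean(axis=0) - standard.mean(axis=0)  # target-standard difference wave

win = (t >= 0.3) & (t <= 0.6)                       # assumed P3b analysis window
pos = np.clip(diff[win], 0, None)                   # positive part of the difference
area = pos.sum() / fs                               # difference-wave area
cum = np.cumsum(pos)
latency = t[win][np.searchsorted(cum, 0.5 * cum[-1])]   # 50% fractional-area latency
```

Fractional-area latency is often preferred over peak latency for broad components like P3b because it is less sensitive to single-sample noise peaks.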
Affiliation(s)
- Marcus Voola
- Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia
- Department of Audiology, Fiona Stanley Fremantle Hospitals Group, Perth, WA, Australia
- Andre Wedekind
- Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia
- Department of Audiology, Fiona Stanley Fremantle Hospitals Group, Perth, WA, Australia
- An T. Nguyen
- School of Population Health, Curtin University, Perth, WA, Australia
- Welber Marinovic
- School of Population Health, Curtin University, Perth, WA, Australia
- Gunesh Rajan
- Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia
- Department of Otolaryngology, Head and Neck Surgery, Luzerner Kantonsspital, Luzern, Switzerland
- Dayse Tavora-Vieira
- Division of Surgery, Medical School, The University of Western Australia, Perth, WA, Australia
- Department of Audiology, Fiona Stanley Fremantle Hospitals Group, Perth, WA, Australia
- School of Population Health, Curtin University, Perth, WA, Australia
22
Wang Q, Zhao S, He Z, Zhang S, Jiang X, Zhang T, Liu T, Liu C, Han J. Modeling functional difference between gyri and sulci within intrinsic connectivity networks. Cereb Cortex 2023; 33:933-947. [PMID: 35332916 DOI: 10.1093/cercor/bhac111] [Received: 12/24/2021] [Revised: 02/17/2022] [Accepted: 02/18/2022] [Indexed: 11/12/2022]
Abstract
Recently, the functional roles of human cortical folding patterns have attracted increasing interest in the neuroimaging community. However, most existing studies have focused on the gyro-sulcal functional relationship at the whole-brain scale and may have overlooked localized and subtle functional differences between brain networks. Accumulating evidence suggests that functional brain networks are the basic units through which brain function is realized; thus, the functional relationships between gyri and sulci need to be further explored within different functional brain networks. Motivated by this evidence, we proposed a novel intrinsic connectivity network (ICN)-guided pooling-trimmed convolutional neural network (I-ptFCN) to revisit the functional difference between gyri and sulci. By testing the proposed model on task functional magnetic resonance imaging (fMRI) datasets from the Human Connectome Project, we found that the classification accuracy of gyral and sulcal fMRI signals varied significantly across ICNs, indicating functional heterogeneity of cortical folding patterns in different brain networks. This heterogeneity may be driven by sulci, as only sulcal signals showed heterogeneous frequency features across ICNs, whereas the frequency features of gyri were homogeneous. These results offer novel insights into the functional difference between gyri and sulci and illuminate the functional roles of cortical folding patterns.
Affiliation(s)
- Qiyu Wang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Shijie Zhao
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Zhibin He
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Shu Zhang
- School of Computer Science, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Xi Jiang
- School of Life Science and Technology, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, Sichuan 611731, China
- Tuo Zhang
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
- Tianming Liu
- Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA 30605, United States
- Cirong Liu
- CAS Center for Excellence in Brain Science and Intelligence Technology, Institute of Neuroscience, Chinese Academy of Sciences, Shanghai 200031, China
- Junwei Han
- School of Automation, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China
23
Zoefel B, Gilbert RA, Davis MH. Intelligibility improves perception of timing changes in speech. PLoS One 2023; 18:e0279024. [PMID: 36634109 PMCID: PMC9836318 DOI: 10.1371/journal.pone.0279024] [Received: 05/20/2022] [Accepted: 11/28/2022] [Indexed: 01/13/2023]
Abstract
Auditory rhythms are ubiquitous in music, speech, and other everyday sounds. Yet, it is unclear how perceived rhythms arise from the repeating structure of sounds. For speech, it is unclear whether rhythm is derived solely from acoustic properties (e.g., rapid amplitude changes), or whether it is also influenced by the linguistic units (syllables, words, etc.) that listeners extract from intelligible speech. Here, we present three experiments in which participants were asked to detect an irregularity in rhythmically spoken speech sequences. In each experiment, we reduce the number of possible stimulus properties that differ between intelligible and unintelligible speech sounds and show that these acoustically matched intelligibility conditions nonetheless lead to differences in rhythm perception. In Experiment 1, we replicate a previous study showing that rhythm perception is improved for intelligible (16-channel vocoded) as compared to unintelligible (1-channel vocoded) speech, despite near-identical broadband amplitude modulations. In Experiment 2, we use spectrally rotated 16-channel speech to show that the effect of intelligibility cannot be explained by differences in spectral complexity. In Experiment 3, we compare rhythm perception for sine-wave speech signals when they are heard as non-speech (by naïve listeners) and, subsequent to training, when identical sounds are perceived as speech. In all cases, detection of rhythmic regularity is enhanced when participants perceive the stimulus as speech compared to when they do not. Together, these findings demonstrate that intelligibility enhances the perception of timing changes in speech, which is hence linked to processes that extract abstract linguistic units from sound.
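The 1-channel noise vocoding used as the unintelligible control in Experiment 1 can be sketched as: extract the broadband amplitude envelope and re-impose it on a noise carrier, so that amplitude modulations survive while spectral detail is destroyed. Filter order, envelope cutoff, and the toy input below are illustrative assumptions, not the authors' exact stimulus parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def noise_vocode_1ch(x, fs, env_cutoff=30.0, seed=0):
    """One-channel noise vocoder: keep the amplitude envelope, discard fine structure."""
    env = np.abs(hilbert(x))                         # broadband amplitude envelope
    b, a = butter(4, env_cutoff / (fs / 2))          # smooth the envelope below ~30 Hz
    env = filtfilt(b, a, env)
    carrier = np.random.default_rng(seed).normal(0, 1, x.size)
    y = env * carrier                                # envelope re-imposed on noise
    return y / np.abs(y).max() * np.abs(x).max()     # match peak level to the input

fs = 16000
t = np.arange(0, 1.0, 1 / fs)
speech_like = np.sin(2 * np.pi * 150 * t) * (1 + np.sin(2 * np.pi * 4 * t))  # toy input
vocoded = noise_vocode_1ch(speech_like, fs)
```

A 16-channel version would apply the same envelope-on-carrier operation within each of 16 band-pass filtered channels before summing, preserving enough spectral detail for intelligibility.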
Affiliation(s)
- Benedikt Zoefel
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Centre National de la Recherche Scientifique (CNRS), Centre de Recherche Cerveau et Cognition (CerCo), Toulouse, France
- Université de Toulouse III Paul Sabatier, Toulouse, France
- Rebecca A. Gilbert
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
- Matthew H. Davis
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, United Kingdom
24
A Study of Event-Related Potentials During Monaural and Bilateral Hearing in Single-Sided Deaf Cochlear Implant Users. Ear Hear 2023:00003446-990000000-00102. [PMID: 36706105 DOI: 10.1097/aud.0000000000001326] [Indexed: 01/28/2023]
Abstract
OBJECTIVES Single-sided deafness (SSD) is characterized by a profoundly deaf ear and normal hearing in the contralateral ear. A cochlear implant (CI) is the only method to restore functional hearing in a profoundly deaf ear. In a previous study, we identified that the cortical processing of a CI signal differs from that of the normal-hearing ear (NHE) when directly compared using an auditory oddball paradigm consisting of pure tones. However, exactly how the brain integrates the electrical and acoustic signals is not well investigated. This study aims to understand how the provision of the CI in combination with the NHE may improve SSD CI users' ability to discriminate and evaluate auditory stimuli. DESIGN Electroencephalography from 10 SSD-CI participants (4 of whom participated in the previous pure-tone study) was recorded during a semantic acoustic oddball task in which they were required to discriminate between odd and even numbers. Stimuli were presented in four hearing conditions: directly through the CI, directly to the NHE, or in free field with the CI switched on or off. We examined task performance (response time and accuracy) and measured N1, P2, N2N4, and P3b event-related brain potentials (ERPs) linked to the detection, discrimination, and evaluation of task-relevant stimuli. Sound localization and speech-in-noise comprehension were also examined. RESULTS In direct presentation, task performance was superior for NHE compared with CI (shorter and less varied reaction times [~720 versus ~842 msec], higher target accuracy [~93 versus ~70%]), and early neural responses (N1 and P2) were enhanced for NHE, suggesting greater signal saliency. However, the size of the N2N4 and P3b target-standard effects did not differ significantly between NHE and CI.
In free field, target accuracy was similarly high with the CI (FF-On) and without the CI (FF-Off) (~95%), with some evidence of CI interference during FF-On (more variable and slightly but significantly delayed reaction times [~737 versus ~709 msec]). Early neural responses and late effects were also greater during FF-On. Performance on sound localization and speech-in-noise comprehension (S_CI N_NHE configuration only) was significantly greater during FF-On. CONCLUSIONS Both behavioral and neural responses in the semantic oddball task were sensitive to the CI in both direct and free-field presentations. The direct conditions revealed that participants could perform the task with the CI alone, although performance was suboptimal and early neural responses were reduced compared with the NHE. For free field, the addition of the CI was associated with enhanced early and late neural responses, but this did not result in improved task performance. The enhanced neural responses show that the additional input from the CI modulates relevant perceptual and cognitive processes, but the benefit of binaural hearing on behavior may not be realized in simple oddball tasks that can be adequately performed with the NHE alone. Future studies interested in binaural hearing should examine performance under noisy conditions and/or use spatial cues to allow headroom for the measurement of binaural benefit.
25
Pregla D, Vasishth S, Lissón P, Stadie N, Burchert F. Can the resource reduction hypothesis explain sentence processing in aphasia? A visual world study in German. Brain Lang 2022; 235:105204. [PMID: 36435153 DOI: 10.1016/j.bandl.2022.105204] [Received: 04/29/2022] [Revised: 10/21/2022] [Accepted: 11/07/2022] [Indexed: 06/16/2023]
Abstract
Resource limitation has often been invoked as a key driver of sentence comprehension difficulty, in theories of both language-unimpaired and language-impaired populations. In the field of aphasia, one such influential theory is Caplan's resource reduction hypothesis (RRH). In this large investigation of online sentence processing in aphasia in German, we evaluated three key predictions of the RRH in 21 individuals with aphasia and 22 control participants. Measures of online processing were obtained by combining a sentence-picture matching task with the visual world paradigm. Four sentence types were used to investigate the generality of the findings, and two test phases were used to investigate the RRH's predictions regarding variability in aphasia. The processing patterns were consistent with two of the three predictions of the RRH. Overall, our investigation shows that the RRH can account for important aspects of sentence processing in aphasia.
26
Pastore A, Tomassini A, Delis I, Dolfini E, Fadiga L, D'Ausilio A. Speech listening entails neural encoding of invisible articulatory features. Neuroimage 2022; 264:119724. [PMID: 36328272 DOI: 10.1016/j.neuroimage.2022.119724] [Received: 06/10/2022] [Revised: 09/28/2022] [Accepted: 10/30/2022] [Indexed: 11/06/2022]
Abstract
Speech processing entails a complex interplay between bottom-up and top-down computations. The former is reflected in neural entrainment to the quasi-rhythmic properties of speech acoustics, while the latter is thought to guide the selection of the most relevant input subspace. Top-down signals are believed to originate mainly from motor regions, yet similar activity has been shown to tune attentional cycles also for simpler, non-speech stimuli. Here we examined whether, during speech listening, the brain reconstructs the articulatory patterns associated with speech production. We measured electroencephalographic (EEG) data while participants listened to sentences, during the production of which the articulatory kinematics of the lips, jaw, and tongue had also been recorded (via Electro-Magnetic Articulography, EMA). We captured the patterns of articulatory coordination through Principal Component Analysis (PCA) and used Partial Information Decomposition (PID) to identify whether the speech envelope and each of the kinematic components provided unique, synergistic, and/or redundant information about the EEG signals. Interestingly, tongue movements carry both unique and synergistic information with the envelope that is encoded in the listener's brain activity. This demonstrates that during speech listening the brain retrieves highly specific and unique motor information that is never accessible through vision, most likely leveraging audio-motor maps that arise from the acquisition of speech production during development.
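The PCA step described above, recovering shared patterns of articulatory coordination from multichannel kinematics, can be sketched on toy EMA-like data; the channel set, loadings, and sampling parameters below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(4)
n_samples = 1000
gesture = np.sin(2 * np.pi * 5 * np.arange(n_samples) / 200)  # shared open-close gesture
loadings = np.array([0.8, 0.6, 1.0, 0.9])   # toy lips / jaw / tongue-tip / tongue-body weights
ema = gesture[:, None] * loadings + rng.normal(0, 0.2, (n_samples, 4))

X = ema - ema.mean(axis=0)                        # center each channel
U, S, Vt = np.linalg.svd(X, full_matrices=False)  # PCA via singular value decomposition
explained = S ** 2 / np.sum(S ** 2)               # variance ratio per component
pc1 = X @ Vt[0]                                   # first kinematic component time series
```

Because the toy channels all load on one shared gesture, the first component captures most of the variance; on real EMA data, the leading components play the role of the "kinematic components" entered into the PID analysis.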
Affiliation(s)
- A Pastore
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy.
- A Tomassini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy
- I Delis
- School of Biomedical Sciences, University of Leeds, Leeds, UK
- E Dolfini
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- L Fadiga
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy
- A D'Ausilio
- Center for Translational Neurophysiology of Speech and Communication, Istituto Italiano di Tecnologia, Ferrara, Italy; Department of Neuroscience and Rehabilitation, Università di Ferrara, Ferrara, Italy.
27
Perry A, Hughes LE, Adams N, Naessens M, Murley AG, Rouse MA, Street D, Jones PS, Cope TE, Kocagoncu E, Rowe JB. The neurophysiological effect of NMDA-R antagonism of frontotemporal lobar degeneration is conditional on individual GABA concentration. Transl Psychiatry 2022; 12:348. [PMID: 36030249 PMCID: PMC9420128 DOI: 10.1038/s41398-022-02114-6] [Received: 04/29/2022] [Revised: 08/09/2022] [Accepted: 08/11/2022] [Indexed: 02/02/2023]
Abstract
There is a pressing need to accelerate therapeutic strategies against the syndromes caused by frontotemporal lobar degeneration, including symptomatic treatments. One approach is experimental medicine, coupling neurophysiological studies of disease mechanisms with pharmacological interventions aimed at restoring neurochemical deficits. Here we consider the role of glutamatergic deficits and their potential as treatment targets. We performed a double-blind placebo-controlled crossover pharmaco-magnetoencephalography study in 20 people with symptomatic frontotemporal lobar degeneration (10 behavioural-variant frontotemporal dementia, 10 progressive supranuclear palsy) and 19 healthy age- and gender-matched controls. A roving auditory oddball paradigm was recorded in both magnetoencephalography sessions: on placebo and following 10 mg memantine, an uncompetitive NMDA-receptor antagonist. Ultra-high-field magnetic resonance spectroscopy confirmed lower concentrations of GABA in the right inferior frontal gyrus of people with frontotemporal lobar degeneration. While memantine showed a subtle effect on early auditory processing in patients, there was no significant main effect of memantine on the magnitude of the mismatch negativity (MMN) response in the right frontotemporal cortex in patients or controls. However, the change in the right auditory cortex MMN response to memantine (vs. placebo) in patients correlated with individuals' prefrontal GABA concentration. There was no moderating effect of glutamate concentration or cortical atrophy. This proof-of-concept study demonstrates the potential for baseline dependency in the pharmacological restoration of neurotransmitter deficits to influence cognitive neurophysiology in neurodegenerative disease.
With changes to multiple neurotransmitters in frontotemporal lobar degeneration, we suggest that individuals' balance of excitation and inhibition may determine drug efficacy, with implications for drug selection and patient stratification in future clinical trials.
Affiliation(s)
- Alistair Perry
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK; Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- Laura E. Hughes
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK; Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- Natalie Adams
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- Michelle Naessens
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- Alexander G. Murley
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- Matthew A. Rouse
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK
- Duncan Street
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- P. Simon Jones
- Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- Thomas E. Cope
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK; Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- Ece Kocagoncu
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK; Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
- James B. Rowe
- MRC Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, CB2 7EF, UK; Department of Clinical Neurosciences and Cambridge University Hospitals NHS Trust, University of Cambridge, Cambridge, CB2 0QQ, UK
28
Soleimani B, Das P, Dushyanthi Karunathilake IM, Kuchinsky SE, Simon JZ, Babadi B. NLGC: Network Localized Granger Causality with Application to MEG Directional Functional Connectivity Analysis. Neuroimage 2022; 260:119496. [PMID: 35870697 PMCID: PMC9435442 DOI: 10.1016/j.neuroimage.2022.119496] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2022] [Revised: 06/21/2022] [Accepted: 07/19/2022] [Indexed: 11/25/2022] Open
Abstract
Identifying the directed connectivity that underlies networked activity between different cortical areas is critical for understanding the neural mechanisms behind sensory processing. Granger causality (GC) is widely used for this purpose in functional magnetic resonance imaging analysis, but its low temporal resolution makes it difficult to capture the millisecond-scale interactions underlying sensory processing. Magnetoencephalography (MEG) has millisecond resolution but provides only low-dimensional sensor-level linear mixtures of neural sources, which makes GC inference challenging. Conventional methods proceed in two stages: first, cortical sources are estimated from MEG using a source localization technique, and then GC is inferred among the estimated sources. However, the spatiotemporal biases in source estimation propagate into the subsequent GC analysis stage and may result in both false alarms and missed true GC links. Here, we introduce the Network Localized Granger Causality (NLGC) inference paradigm, which models the source dynamics as latent sparse multivariate autoregressive processes, estimates their parameters directly from the MEG measurements in an integrated fashion with source localization, and employs the resulting parameter estimates to produce a precise statistical characterization of the detected GC links. We offer several theoretical and algorithmic innovations within NLGC and further examine its utility via comprehensive simulations and application to MEG data from an auditory task involving tone processing in both younger and older participants. Our simulation studies reveal that NLGC is markedly robust to model mismatch, network size, and low signal-to-noise ratio, whereas the conventional two-stage methods produce high rates of false alarms and mis-detections. We also demonstrate the advantages of NLGC in revealing the cortical network-level characterization of neural activity during tone processing and resting state, delineating task- and age-related connectivity changes.
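The full-versus-reduced autoregressive comparison at the heart of GC can be sketched in a few lines. This is a toy bivariate, lag-1 illustration of the classic formulation that NLGC builds on, not the NLGC algorithm itself; the signals and coupling coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate two signals in which x Granger-causes y (y depends on lagged x).
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()

def residual_var(target, predictors):
    """Residual variance of a least-squares lag-1 regression of target on predictors."""
    X = np.column_stack([p[:-1] for p in predictors])
    beta, *_ = np.linalg.lstsq(X, target[1:], rcond=None)
    return np.var(target[1:] - X @ beta)

def granger(source, target):
    """log(reduced / full) residual variance; > 0 suggests a source -> target GC link."""
    full = residual_var(target, [target, source])      # target's past + source's past
    reduced = residual_var(target, [target])           # target's past only
    return float(np.log(reduced / full))

gc_x_to_y = granger(x, y)  # substantial: x genuinely drives y
gc_y_to_x = granger(y, x)  # near zero: the simulation has no feedback
```

In the two-stage MEG pipeline criticized above, `x` and `y` would be estimated source time courses, so localization errors leak directly into these variance ratios; NLGC instead estimates the autoregressive parameters jointly with source localization.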
Affiliation(s)
- Behrad Soleimani
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA; Institute for Systems Research, University of Maryland, College Park, MD, USA
- Proloy Das
- Department of Anesthesia, Critical Care and Pain Medicine, Massachusetts General Hospital, Boston, MA, USA
- I M Dushyanthi Karunathilake
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA; Institute for Systems Research, University of Maryland, College Park, MD, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA; Institute for Systems Research, University of Maryland, College Park, MD, USA; Department of Biology, University of Maryland, College Park, MD, USA
- Behtash Babadi
- Department of Electrical and Computer Engineering, University of Maryland, College Park, MD, USA; Institute for Systems Research, University of Maryland, College Park, MD, USA
29
Sherafati A, Dwyer N, Bajracharya A, Hassanpour MS, Eggebrecht AT, Firszt JB, Culver JP, Peelle JE. Prefrontal cortex supports speech perception in listeners with cochlear implants. eLife 2022; 11:e75323. [PMID: 35666138 PMCID: PMC9225001 DOI: 10.7554/elife.75323] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Accepted: 06/04/2022] [Indexed: 12/14/2022] Open
Abstract
Cochlear implants are neuroprosthetic devices that can restore hearing in people with severe to profound hearing loss by electrically stimulating the auditory nerve. Because of physical limitations on the precision of this stimulation, the acoustic information delivered by a cochlear implant does not convey the same level of acoustic detail as that conveyed by normal hearing. As a result, speech understanding in listeners with cochlear implants is typically poorer and more effortful than in listeners with normal hearing. The brain networks supporting speech understanding in listeners with cochlear implants are not well understood, partly due to difficulties obtaining functional neuroimaging data in this population. In the current study, we assessed the brain regions supporting spoken word understanding in adult listeners with right unilateral cochlear implants (n=20) and matched controls (n=18) using high-density diffuse optical tomography (HD-DOT), a quiet and non-invasive imaging modality with spatial resolution comparable to that of functional MRI. We found that while listening to spoken words in quiet, listeners with cochlear implants showed greater activity in the left prefrontal cortex than listeners with normal hearing, specifically in a region engaged in a separate spatial working memory task. These results suggest that listeners with cochlear implants require greater cognitive processing during speech understanding than listeners with normal hearing, supported by compensatory recruitment of the left prefrontal cortex.
Affiliation(s)
- Arefeh Sherafati
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Noel Dwyer
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Aahana Bajracharya
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Adam T Eggebrecht
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Jill B Firszt
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
- Joseph P Culver
- Department of Radiology, Washington University in St. Louis, St. Louis, United States
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, United States
- Division of Biology and Biomedical Sciences, Washington University in St. Louis, St. Louis, United States
- Department of Physics, Washington University in St. Louis, St. Louis, United States
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, United States
30
Bröhl F, Keitel A, Kayser C. MEG Activity in Visual and Auditory Cortices Represents Acoustic Speech-Related Information during Silent Lip Reading. eNeuro 2022; 9:ENEURO.0209-22.2022. [PMID: 35728955 PMCID: PMC9239847 DOI: 10.1523/eneuro.0209-22.2022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/30/2022] [Accepted: 06/06/2022] [Indexed: 11/21/2022] Open
Abstract
Speech is an intrinsically multisensory signal, and seeing the speaker's lips forms a cornerstone of communication in acoustically impoverished environments. Still, it remains unclear how the brain exploits visual speech for comprehension. Previous work debated whether lip signals are mainly processed along the auditory pathways or whether the visual system directly implements speech-related processes. To probe this, we systematically characterized dynamic representations of multiple acoustic and visual speech-derived features in source localized MEG recordings that were obtained while participants listened to speech or viewed silent speech. Using a mutual-information framework we provide a comprehensive assessment of how well temporal and occipital cortices reflect the physically presented signals and unique aspects of acoustic features that were physically absent but may be critical for comprehension. Our results demonstrate that both cortices feature a functionally specific form of multisensory restoration: during lip reading, they reflect unheard acoustic features, independent of co-existing representations of the visible lip movements. This restoration emphasizes the unheard pitch signature in occipital cortex and the speech envelope in temporal cortex and is predictive of lip-reading performance. These findings suggest that when seeing the speaker's lips, the brain engages both visual and auditory pathways to support comprehension by exploiting multisensory correspondences between lip movements and spectro-temporal acoustic cues.
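The mutual-information framework behind such analyses can be illustrated with a minimal histogram estimator. This is a generic sketch with synthetic signals; the simple binned estimator and all variable names are illustrative assumptions, not the specific estimator used in the study:

```python
import numpy as np

rng = np.random.default_rng(1)

def mutual_info_bits(a, b, bins=16):
    """Histogram estimate of mutual information (in bits) between two 1-D signals."""
    pab, _, _ = np.histogram2d(a, b, bins=bins)
    pab /= pab.sum()                       # joint distribution
    pa = pab.sum(axis=1, keepdims=True)    # marginal of a
    pb = pab.sum(axis=0, keepdims=True)    # marginal of b
    nz = pab > 0                           # avoid log(0)
    return float(np.sum(pab[nz] * np.log2(pab[nz] / (pa @ pb)[nz])))

n = 20000
envelope = rng.standard_normal(n)              # stand-in for a speech feature
meg = 0.8 * envelope + rng.standard_normal(n)  # cortical signal tracking that feature
unrelated = rng.standard_normal(n)             # control feature, independent of meg

mi_tracked = mutual_info_bits(meg, envelope)   # clearly above zero
mi_control = mutual_info_bits(meg, unrelated)  # near zero (small positive bias only)
```

Naive binned estimates carry a small positive bias for independent signals, which is why studies in this literature typically use bias-corrected or copula-based estimators and permutation statistics.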
Affiliation(s)
- Felix Bröhl
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld 33615, Germany
- Anne Keitel
- Psychology, University of Dundee, Dundee DD1 4HN, United Kingdom
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld 33615, Germany
31
Disentangling Reversal-learning Impairments in Frontotemporal Dementia and Alzheimer Disease. Cogn Behav Neurol 2022; 35:110-122. [PMID: 35486540 DOI: 10.1097/wnn.0000000000000303] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 09/09/2021] [Indexed: 11/26/2022]
Abstract
BACKGROUND: Individuals with frontotemporal dementia (FTD) often present with poor decision-making, which can affect both their financial and social situations. Delineation of the specific cognitive impairments giving rise to impaired decision-making in individuals with FTD may inform treatment strategies, as different neurotransmitter systems have been associated with distinct patterns of altered decision-making.
OBJECTIVE: To use a reversal-learning paradigm to identify the specific cognitive components of reversal learning that are most impaired in individuals with FTD and those with Alzheimer disease (AD), in order to inform future approaches to treatment for symptoms related to poor decision-making and behavioral inflexibility.
METHOD: We administered a stimulus-discrimination reversal-learning task to 30 individuals with either the behavioral variant of FTD or AD and to 18 healthy controls, then compared performance in each phase between the groups.
RESULTS: The FTD group demonstrated impairments in initial stimulus-association learning, though to a lesser degree than the AD group. The FTD group also performed poorly in classic reversal learning, with the greatest impairments observed in individuals with frontal-predominant atrophy during trials requiring inhibition of a previously advantageous response.
CONCLUSION: Taken together, these results and the reversal-learning paradigm used in this study may inform the development and screening of behavioral, neurostimulatory, or pharmacologic interventions aiming to address behavioral symptoms related to stimulus-reinforcement learning and response-inhibition impairments in individuals with FTD.
32
Cope TE, Hughes LE, Phillips HN, Adams NE, Jafarian A, Nesbitt D, Assem M, Woolgar A, Duncan J, Rowe JB. Causal Evidence for the Multiple Demand Network in Change Detection: Auditory Mismatch Magnetoencephalography across Focal Neurodegenerative Diseases. J Neurosci 2022; 42:3197-3215. [PMID: 35260433 PMCID: PMC8994545 DOI: 10.1523/jneurosci.1622-21.2022] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 01/04/2022] [Accepted: 01/05/2022] [Indexed: 02/02/2023] Open
Abstract
The multiple demand (MD) system is a network of fronto-parietal brain regions active during the organization and control of diverse cognitive operations. It has been argued that this activation may be a nonspecific signal of task difficulty. However, here we provide convergent evidence for a causal role for the MD network in the "simple task" of automatic auditory change detection, through the impairment of top-down control mechanisms. We employ independent structure-function mapping, dynamic causal modeling (DCM), and frequency-resolved functional connectivity analyses of MRI and magnetoencephalography (MEG) from 75 mixed-sex human patients across four neurodegenerative syndromes [behavioral variant frontotemporal dementia (bvFTD), nonfluent variant primary progressive aphasia (nfvPPA), posterior cortical atrophy (PCA), and Alzheimer's disease mild cognitive impairment with positive amyloid imaging (ADMCI)] and 48 age-matched controls. We show that atrophy of any MD node is sufficient to impair the auditory neurophysiological response to change in frequency, location, intensity, continuity, or duration. There was no similar association with atrophy of the cingulo-opercular, salience, or language networks, or with global atrophy. MD regions displayed increased functional but decreased effective connectivity as a function of neurodegeneration, suggesting partially effective compensation. Overall, we show that damage to any of the nodes of the MD network is sufficient to impair top-down control of sensation, providing a common mechanism for impaired change detection across dementia syndromes.
SIGNIFICANCE STATEMENT: Previous evidence for fronto-parietal networks controlling perception is largely associative and may be confounded by task difficulty. Here, we use a preattentive measure of automatic auditory change detection [mismatch negativity (MMN) magnetoencephalography (MEG)] to show that neurodegeneration in any frontal or parietal multiple demand (MD) node impairs the primary auditory cortex (A1) neurophysiological response to change through top-down mechanisms. This explains why an impaired ability to respond to change is a core feature across dementias and other conditions driven by brain network dysfunction, such as schizophrenia. It validates theoretical frameworks in which neurodegenerating networks upregulate connectivity as partially effective compensation. The significance extends beyond network science and dementia, in its construct validation of dynamic causal modeling (DCM) and its human confirmation of frequency-resolved analyses of animal neurodegeneration models.
Affiliation(s)
- Thomas E Cope
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Cambridge University Hospitals NHS Trust, Cambridge CB2 0SZ, United Kingdom
- Laura E Hughes
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Holly N Phillips
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge, Cambridge CB2 7EF, United Kingdom
- Natalie E Adams
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Amirhossein Jafarian
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- David Nesbitt
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Moataz Assem
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Alexandra Woolgar
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- John Duncan
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge, Cambridge CB2 7EF, United Kingdom
- Department of Experimental Psychology, University of Oxford, Oxford OX1 3UD, United Kingdom
- James B Rowe
- Department of Clinical Neurosciences, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Cognition and Brain Sciences Unit, Medical Research Council, Cambridge CB2 7EF, United Kingdom
- Cambridge Centre for Ageing and Neuroscience (Cam-CAN), University of Cambridge, Cambridge CB2 7EF, United Kingdom
- Cambridge University Hospitals NHS Trust, Cambridge CB2 0SZ, United Kingdom
33
Marchetta P, Eckert P, Lukowski R, Ruth P, Singer W, Rüttiger L, Knipper M. Loss of central mineralocorticoid or glucocorticoid receptors impacts auditory nerve processing in the cochlea. iScience 2022; 25:103981. [PMID: 35281733 PMCID: PMC8914323 DOI: 10.1016/j.isci.2022.103981] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2021] [Revised: 01/26/2022] [Accepted: 02/21/2022] [Indexed: 02/08/2023] Open
Abstract
The key auditory signature that may associate peripheral hearing with central auditory cognitive defects remains elusive. Suggesting the involvement of stress receptors, we here deleted the mineralocorticoid and glucocorticoid receptors (MR and GR) using a CaMKIIα-based tamoxifen-inducible CreERT2/loxP approach to generate mice with single or double deletion of central, but not cochlear, MR and GR. Hearing thresholds of MRGRCaMKIIαCreERT2 conditional knockouts (cKO) were unchanged, whereas auditory nerve fiber (ANF) responses were larger and faster and auditory steady-state responses were improved. Subsequent analysis of single MR or GR cKO revealed discrete roles for both central MR and GR in cochlear function. Limbic MR deletion reduced inner hair cell (IHC) ribbon numbers and ANF responses. In contrast, GR deletion shortened the latency of, and improved the synchronization to, amplitude-modulated tones without affecting IHC ribbon numbers. These findings imply that stress hormone-dependent functions of central MR/GR contribute to “precognitive” sound processing in the cochlea.
Highlights:
- Top-down MR/GR signaling differentially contributes to cochlear sound processing
- Limbic MR stimulates auditory nerve fiber discharge rates
- Central GR deteriorates auditory nerve fiber synchrony
Affiliation(s)
- Philine Marchetta
- University of Tübingen, Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, Molecular Physiology of Hearing, Elfriede-Aulhorn-Straße 5, 72076 Tübingen, Germany
- Philipp Eckert
- University of Tübingen, Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, Molecular Physiology of Hearing, Elfriede-Aulhorn-Straße 5, 72076 Tübingen, Germany
- Robert Lukowski
- University of Tübingen, Institute of Pharmacy, Pharmacology, Toxicology and Clinical Pharmacy, 72076 Tübingen, Germany
- Peter Ruth
- University of Tübingen, Institute of Pharmacy, Pharmacology, Toxicology and Clinical Pharmacy, 72076 Tübingen, Germany
- Wibke Singer
- University of Tübingen, Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, Molecular Physiology of Hearing, Elfriede-Aulhorn-Straße 5, 72076 Tübingen, Germany
- Lukas Rüttiger
- University of Tübingen, Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, Molecular Physiology of Hearing, Elfriede-Aulhorn-Straße 5, 72076 Tübingen, Germany
- Marlies Knipper
- University of Tübingen, Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, Molecular Physiology of Hearing, Elfriede-Aulhorn-Straße 5, 72076 Tübingen, Germany
34
Schaffer KM, Wauters L, Berstis K, Grasso SM, Henry ML. Modified script training for nonfluent/agrammatic primary progressive aphasia with significant hearing loss: A single-case experimental design. Neuropsychol Rehabil 2022; 32:306-335. [PMID: 33023372 PMCID: PMC8252664 DOI: 10.1080/09602011.2020.1822188] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Speech-language pathology caseloads often include individuals with hearing loss and a coexisting neurogenic communication disorder. However, specific treatment techniques and modifications designed to accommodate this population are understudied. Using a single-case experimental design, the current study investigated the utility of modified Video Implemented Script Training for Aphasia (VISTA) for an individual with nonfluent/agrammatic variant primary progressive aphasia and severe-to-profound, bilateral hearing loss. We analyzed the impact of this intervention, which incorporates orthographic input and rehearsal, on script production accuracy, speech intelligibility, grammatical complexity, mean length of utterance, and speech rate. Treatment resulted in comparable positive outcomes relative to a previous study evaluating script training in nonfluent/agrammatic primary progressive aphasia patients with functional hearing. Follow-up data obtained at three months, six months, and one year post-treatment confirmed maintenance of treatment effects for trained scripts. To our knowledge, this is the first study to investigate a modified speech-language intervention tailored to the needs of an individual with PPA and hearing loss, with findings confirming that simple treatment modifications may serve to broaden the range of treatment options available to those with concomitant sensory and communication impairments.
Affiliation(s)
- Kristin M. Schaffer
- Department of Communication Sciences and Disorders, The University of Texas at Austin
- Lisa Wauters
- Department of Communication Sciences and Disorders, The University of Texas at Austin
- Karinne Berstis
- Department of Communication Sciences and Disorders, The University of Texas at Austin
- Stephanie M. Grasso
- Department of Communication Sciences and Disorders, The University of Texas at Austin
- Maya L. Henry
- Department of Communication Sciences and Disorders, The University of Texas at Austin
35
Palaniyappan L. Dissecting the neurobiology of linguistic disorganisation and impoverishment in schizophrenia. Semin Cell Dev Biol 2021; 129:47-60. [PMID: 34507903 DOI: 10.1016/j.semcdb.2021.08.015] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2021] [Revised: 08/13/2021] [Accepted: 05/06/2021] [Indexed: 12/16/2022]
Abstract
Schizophrenia provides a quintessential disease model of how disturbances in the molecular mechanisms of neurodevelopment lead to disruptions in the emergence of cognition. The central and often persistent feature of this illness is the disorganisation and impoverishment of language and related expressive behaviours. Though clinically more prominent, the periodic perceptual distortions characterised as psychosis are non-specific and often episodic. While several insights into psychosis have been gained based on study of the dopaminergic system, the mechanistic basis of linguistic disorganisation and impoverishment is still elusive. Key findings from cellular to systems-level studies highlight the role of ubiquitous, inhibitory processes in language production. Dysregulation of these processes at critical time periods, in key brain areas, provides a surprisingly parsimonious account of linguistic disorganisation and impoverishment in schizophrenia. This review links the notion of excitatory/inhibitory (E/I) imbalance at cortical microcircuits to the expression of language behaviour characteristic of schizophrenia, through the building blocks of neurochemistry, neurophysiology, and neurocognition.
Affiliation(s)
- Lena Palaniyappan
- Department of Psychiatry, University of Western Ontario, London, Ontario, Canada; Robarts Research Institute, University of Western Ontario, London, Ontario, Canada; Lawson Health Research Institute, London, Ontario, Canada
36
Expertise Modulates Neural Stimulus-Tracking. eNeuro 2021; 8:ENEURO.0065-21.2021. [PMID: 34341067 PMCID: PMC8371925 DOI: 10.1523/eneuro.0065-21.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 06/14/2021] [Accepted: 06/16/2021] [Indexed: 11/21/2022] Open
Abstract
How does the brain anticipate information in language? When people perceive speech, low-frequency (<10 Hz) activity in the brain synchronizes with bursts of sound and visual motion. This phenomenon, called cortical stimulus-tracking, is thought to be one way that the brain predicts the timing of upcoming words, phrases, and syllables. In this study, we test whether stimulus-tracking depends on domain-general expertise or on language-specific prediction mechanisms. We go on to examine how the effects of expertise differ between frontal and sensory cortex. We recorded electroencephalography (EEG) from human participants who were experts in either sign language or ballet, and we compared stimulus-tracking between groups while participants watched videos of sign language or ballet. We measured stimulus-tracking by computing coherence between EEG recordings and visual motion in the videos. Results showed that stimulus-tracking depends on domain-general expertise, and not on language-specific prediction mechanisms. At frontal channels, fluent signers showed stronger coherence to sign language than to dance, whereas expert dancers showed stronger coherence to dance than to sign language. At occipital channels, however, the two groups of participants did not show different patterns of coherence. These results are difficult to explain by entrainment of endogenous oscillations, because neither sign language nor dance show any periodicity at the frequencies of significant expertise-dependent stimulus-tracking. These results suggest that the brain may rely on domain-general predictive mechanisms to optimize perception of temporally-predictable stimuli such as speech, sign language, and dance.
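Coherence between a recording channel and a stimulus time course can be sketched with synthetic data. Here scipy's Welch-based coherence stands in for the paper's analysis pipeline; the 2 Hz "motion" rhythm, mixing weights, and signal lengths are invented for illustration:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(2)
fs = 100.0                          # sampling rate (Hz)
t = np.arange(0, 120, 1 / fs)       # two minutes of "recording"

# Low-frequency stimulus time course (e.g., visual motion energy near 2 Hz).
stim = np.sin(2 * np.pi * 2.0 * t) + 0.5 * rng.standard_normal(t.size)
# An EEG-like channel that partially tracks the stimulus, plus independent noise.
eeg = 0.6 * stim + rng.standard_normal(t.size)

# Welch coherence: near 1 where the channel tracks the stimulus, low elsewhere.
f, cxy = coherence(eeg, stim, fs=fs, nperseg=512)
peak_freq = f[np.argmax(cxy)]       # sits near the 2 Hz stimulus rhythm
```

Comparing such coherence spectra between groups (e.g., signers vs. dancers) at frontal and occipital channels is the kind of contrast the study describes; note that coherence alone does not distinguish entrained oscillations from evoked tracking, which is why the authors appeal to the stimuli's lack of periodicity.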
37
Benhamou E, Zhao S, Sivasathiaseelan H, Johnson JCS, Requena-Komuro MC, Bond RL, van Leeuwen JEP, Russell LL, Greaves CV, Nelson A, Nicholas JM, Hardy CJD, Rohrer JD, Warren JD. Decoding expectation and surprise in dementia: the paradigm of music. Brain Commun 2021; 3:fcab173. [PMID: 34423301 PMCID: PMC8376684 DOI: 10.1093/braincomms/fcab173] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 05/31/2021] [Indexed: 01/08/2023] Open
Abstract
Making predictions about the world and responding appropriately to unexpected events are essential functions of the healthy brain. In neurodegenerative disorders, such as frontotemporal dementia and Alzheimer's disease, impaired processing of 'surprise' may underpin a diverse array of symptoms, particularly abnormalities of social and emotional behaviour, but is challenging to characterize. Here, we addressed this issue using a novel paradigm: music. We studied 62 patients (24 female; aged 53-88) representing major syndromes of frontotemporal dementia (behavioural variant, semantic variant primary progressive aphasia, non-fluent-agrammatic variant primary progressive aphasia) and typical amnestic Alzheimer's disease, in relation to 33 healthy controls (18 female; aged 54-78). Participants heard famous melodies containing no deviants or one of three types of deviant note: acoustic (white-noise burst), syntactic (key-violating pitch change), or semantic (key-preserving pitch change). Using a regression model that took elementary perceptual, executive, and musical competence into account, we assessed accuracy in detecting melodic deviants; we simultaneously recorded pupillary responses and related these to deviant surprise value (information content) and carrier melody predictability (entropy), calculated using an unsupervised machine learning model of music. Neuroanatomical associations of deviant detection accuracy, and of the coupling of detection to deviant surprise value, were assessed using voxel-based morphometry of patients' brain MRI. Whereas Alzheimer's disease was associated with normal deviant detection accuracy, behavioural and semantic variant frontotemporal dementia syndromes were associated with strikingly similar profiles of impaired syntactic and semantic deviant detection accuracy and impaired behavioural and autonomic sensitivity to deviant information content (all P < 0.05). On the other hand, non-fluent-agrammatic primary progressive aphasia was associated with generalized impairment of deviant discriminability (P < 0.05) due to excessive false alarms, despite retained behavioural and autonomic sensitivity to deviant information content and melody predictability. Across the patient cohort, grey matter correlates of acoustic deviant detection accuracy were identified in precuneus and mid and mesial temporal regions; correlates of syntactic deviant detection accuracy and information-content processing in inferior frontal and anterior temporal cortices, putamen, and nucleus accumbens; and a common correlate of musical salience coding in supplementary motor area (all P < 0.05, corrected for multiple comparisons in pre-specified regions of interest). Our findings suggest that major dementias have distinct profiles of sensory 'surprise' processing, as instantiated in music. Music may be a useful and informative paradigm for probing the predictive decoding of complex sensory environments in neurodegenerative proteinopathies, with implications for understanding and measuring the core pathophysiology of these diseases.
Affiliation(s)
- Elia Benhamou
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Sijia Zhao
- Department of Experimental Psychology, University of Oxford, Oxford OX2 6GG, UK
- Harri Sivasathiaseelan
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Jeremy C S Johnson
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Maï-Carmen Requena-Komuro
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Rebecca L Bond
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Janneke E P van Leeuwen
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Lucy L Russell
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Caroline V Greaves
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Annabel Nelson
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Jennifer M Nicholas
- Department of Medical Statistics, Faculty of Epidemiology and Population Health, London School of Hygiene and Tropical Medicine, London, UK
- Chris J D Hardy
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Jonathan D Rohrer
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Jason D Warren
- Dementia Research Centre, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
38
Klimovich-Gray A, Barrena A, Agirre E, Molinaro N. One Way or Another: Cortical Language Areas Flexibly Adapt Processing Strategies to Perceptual and Contextual Properties of Speech. Cereb Cortex 2021; 31:4092-4103. [PMID: 33825884 DOI: 10.1093/cercor/bhab071] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2020] [Revised: 02/24/2021] [Accepted: 02/25/2021] [Indexed: 11/13/2022] Open
Abstract
Cortical circuits rely on the temporal regularities of speech to optimize signal parsing for sound-to-meaning mapping. Bottom-up speech analysis is accelerated by top-down predictions about upcoming words. In everyday communications, however, listeners are regularly presented with challenging input: fluctuations of speech rate or semantic content. In this study, we asked how reducing speech temporal regularity affects its processing: parsing, phonological analysis, and the ability to generate context-based predictions. To ensure that spoken sentences were natural and approximated semantic constraints of spontaneous speech, we built a neural network to select stimuli from large corpora. We analyzed brain activity recorded with magnetoencephalography during sentence listening using evoked responses, speech-to-brain synchronization and representational similarity analysis. For normal speech, theta-band (6.5-8 Hz) speech-to-brain synchronization was increased and the left fronto-temporal areas generated stronger contextual predictions. The reverse was true for temporally irregular speech: weaker theta synchronization and reduced top-down effects. Interestingly, delta-band (0.5 Hz) speech tracking was greater when contextual/semantic predictions were lower or if speech was temporally jittered. We conclude that speech temporal regularity is relevant for (theta) syllabic tracking and robust semantic predictions, while the joint support of temporal and contextual predictability reduces word and phrase-level cortical tracking (delta).
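Speech-to-brain synchronization of the kind reported here is commonly quantified as phase consistency between the speech envelope and the neural signal within a frequency band. As a sketch only (this is not the authors' pipeline; the 7 Hz toy signals and filter parameters are invented), a theta-band phase-locking value can be computed like this:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_plv(x, y, fs, band):
    """Phase-locking value between two signals restricted to a band:
    PLV = |mean(exp(i * (phase_x - phase_y)))|, from 0 (none) to 1."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype='band')
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))
    return float(np.abs(np.mean(np.exp(1j * (phase_x - phase_y)))))

# Toy demo: a 7 Hz "speech envelope" and a noisy, phase-shifted copy
# standing in for the neural response should be strongly phase locked.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(0, 10, 1 / fs)
envelope = np.sin(2 * np.pi * 7 * t)
brain = np.sin(2 * np.pi * 7 * t - 0.8) + 0.3 * rng.standard_normal(t.size)
theta_plv = band_plv(envelope, brain, fs, (6.5, 8.0))
```

A constant phase lag (as here) still yields a PLV near 1; it is phase *drift* across time or trials that drives the value down, which is why weaker theta tracking of irregular speech registers as lower synchronization.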
Affiliation(s)
- Ander Barrena
- Computer Science Faculty, University of the Basque Country, Donostia, 20018, San Sebastian, Spain
- Eneko Agirre
- Computer Science Faculty, University of the Basque Country, Donostia, 20018, San Sebastian, Spain
- Nicola Molinaro
- BCBL, Basque Center on Cognition, Brain and Language, Donostia, 20009, San Sebastian, Spain
- Ikerbasque, Basque Foundation for Science, 48009, Bilbao, Spain
39
Kocagoncu E, Klimovich-Gray A, Hughes LE, Rowe JB. Evidence and implications of abnormal predictive coding in dementia. Brain 2021; 144:3311-3321. [PMID: 34240109 PMCID: PMC8677549 DOI: 10.1093/brain/awab254] [Citation(s) in RCA: 15] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/06/2020] [Revised: 03/15/2021] [Accepted: 06/17/2021] [Indexed: 11/14/2022] Open
Abstract
The diversity of cognitive deficits and neuropathological processes associated with dementias has encouraged divergence in pathophysiological explanations of disease. Here, we review an alternative framework that emphasizes convergent critical features of cognitive pathophysiology. Rather than the loss of ‘memory centres’ or ‘language centres’, or singular neurotransmitter systems, cognitive deficits are interpreted in terms of aberrant predictive coding in hierarchical neural networks. This builds on advances in normative accounts of brain function, specifically the Bayesian integration of beliefs and sensory evidence in which hierarchical predictions and prediction errors underlie memory, perception, speech and behaviour. We describe how analogous impairments in predictive coding in parallel neurocognitive systems can generate diverse clinical phenomena, including the characteristics of dementias. The review presents evidence from behavioural and neurophysiological studies of perception, language, memory and decision-making. The reformulation of cognitive deficits in terms of predictive coding has several advantages. It brings diverse clinical phenomena into a common framework; it aligns cognitive and movement disorders; and it makes specific predictions on cognitive physiology that support translational and experimental medicine studies. The insights into complex human cognitive disorders from the predictive coding framework may therefore also inform future therapeutic strategies.
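The "Bayesian integration of beliefs and sensory evidence" at the heart of this framework can be made concrete with a toy Gaussian example: a prior belief and an observation are fused in proportion to their precisions (inverse variances), so reduced sensory precision, as hypothesized for some disorders, leaves the prior dominant. A minimal sketch with invented numbers:

```python
def fuse(prior_mean, prior_precision, obs, obs_precision):
    """Precision-weighted Gaussian fusion: the posterior mean is the
    average of prior belief and evidence, weighted by their precisions."""
    posterior_precision = prior_precision + obs_precision
    posterior_mean = (prior_precision * prior_mean
                      + obs_precision * obs) / posterior_precision
    return posterior_mean, posterior_precision

# Precise evidence pulls the belief toward the observation...
print(fuse(0.0, 1.0, 1.0, 4.0))   # (0.8, 5.0)
# ...while imprecise evidence barely moves it (aberrant precision
# weighting leaves perception dominated by prior expectations).
print(fuse(0.0, 1.0, 1.0, 0.25))  # (0.2, 1.25)
```

In hierarchical predictive coding, the precision-weighted difference between observation and prediction (the prediction error) is what each level passes upward; this two-number example shows only the weighting step, not a full hierarchy.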
Affiliation(s)
- Ece Kocagoncu
- Cambridge Centre for Frontotemporal Dementia, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Laura E Hughes
- Cambridge Centre for Frontotemporal Dementia, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
- James B Rowe
- Cambridge Centre for Frontotemporal Dementia, Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, Cambridge, UK
40
Momtaz S, Moncrieff D, Bidelman GM. Dichotic listening deficits in amblyaudia are characterized by aberrant neural oscillations in auditory cortex. Clin Neurophysiol 2021; 132:2152-2162. [PMID: 34284251 DOI: 10.1016/j.clinph.2021.04.022] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2021] [Revised: 04/16/2021] [Accepted: 04/29/2021] [Indexed: 12/25/2022]
Abstract
OBJECTIVE Children diagnosed with auditory processing disorder (APD) show deficits in processing complex sounds that are associated with difficulties in higher-order language, learning, cognitive, and communicative functions. Amblyaudia (AMB) is a subcategory of APD characterized by abnormally large ear asymmetries in dichotic listening tasks. METHODS Here, we examined frequency-specific neural oscillations and functional connectivity via high-density electroencephalography (EEG) in children with and without AMB during passive listening to nonspeech stimuli. RESULTS Time-frequency maps of these "brain rhythms" revealed stronger phase-locked beta-gamma (~35 Hz) oscillations in AMB participants within bilateral auditory cortex for sounds presented to the right ear, suggesting a hypersynchronization and imbalance of auditory neural activity. Brain-behavior correlations revealed that neural asymmetries in cortical responses predicted the larger-than-normal right-ear advantage seen in participants with AMB. Additionally, we found weaker functional connectivity in the AMB group from right to left auditory cortex, despite their stronger neural responses overall. CONCLUSION Our results reveal abnormally large auditory sensory encoding and an imbalance in communication between cerebral hemispheres (ipsi- to contralateral signaling) in AMB. SIGNIFICANCE These neurophysiological changes might lead to the functionally poorer behavioral capacity to integrate information between the two ears in children with AMB.
Affiliation(s)
- Sara Momtaz
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Deborah Moncrieff
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Gavin M Bidelman
- School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA
- University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA
41
Stalpaert J, Miatton M, Sieben A, Van Langenhove T, van Mierlo P, De Letter M. The Electrophysiological Correlates of Phoneme Perception in Primary Progressive Aphasia: A Preliminary Case Series. Front Hum Neurosci 2021; 15:618549. [PMID: 34149376 PMCID: PMC8206281 DOI: 10.3389/fnhum.2021.618549] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2020] [Accepted: 04/30/2021] [Indexed: 12/03/2022] Open
Abstract
Aims: This study aimed to investigate phoneme perception in patients with primary progressive aphasia (PPA) by using the event-related potential (ERP) technique. These ERP components might contribute to the diagnostic process of PPA and its clinical variants (NFV: nonfluent variant, SV: semantic variant, LV: logopenic variant) and reveal insights about phoneme perception processes in these patients. Method: Phoneme discrimination and categorization processes were investigated by the mismatch negativity (MMN) and P300 in eight persons with early- and late-stage PPA (3 NFV, 2 LV, 2 SV, and 1 PPA-NOS, not otherwise specified) and 30 age-matched healthy adults. The mean amplitude, the onset latency, and the topographic distribution of both components in each patient were compared to the results of the control group. Results: The MMN was absent or the onset latency of the MMN was delayed in the patients with the NFV, LV, and PPA-NOS in comparison to the control group. In contrast, no differences in mean amplitudes and onset latencies of the MMN were found between the patients with the SV and the control group. Concerning the P300, variable results were found in the patients with the NFV, SV, and PPA-NOS, but the P300 of both patients with the LV was delayed and prolonged with increased mean amplitude in comparison to the control group. Conclusion: In this preliminary study, phoneme discrimination deficits were found in the patients with the NFV and LV, and variable deficits in phoneme categorization processes were found in all patients with PPA. In clinical practice, the MMN might be valuable to differentiate the SV from the NFV and the LV, and the P300 to differentiate the LV from the NFV and the SV. Further research in larger and independent patient groups is required to investigate the applicability of these components in the diagnostic process and to determine the nature of these speech perception deficits in the clinical variants of PPA.
Affiliation(s)
- Jara Stalpaert
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
- Marijke Miatton
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Anne Sieben
- Department of Neurology, Ghent University Hospital, Ghent, Belgium
- Pieter van Mierlo
- Department of Electronics and Information Systems, Medical Image and Signal Processing Group, Ghent University, Ghent, Belgium
- Miet De Letter
- Department of Rehabilitation Sciences, Faculty of Medicine and Health Sciences, Ghent University, Ghent, Belgium
42
Daikoku T, Wiggins GA, Nagai Y. Statistical Properties of Musical Creativity: Roles of Hierarchy and Uncertainty in Statistical Learning. Front Neurosci 2021; 15:640412. [PMID: 33958983 PMCID: PMC8093513 DOI: 10.3389/fnins.2021.640412] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 03/10/2021] [Indexed: 12/18/2022] Open
Abstract
Creativity is part of human nature and is commonly understood as a phenomenon whereby something original and worthwhile is formed. Owing to this ability, humans can produce innovative information that often facilitates growth in our society. Creativity also contributes to esthetic and artistic productions, such as music and art. However, the mechanism by which creativity emerges in the brain remains debatable. Recently, a growing body of evidence has suggested that statistical learning contributes to creativity. Statistical learning is an innate and implicit function of the human brain and is considered essential for brain development. Through statistical learning, humans can produce and comprehend structured information, such as music. It is thought that creativity is linked to acquired knowledge, but so-called "eureka" moments often occur unexpectedly under subconscious conditions, without the intention to use the acquired knowledge. Given that a creative moment is intrinsically implicit, we postulate that some types of creativity can be linked to implicit statistical knowledge in the brain. This article reviews neural and computational studies on how creativity emerges within the framework of statistical learning in the brain (i.e., statistical creativity). Here, we propose a hierarchical model of statistical learning: statistically chunking into a unit (hereafter, shallow statistical learning) and combining several units (hereafter, deep statistical learning). We suggest that deep statistical learning contributes dominantly to statistical creativity in music. Furthermore, the temporal dynamics of perceptual uncertainty can be another potential causal factor in statistical creativity. Considering that statistical learning is fundamental to brain development, we also discuss how typical versus atypical brain development modulates hierarchical statistical learning and statistical creativity.
We believe that this review will shed light on the key roles of statistical learning in musical creativity and facilitate further investigation of how creativity emerges in the brain.
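The first level of the proposed hierarchy, chunking a sequence into units (shallow statistical learning), is often illustrated with transitional-probability segmentation: unit boundaries fall where one symbol poorly predicts the next. The sketch below is only a schematic reading of that idea, not the authors' model; the toy stream and the threshold are invented:

```python
from collections import Counter, defaultdict

def transition_probs(seq):
    """Maximum-likelihood transitional probabilities P(next | current)."""
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def chunk(seq, probs, threshold=0.9):
    """Shallow statistical learning: open a new unit wherever the
    transitional probability drops below the threshold."""
    units, current = [], [seq[0]]
    for a, b in zip(seq, seq[1:]):
        if probs[a].get(b, 0.0) < threshold:
            units.append(current)
            current = []
        current.append(b)
    units.append(current)
    return units

# 'AB', 'CD' and 'EF' are internally deterministic "words"; transitions
# between words are variable, so the stream splits at word boundaries.
stream = list('ABCDABEFCDEFABCD')
units = chunk(stream, transition_probs(stream))
print([''.join(u) for u in units])
# ['AB', 'CD', 'AB', 'EF', 'CD', 'EF', 'AB', 'CD']
```

Deep statistical learning would then operate over the extracted units themselves (sequences of 'AB', 'CD', 'EF'), learning which unit combinations are probable, which is where the review locates musical creativity.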
Affiliation(s)
- Tatsuya Daikoku
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Geraint A. Wiggins
- AI Lab, Vrije Universiteit Brussel, Brussels, Belgium
- School of Electronic Engineering and Computer Science, Queen Mary University of London, London, United Kingdom
- Yukie Nagai
- International Research Center for Neurointelligence (WPI-IRCN), The University of Tokyo, Tokyo, Japan
- Institute for AI and Beyond, The University of Tokyo, Tokyo, Japan
43
Jiang J, Benhamou E, Waters S, Johnson JCS, Volkmer A, Weil RS, Marshall CR, Warren JD, Hardy CJD. Processing of Degraded Speech in Brain Disorders. Brain Sci 2021; 11:394. [PMID: 33804653 PMCID: PMC8003678 DOI: 10.3390/brainsci11030394] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 03/15/2021] [Accepted: 03/18/2021] [Indexed: 11/30/2022] Open
Abstract
The speech we hear every day is typically "degraded" by competing sounds and the idiosyncratic vocal characteristics of individual speakers. While the comprehension of "degraded" speech is normally automatic, it depends on dynamic and adaptive processing across distributed neural networks. This presents the brain with an immense computational challenge, making degraded speech processing vulnerable to a range of brain disorders. Therefore, it is likely to be a sensitive marker of neural circuit dysfunction and an index of retained neural plasticity. Considering experimental methods for studying degraded speech and factors that affect its processing in healthy individuals, we review the evidence for altered degraded speech processing in major neurodegenerative diseases, traumatic brain injury and stroke. We develop a predictive coding framework for understanding deficits of degraded speech processing in these disorders, focussing on the "language-led dementias"-the primary progressive aphasias. We conclude by considering prospects for using degraded speech as a probe of language network pathophysiology, a diagnostic tool and a target for therapeutic intervention.
Affiliation(s)
- Jessica Jiang
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- Elia Benhamou
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- Sheena Waters
- Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London EC1M 6BQ, UK
- Jeremy C. S. Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- Anna Volkmer
- Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Rimona S. Weil
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- Charles R. Marshall
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London EC1M 6BQ, UK
- Jason D. Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- Chris J. D. Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
44
Adams NE, Hughes LE, Rouse MA, Phillips HN, Shaw AD, Murley AG, Cope TE, Bevan-Jones WR, Passamonti L, Street D, Holland N, Nesbitt D, Friston K, Rowe JB. GABAergic cortical network physiology in frontotemporal lobar degeneration. Brain 2021; 144:2135-2145. [PMID: 33710299 PMCID: PMC8370432 DOI: 10.1093/brain/awab097] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Revised: 12/31/2020] [Accepted: 01/03/2021] [Indexed: 11/23/2022] Open
Abstract
The clinical syndromes caused by frontotemporal lobar degeneration are heterogeneous, including the behavioural variant frontotemporal dementia (bvFTD) and progressive supranuclear palsy. Although pathologically distinct, they share many behavioural, cognitive and physiological features, which may in part arise from common deficits of major neurotransmitters such as γ-aminobutyric acid (GABA). Here, we quantify the GABAergic impairment and its restoration with dynamic causal modelling of a double-blind placebo-controlled crossover pharmaco-magnetoencephalography study. We analysed 17 patients with bvFTD, 15 patients with progressive supranuclear palsy, and 20 healthy age- and gender-matched controls. In addition to neuropsychological assessment and structural MRI, participants undertook two magnetoencephalography sessions using a roving auditory oddball paradigm: once on placebo and once on 10 mg of the oral GABA reuptake inhibitor tiagabine. A subgroup underwent ultrahigh-field magnetic resonance spectroscopy measurement of GABA concentration, which was reduced among patients. We identified deficits in frontotemporal processing using conductance-based biophysical models of local and global neuronal networks. The clinical relevance of this physiological deficit is indicated by the correlation between top-down connectivity from frontal to temporal cortex and clinical measures of cognitive and behavioural change. A critical validation of the biophysical modelling approach was evidence from parametric empirical Bayes analysis that GABA levels in patients, measured by spectroscopy, were related to posterior estimates of patients’ GABAergic synaptic connectivity. Further evidence for the role of GABA in frontotemporal lobar degeneration came from confirmation that the effects of tiagabine on local circuits depended not only on participant group, but also on individual baseline GABA levels. 
Specifically, the phasic inhibition of deep cortico-cortical pyramidal neurons following tiagabine, but not placebo, was a function of GABA concentration. The study provides proof-of-concept for the potential of dynamic causal modelling to elucidate mechanisms of human neurodegenerative disease, and explains the variation in response to candidate therapies among patients. The laminar- and neurotransmitter-specific features of the modelling framework can be used to study other treatment approaches and disorders. In the context of frontotemporal lobar degeneration, we suggest that neurophysiological restoration in selected patients, by targeting neurotransmitter deficits, could be used to bridge between clinical and preclinical models of disease, and inform the personalized selection of drugs and stratification of patients for future clinical trials.
Affiliation(s)
- Natalie E Adams
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Laura E Hughes
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
- Matthew A Rouse
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Holly N Phillips
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Alexander G Murley
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
- Thomas E Cope
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
- W Richard Bevan-Jones
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
- Luca Passamonti
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
- Duncan Street
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
- Negin Holland
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
- David Nesbitt
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
- Karl Friston
- Wellcome Centre for Human Neuroimaging, University College London, London WC1N 3AR, UK
- James B Rowe
- Department of Clinical Neurosciences, Cambridge Biomedical Campus, University of Cambridge, Cambridge CB2 0QQ, UK
- MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, UK
- Cambridge University Hospitals, Cambridge, CB2 0QQ, UK
45
Johnson JCS, Marshall CR, Weil RS, Bamiou DE, Hardy CJD, Warren JD. Hearing and dementia: from ears to brain. Brain 2021; 144:391-401. [PMID: 33351095 PMCID: PMC7940169 DOI: 10.1093/brain/awaa429] [Citation(s) in RCA: 81] [Impact Index Per Article: 27.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2020] [Revised: 10/02/2020] [Accepted: 10/17/2020] [Indexed: 12/19/2022] Open
Abstract
The association between hearing impairment and dementia has emerged as a major public health challenge, with significant opportunities for earlier diagnosis, treatment and prevention. However, the nature of this association has not been defined. We hear with our brains, particularly within the complex soundscapes of everyday life: neurodegenerative pathologies target the auditory brain, and are therefore predicted to damage hearing function early and profoundly. Here we present evidence for this proposition, based on structural and functional features of auditory brain organization that confer vulnerability to neurodegeneration, the extensive, reciprocal interplay between 'peripheral' and 'central' hearing dysfunction, and recently characterized auditory signatures of canonical neurodegenerative dementias (Alzheimer's disease, Lewy body disease and frontotemporal dementia). Moving beyond any simple dichotomy of ear and brain, we argue for a reappraisal of the role of auditory cognitive dysfunction and the critical coupling of brain to peripheral organs of hearing in the dementias. We call for a clinical assessment of real-world hearing in these diseases that moves beyond pure tone perception to the development of novel auditory 'cognitive stress tests' and proximity markers for the early diagnosis of dementia and management strategies that harness retained auditory plasticity.
Affiliation(s)
- Jeremy C S Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Charles R Marshall
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London, UK
- Rimona S Weil
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Movement Disorders Centre, Department of Clinical and Movement Neurosciences, UCL Queen Square Institute of Neurology, University College London, London, UK
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London, UK
- Doris-Eva Bamiou
- UCL Ear Institute and UCL/UCLH Biomedical Research Centre, National Institute for Health Research, University College London, London, UK
- Chris J D Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London, UK
46
Cope TE, Weil RS, Düzel E, Dickerson BC, Rowe JB. Advances in neuroimaging to support translational medicine in dementia. J Neurol Neurosurg Psychiatry 2021; 92:263-270. [PMID: 33568448 PMCID: PMC8862738 DOI: 10.1136/jnnp-2019-322402] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/08/2020] [Revised: 11/17/2020] [Accepted: 11/18/2020] [Indexed: 12/11/2022]
Abstract
Advances in neuroimaging are ideally placed to facilitate the translation from progress made in cellular genetics and molecular biology of neurodegeneration into improved diagnosis, prevention and treatment of dementia. New positron emission tomography (PET) ligands allow one to quantify neuropathology, inflammation and metabolism in vivo safely and reliably, to examine mechanisms of human disease and support clinical trials. Developments in MRI-based imaging and neurophysiology provide complementary quantitative assays of brain function and connectivity, for the direct testing of hypotheses of human pathophysiology. Advances in MRI are also improving the quantitative imaging of vascular risk and comorbidities. In combination with large datasets, open data and artificial intelligence analysis methods, new informatics-based approaches are set to enable accurate single-subject inferences for diagnosis, prediction and treatment that have the potential to deliver precision medicine for dementia. Here, we show, through the use of critically appraised worked examples, how neuroimaging can bridge the gaps between molecular biology, neural circuits and the dynamics of the core systems that underpin complex behaviours. We look beyond traditional structural imaging used routinely in clinical care, to include ultrahigh field MRI (7T MRI), magnetoencephalography and PET with novel ligands. We illustrate their potential as safe, robust and sufficiently scalable to be viable for experimental medicine studies and clinical trials. They are especially informative when combined in multimodal studies, with model-based analyses to test precisely defined hypotheses.
Collapse
Affiliation(s)
- Thomas Edmund Cope
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; MRC Cognition and Brain Sciences Unit, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
- Rimona Sharon Weil
- Dementia Research Centre, University College London, London, UK; National Hospital for Neurology & Neurosurgery, Queen Square, London, UK; Wellcome Centre for Human Neuroimaging, University College London, London, UK; Movement Disorders Centre, University College London, London, UK
- Emrah Düzel
- Otto-von-Guericke-University Magdeburg, Institute of Cognitive Neurology and Dementia Research, Magdeburg, Sachsen-Anhalt, Germany; German Centre for Neurodegenerative Diseases (DZNE), Magdeburg, Germany; Center for Behavioral Brain Sciences (CBBS), Magdeburg, Germany; Institute of Cognitive Neuroscience, University College London, London, UK
- Bradford C Dickerson
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts, USA; Frontotemporal Disorders Unit, Department of Neurology, Massachusetts General Hospital and Harvard Medical School, Charlestown, Massachusetts, USA
- James Benedict Rowe
- Department of Clinical Neurosciences, University of Cambridge, Cambridge, UK; MRC Cognition and Brain Sciences Unit, Cambridge, UK; Cambridge University Hospitals NHS Foundation Trust, Cambridge, UK
47
Peterson KA, Patterson K, Rowe JB. Language impairment in progressive supranuclear palsy and corticobasal syndrome. J Neurol 2021; 268:796-809. [PMID: 31321513 PMCID: PMC7914167 DOI: 10.1007/s00415-019-09463-1] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2019] [Revised: 07/06/2019] [Accepted: 07/09/2019] [Indexed: 12/11/2022]
Abstract
Although commonly known as movement disorders, progressive supranuclear palsy (PSP) and corticobasal syndrome (CBS) may present with changes in speech and language alongside or even before motor symptoms. The differential diagnosis of these two disorders can be challenging, especially in the early stages. Here we review their impact on speech and language. We discuss the neurobiological and clinical-phenomenological overlap of PSP and CBS with each other, and with other disorders including non-fluent agrammatic primary progressive aphasia and primary progressive apraxia of speech. Because language impairment is often an early and persistent problem in CBS and PSP, there is a need for improved methods for language screening in primary and secondary care, and more detailed language assessments in tertiary healthcare settings. Improved language assessment may aid differential diagnosis as well as inform clinical management decisions.
Affiliation(s)
- Katie A Peterson
- Department of Clinical Neurosciences and MRC Cognition and Brain Sciences Unit, University of Cambridge, Herchel Smith Building for Brain and Mind Sciences, Forvie Site, Robinson Way, Cambridge, CB2 0SZ, UK.
- Karalyn Patterson
- Department of Clinical Neurosciences and MRC Cognition and Brain Sciences Unit, University of Cambridge, Herchel Smith Building for Brain and Mind Sciences, Forvie Site, Robinson Way, Cambridge, CB2 0SZ, UK
- James B Rowe
- Department of Clinical Neurosciences and MRC Cognition and Brain Sciences Unit, University of Cambridge, Herchel Smith Building for Brain and Mind Sciences, Forvie Site, Robinson Way, Cambridge, CB2 0SZ, UK
48
Carter JA, Bidelman GM. Auditory cortex is susceptible to lexical influence as revealed by informational vs. energetic masking of speech categorization. Brain Res 2021; 1759:147385. [PMID: 33631210 DOI: 10.1016/j.brainres.2021.147385] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/20/2020] [Revised: 02/15/2021] [Accepted: 02/16/2021] [Indexed: 02/02/2023]
Abstract
Speech perception requires the grouping of acoustic information into meaningful phonetic units via the process of categorical perception (CP). Environmental masking influences speech perception and CP. However, it remains unclear at which stage of processing (encoding, decision, or both) masking affects listeners' categorization of speech signals. The purpose of this study was to determine whether linguistic interference influences the early acoustic-phonetic conversion process inherent to CP. To this end, we measured source-level event-related brain potentials (ERPs) from auditory cortex (AC) and inferior frontal gyrus (IFG) as listeners rapidly categorized speech sounds along a /da/ to /ga/ continuum presented in three listening conditions: quiet, and in the presence of forward (informational masker) and time-reversed (energetic masker) 2-talker babble noise. Maskers were matched in overall SNR and spectral content and thus varied only in their degree of linguistic interference (i.e., informational masking). We hypothesized a differential effect of informational versus energetic masking on behavioral and neural categorization responses, predicting increased activation of frontal regions when disambiguating speech from noise, especially during lexical-informational masking. We found that (1) informational masking weakens behavioral speech phoneme identification above and beyond energetic masking; (2) low-level AC activity not only codes speech categories but is susceptible to higher-order lexical interference; (3) identifying speech amidst noise recruits a cross-hemispheric circuit (left AC → right IFG) whose engagement varies according to task difficulty. These findings provide corroborating evidence for top-down influences on the early acoustic-phonetic analysis of speech through a coordinated interplay between frontotemporal brain areas.
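The masker construction described in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: time-reversing babble destroys its linguistic content while preserving its long-term spectrum, and level-matching ensures the forward and reversed maskers differ only in informational masking. The function names are assumptions for illustration.

```python
import math

def rms(x):
    """Root-mean-square level of a sample sequence."""
    return math.sqrt(sum(v * v for v in x) / len(x))

def match_rms(signal, reference):
    """Scale `signal` so its RMS level equals that of `reference`."""
    scale = rms(reference) / rms(signal)
    return [v * scale for v in signal]

def make_maskers(babble):
    """Return (informational, energetic) maskers from a babble recording.

    The informational masker is the intact (forward) babble; the energetic
    masker is the same babble time-reversed and level-matched, so the two
    differ only in linguistic interference.
    """
    forward = list(babble)
    energetic = match_rms(forward[::-1], forward)
    return forward, energetic
```

Because time reversal leaves the RMS level unchanged, the level-matching step here is effectively a safeguard; it matters when the two maskers come from different recordings.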
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA.
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; University of Tennessee Health Sciences Center, Department of Anatomy and Neurobiology, Memphis, TN, USA.
49
Ruksenaite J, Volkmer A, Jiang J, Johnson JC, Marshall CR, Warren JD, Hardy CJ. Primary Progressive Aphasia: Toward a Pathophysiological Synthesis. Curr Neurol Neurosci Rep 2021; 21:7. [PMID: 33543347 PMCID: PMC7861583 DOI: 10.1007/s11910-021-01097-z] [Citation(s) in RCA: 22] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/13/2021] [Indexed: 12/14/2022]
Abstract
PURPOSE OF REVIEW: The term primary progressive aphasia (PPA) refers to a diverse group of dementias that present with prominent and early problems with speech and language. They present considerable challenges to clinicians and researchers. RECENT FINDINGS: Here, we review critical issues around diagnosis of the three major PPA variants (semantic variant PPA, nonfluent/agrammatic variant PPA, logopenic variant PPA), as well as considering 'fragmentary' syndromes. We next consider issues around assessing disease stage, before discussing physiological phenotyping of proteinopathies across the PPA spectrum. We also review evidence for core central auditory impairments in PPA, outline critical challenges associated with treatment, discuss pathophysiological features of each major PPA variant, and conclude with thoughts on key challenges that remain to be addressed. New findings elucidating the pathophysiology of PPA represent a major step forward in our understanding of these diseases, with implications for diagnosis, care, management, and therapies.
Affiliation(s)
- Justina Ruksenaite
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8 - 11 Queen Square, London, WC1N 3BG, UK
- Anna Volkmer
- Division of Psychology and Language Sciences, University College London, London, UK
- Jessica Jiang
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8 - 11 Queen Square, London, WC1N 3BG, UK
- Jeremy CS Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8 - 11 Queen Square, London, WC1N 3BG, UK
- Charles R Marshall
- Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London, UK
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8 - 11 Queen Square, London, WC1N 3BG, UK
- Chris JD Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, 8 - 11 Queen Square, London, WC1N 3BG, UK.
50
Asilador A, Llano DA. Top-Down Inference in the Auditory System: Potential Roles for Corticofugal Projections. Front Neural Circuits 2021; 14:615259. [PMID: 33551756 PMCID: PMC7862336 DOI: 10.3389/fncir.2020.615259] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2020] [Accepted: 12/17/2020] [Indexed: 01/28/2023] Open
Abstract
It has become widely accepted that humans use contextual information to infer the meaning of ambiguous acoustic signals. In speech, for example, high-level semantic, syntactic, or lexical information shapes our understanding of a phoneme buried in noise. Most current theories to explain this phenomenon rely on hierarchical predictive coding models involving a set of Bayesian priors emanating from high-level brain regions (e.g., prefrontal cortex) that are used to influence processing at lower levels of the cortical sensory hierarchy (e.g., auditory cortex). As such, virtually all proposed models to explain top-down facilitation are focused on intracortical connections, and consequently, subcortical nuclei have scarcely been discussed in this context. However, subcortical auditory nuclei receive massive, heterogeneous, and cascading descending projections at every level of the sensory hierarchy, and activation of these systems has been shown to improve speech recognition. It is not yet clear whether or how top-down modulation to resolve ambiguous sounds calls upon these corticofugal projections. Here, we review the literature on top-down modulation in the auditory system, primarily focused on humans and cortical imaging/recording methods, and attempt to relate these findings to a growing animal literature, which has primarily been focused on corticofugal projections. We argue that corticofugal pathways contain the requisite circuitry to implement predictive coding mechanisms to facilitate perception of complex sounds and that top-down modulation at early (i.e., subcortical) stages of processing complements modulation at later (i.e., cortical) stages of processing. Finally, we suggest experimental approaches for future studies on this topic.
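The Bayesian-prior idea this abstract describes can be made concrete with a minimal sketch. This is a generic illustration of how a top-down lexical prior can resolve ambiguous acoustic evidence, not a model from the paper; all phoneme labels and probabilities are assumed for illustration.

```python
def posterior(likelihood, prior):
    """Combine bottom-up evidence (likelihood) with a top-down prior
    via Bayes' rule and renormalize over the candidate phonemes."""
    unnorm = {k: likelihood[k] * prior[k] for k in likelihood}
    z = sum(unnorm.values())
    return {k: v / z for k, v in unnorm.items()}

# Acoustic evidence fully ambiguous between two phonemes:
likelihood = {"b": 0.5, "p": 0.5}
# Lexical context favoring "b" (e.g., the word frame makes "b" likely):
prior = {"b": 0.8, "p": 0.2}
percept = posterior(likelihood, prior)  # prior shifts the percept toward "b"
```

In hierarchical predictive coding accounts, a computation of this kind is implemented across levels of the sensory hierarchy; the review's argument is that corticofugal projections could carry such priors to subcortical stations as well as cortical ones.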
Affiliation(s)
- Alexander Asilador
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Daniel A. Llano
- Neuroscience Program, The University of Illinois at Urbana-Champaign, Champaign, IL, United States
- Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
- Molecular and Integrative Physiology, The University of Illinois at Urbana-Champaign, Champaign, IL, United States