201
Ren J, Hubbard CS, Ahveninen J, Cui W, Li M, Peng X, Luan G, Han Y, Li Y, Shinn AK, Wang D, Li L, Liu H. Dissociable Auditory Cortico-Cerebellar Pathways in the Human Brain Estimated by Intrinsic Functional Connectivity. Cereb Cortex 2021; 31:2898-2912. [PMID: 33497437 PMCID: PMC8107796 DOI: 10.1093/cercor/bhaa398]
Abstract
The cerebellum, a structure historically associated with motor control, has more recently been implicated in several higher-order auditory-cognitive functions. However, the exact functional pathways that mediate cerebellar influences on auditory cortex (AC) remain unclear. Here, we sought to identify auditory cortico-cerebellar pathways based on intrinsic functional connectivity magnetic resonance imaging. In contrast to previous connectivity studies that principally consider the AC as a single functionally homogeneous unit, we mapped the cerebellar connectivity across different parts of the AC. Our results reveal that auditory subareas demonstrating different levels of interindividual functional variability are functionally coupled with distinct cerebellar regions. Moreover, auditory and sensorimotor areas show divergent cortico-cerebellar connectivity patterns, although sensorimotor areas proximal to the AC are often functionally grouped with the AC in previous connectivity-based network analyses. Lastly, we found that the AC can be functionally segmented into highly similar subareas based on either cortico-cerebellar or cortico-cortical functional connectivity, suggesting the existence of multiple parallel auditory cortico-cerebellar circuits that involve different subareas of the AC. Overall, the present study revealed multiple auditory cortico-cerebellar pathways and provided a fine-grained map of AC subareas, indicative of the critical role of the cerebellum in auditory processing and multisensory integration.
Affiliation(s)
- Jianxun Ren
- National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Catherine S Hubbard
- Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Weigang Cui
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
- Department of Automation Sciences and Electrical Engineering, Beihang University, 100083 Beijing, China
- Meiling Li
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Xiaolong Peng
- Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
- Guoming Luan
- Department of Neurosurgery, Comprehensive Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, 100093 Beijing, China
- Ying Han
- Department of Neurology, Xuanwu Hospital of Capital Medical University, 100053 Beijing, China
- Yang Li
- Department of Automation Sciences and Electrical Engineering, Beihang University, 100083 Beijing, China
- Ann K Shinn
- Psychotic Disorders Division, McLean Hospital, Harvard Medical School, Belmont, MA 02478, USA
- Danhong Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Luming Li
- National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China
- Precision Medicine & Healthcare Research Center, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, 518055 Shenzhen, China
- IDG/McGovern Institute for Brain Research at Tsinghua University, 100084 Beijing, China
- Hesheng Liu
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
202
Ohashi H, Ostry DJ. Neural Development of Speech Sensorimotor Learning. J Neurosci 2021; 41:4023-4035. [PMID: 33758018 PMCID: PMC8176761 DOI: 10.1523/jneurosci.2884-20.2021]
Abstract
The development of the human brain continues through to early adulthood. It has been suggested that cortical plasticity during this protracted period of development shapes circuits in associative transmodal regions of the brain. Here we considered how cortical plasticity during development might contribute to the coordinated brain activity required for speech motor learning. Specifically, we examined patterns of brain functional connectivity (FC) whose strength covaried with the capacity for speech audio-motor adaptation in children aged 5-12 and in young adults of both sexes. Children and adults showed distinct patterns of the encoding of learning in the brain. Adult performance was associated with connectivity in transmodal regions that integrate auditory and somatosensory information, whereas children relied on basic somatosensory and motor circuits. A progressive reliance on transmodal regions is consistent with human cortical development and suggests that human speech motor adaptation abilities are built on cortical remodeling, which is observable in late childhood and is stabilized in adults. SIGNIFICANCE STATEMENT: A protracted period of neuroplasticity during human development is associated with extensive reorganization of associative cortex. We examined how the relationship between FC and speech motor learning capacity is reconfigured in conjunction with this cortical reorganization. Young adults and children aged 5-12 years showed distinctly different patterns. Mature brain networks related to learning included associative cortex, which integrates auditory and somatosensory feedback in speech, whereas the immature networks in children included motor regions of the brain. These patterns are consistent with the cortical reorganization that is initiated in late childhood. The results provide insight into the human biology of speech as well as into the mature neural mechanisms of multisensory integration in motor learning.
Affiliation(s)
- Hiroki Ohashi
- Department of Psychology, McGill University, Montréal, Québec H3A 1G1, Canada
- Haskins Laboratories, New Haven, Connecticut 06511
- David J Ostry
- Department of Psychology, McGill University, Montréal, Québec H3A 1G1, Canada
- Haskins Laboratories, New Haven, Connecticut 06511
203
Yogev-Seligmann G, Eisenstein T, Ash E, Giladi N, Sharon H, Nachman S, Bregman N, Kodesh E, Hendler T, Lerner Y. Neurocognitive Plasticity Is Associated with Cardiorespiratory Fitness Following Physical Exercise in Older Adults with Amnestic Mild Cognitive Impairment. J Alzheimers Dis 2021; 81:91-112. [PMID: 33720893 DOI: 10.3233/jad-201429]
Abstract
BACKGROUND Aerobic training has been shown to promote structural and functional neurocognitive plasticity in cognitively intact older adults. However, little is known about the neuroplastic potential of aerobic exercise in individuals at risk of Alzheimer's disease (AD) and dementia. OBJECTIVE We aimed to explore the effect of aerobic exercise intervention and cardiorespiratory fitness improvement on brain and cognitive functions in older adults with amnestic mild cognitive impairment (aMCI). METHODS 27 participants with aMCI were randomized to either an aerobic training group (n = 13) or a balance and toning (BAT) control group (n = 14) for a 16-week intervention. Pre- and post-assessments included functional MRI experiments of brain activation during associative memory encoding and neural synchronization during complex information processing, cognitive evaluation using neuropsychological tests, and cardiorespiratory fitness assessment. RESULTS The aerobic group demonstrated increased frontal activity during memory encoding and increased neural synchronization in higher-order cognitive regions such as the frontal cortex and temporo-parietal junction (TPJ) following the intervention. In contrast, the BAT control group demonstrated decreased brain activity during memory encoding, primarily in occipital, temporal, and parietal areas. Increases in cardiorespiratory fitness were associated with increases in brain activation in both the left inferior frontal and precentral gyri. Furthermore, changes in cardiorespiratory fitness were also correlated with changes in performance on several neuropsychological tests. CONCLUSION Aerobic exercise training may result in functional plasticity of high-order cognitive areas, especially frontal regions, among older adults at risk of AD and dementia. Furthermore, cardiorespiratory fitness may be an important mediating factor of the observed changes in neurocognitive functions.
Affiliation(s)
- Galit Yogev-Seligmann
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Tamir Eisenstein
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Elissa Ash
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Department of Neurology, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Nir Giladi
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Department of Neurology, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- Haggai Sharon
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Pain Management & Neuromodulation Centre, Guy's & St Thomas' NHS Foundation Trust, London, UK; Institute of Pain Medicine, Department of Anesthesiology and Critical Care Medicine, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Shikma Nachman
- Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Noa Bregman
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Department of Neurology, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel
- Einat Kodesh
- Department of Physical Therapy, Faculty of Social Welfare & Health Sciences, University of Haifa, Haifa, Israel
- Talma Hendler
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel; School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Yulia Lerner
- Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel; Sagol Brain Institute, Tel Aviv Sourasky Medical Center, Tel Aviv, Israel; Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
204
Sensory attenuation is modulated by the contrasting effects of predictability and control. Neuroimage 2021; 237:118103. [PMID: 33957233 DOI: 10.1016/j.neuroimage.2021.118103]
Abstract
Self-generated stimuli have been found to elicit a reduced sensory response compared with externally-generated stimuli. However, much of the literature has not adequately controlled for differences in the temporal predictability and temporal control of stimuli. In two experiments, we compared the N1 (and P2) components of the auditory-evoked potential to self- and externally-generated tones that differed with respect to these two factors. In Experiment 1 (n = 42), we found that increasing temporal predictability reduced N1 amplitude in a manner that may often account for the observed reduction in sensory response to self-generated sounds. We also observed that reducing temporal control over the tones resulted in a reduction in N1 amplitude. The contrasting effects of temporal predictability and temporal control on N1 amplitude meant that sensory attenuation prevailed when controlling for each. Experiment 2 (n = 38) explored the potential effect of selective attention on the results of Experiment 1 by modifying task requirements such that similar levels of attention were allocated to the visual stimuli across conditions. The results of Experiment 2 replicated those of Experiment 1, and suggested that the observed effects of temporal control and sensory attenuation were not driven by differences in attention. Given that self- and externally-generated sensations commonly differ with respect to both temporal predictability and temporal control, findings of the present study may necessitate a re-evaluation of the experimental paradigms used to study sensory attenuation.
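The core measurement in the experiments above, the N1 component read out from trial-averaged EEG, can be sketched in a few lines. The simulation below is purely illustrative and not the authors' pipeline: waveform shape, gains, trial counts, and the 80-120 ms window are invented for the demo. It averages epochs per condition, and an attenuated N1 shows up as a less negative mean amplitude.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 1000                         # sampling rate (Hz)
t = np.arange(-0.1, 0.4, 1 / fs)  # epoch: -100 to +400 ms around tone onset

def simulate_epochs(n1_gain, n_trials=60):
    """Single-trial epochs with an N1-like negativity near 100 ms (scaled by
    n1_gain, arbitrary 'microvolt' units) plus Gaussian noise."""
    n1 = -n1_gain * np.exp(-0.5 * ((t - 0.1) / 0.02) ** 2)
    return n1 + rng.normal(0.0, 2.0, size=(n_trials, t.size))

external = simulate_epochs(n1_gain=5.0)  # externally-generated tones
self_gen = simulate_epochs(n1_gain=3.0)  # self-generated tones (attenuated)

def n1_amplitude(epochs):
    """Average over trials, then take mean voltage in an 80-120 ms window."""
    erp = epochs.mean(axis=0)
    window = (t >= 0.08) & (t <= 0.12)
    return erp[window].mean()

print(n1_amplitude(external), n1_amplitude(self_gen))
```

A real analysis would, of course, use recorded and artifact-rejected epochs rather than simulated ones; only the averaging-and-windowing logic carries over.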
205
Panachakel JT, Ramakrishnan AG. Decoding Covert Speech From EEG-A Comprehensive Review. Front Neurosci 2021; 15:642251. [PMID: 33994922 PMCID: PMC8116487 DOI: 10.3389/fnins.2021.642251]
Abstract
Over the past decade, many researchers have come up with different implementations of systems for decoding covert or imagined speech from EEG (electroencephalogram). These implementations differ in several aspects, from data acquisition to machine learning algorithms, which makes direct comparison between them difficult. This review article puts together all the relevant works published in the last decade on decoding imagined speech from EEG into a single framework. Every important aspect of designing such a system is reviewed: the selection of words to be imagined, the number of electrodes to record, temporal and spatial filtering, feature extraction, and the choice of classifier. This helps a researcher compare the relative merits and demerits of the different approaches and choose the most suitable one. Because speech is the most natural form of communication, one that human beings acquire even without formal education, imagined speech is an ideal prompt for evoking brain activity patterns for a BCI (brain-computer interface) system, although research on real-time (online) speech-imagery-based BCI systems is still in its infancy. Covert-speech-based BCIs can help people with disabilities improve their quality of life, and can also be used for covert communication in environments that do not support vocal communication. This paper also discusses some future directions that will aid the deployment of speech-imagery-based BCIs for practical applications rather than only laboratory experiments.
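The generic pipeline surveyed here (record EEG, filter, extract features, classify) can be illustrated end-to-end on synthetic data. Everything below is a stand-in sketch, not any reviewed system: two invented "imagined word" conditions, log band-power features, and a nearest-centroid rule in place of the more sophisticated classifiers the review covers.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(1)
fs = 256  # sampling rate (Hz)

def bandpass(x, lo, hi):
    """Zero-phase 4th-order Butterworth band-pass along the last axis."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def band_power_features(trials):
    """Log band power per channel in common EEG bands, concatenated."""
    bands = [(4, 8), (8, 13), (13, 30), (30, 60)]
    feats = [np.log(bandpass(trials, lo, hi).var(axis=-1)) for lo, hi in bands]
    return np.concatenate(feats, axis=-1)  # (n_trials, n_channels * n_bands)

# Synthetic data: two imagined words, 8 channels, 2 s trials; class B carries
# extra beta-band (20 Hz) power on half of the channels.
n, ch, T = 40, 8, 2 * fs
t = np.arange(T) / fs
word_a = rng.normal(0, 1, (n, ch, T))
word_b = rng.normal(0, 1, (n, ch, T))
word_b[:, :4] += 1.5 * np.sin(2 * np.pi * 20 * t)

X = band_power_features(np.concatenate([word_a, word_b]))
y = np.array([0] * n + [1] * n)

# Nearest-centroid classifier: train on 30 trials per class, test on 10.
train, test = np.r_[0:30, 40:70], np.r_[30:40, 70:80]
centroids = np.stack([X[train][y[train] == c].mean(0) for c in (0, 1)])
pred = np.argmin(((X[test][:, None] - centroids) ** 2).sum(-1), axis=1)
accuracy = (pred == y[test]).mean()
print(accuracy)
```

Any of the classifiers discussed in the review (LDA, SVM, deep networks) can be swapped in at the last step without changing the rest of the pipeline.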
Affiliation(s)
- Jerrin Thomas Panachakel
- Medical Intelligence and Language Engineering Laboratory, Department of Electrical Engineering, Indian Institute of Science, Bangalore, India
206
Chen YC, Yong W, Xing C, Feng Y, Haidari NA, Xu JJ, Gu JP, Yin X, Wu Y. Directed functional connectivity of the hippocampus in patients with presbycusis. Brain Imaging Behav 2021; 14:917-926. [PMID: 31270776 DOI: 10.1007/s11682-019-00162-z]
Abstract
Presbycusis, characterized by bilateral sensorineural hearing loss at high frequencies and associated with a diminished quality of life, has become an increasingly critical public health problem. This study aimed to identify directed functional connectivity (FC) of the hippocampus in patients with presbycusis and to explore whether the directed functional connections of the hippocampus were disrupted. Presbycusis patients (n = 32) and age-, sex-, and education-matched healthy controls (n = 40) were included in this study. The bilateral hippocampus was selected as the seed region to identify directed FC in patients with presbycusis using a Granger causality analysis (GCA) approach. Correlation analyses were conducted to detect associations between disrupted directed FC of the hippocampus and clinical measures of presbycusis. Compared to healthy controls, presbycusis patients showed decreased directed FC between the hippocampus and the inferior parietal lobule, insula, right supplementary motor area, and middle temporal gyrus. Furthermore, negative correlations between TMB score and the decline of directed FC from the left inferior parietal lobule to the left hippocampus (r = -0.423, p = 0.025) and from the right inferior parietal lobule to the right hippocampus (r = -0.516, p = 0.005) were also observed. The decreased directed functional connections of the hippocampus in patients with presbycusis were thus associated with specific cognitive performance. This study emphasizes the crucial role of the hippocampus in presbycusis and will enhance our understanding of the neuropathological mechanisms of presbycusis.
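Granger causality analysis, the directed-FC method used here, asks whether the past of one time series improves prediction of another beyond that series' own past. A minimal self-contained sketch on synthetic data follows; the study itself applied GCA to fMRI time courses with dedicated neuroimaging tooling, so this only illustrates the core idea.

```python
import numpy as np

def granger_f(x, y, lags=2):
    """F-statistic for 'y Granger-causes x': does adding lagged y to an
    autoregressive model of x reduce the residual sum of squares?"""
    n = len(x)
    X_own = np.column_stack([x[lags - k - 1:n - k - 1] for k in range(lags)])
    X_other = np.column_stack([y[lags - k - 1:n - k - 1] for k in range(lags)])
    target = x[lags:]
    ones = np.ones((len(target), 1))

    def rss(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        resid = target - design @ beta
        return resid @ resid

    rss_r = rss(np.hstack([ones, X_own]))           # restricted: own lags only
    rss_f = rss(np.hstack([ones, X_own, X_other]))  # full: own + other's lags
    df = len(target) - (1 + 2 * lags)
    return ((rss_r - rss_f) / lags) / (rss_f / df)

# Synthetic example: y drives x with a one-sample delay, not vice versa.
rng = np.random.default_rng(2)
y = rng.normal(size=2000)
x = np.empty_like(y)
x[0] = 0.0
x[1:] = 0.8 * y[:-1] + 0.3 * rng.normal(size=1999)

print(granger_f(x, y))  # large: y -> x influence detected
print(granger_f(y, x))  # small: no x -> y influence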
Affiliation(s)
- Yu-Chen Chen
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Wei Yong
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Chunhua Xing
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Yuan Feng
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Nasir Ahmad Haidari
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Jin-Jing Xu
- Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Jian-Ping Gu
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Xindao Yin
- Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China
- Yuanqing Wu
- Department of Otolaryngology, Nanjing First Hospital, Nanjing Medical University, No.68, Changle Road, Nanjing, 210006, China.
207
Heard M, Li X, Lee YS. Hybrid auditory fMRI: In pursuit of increasing data acquisition while decreasing the impact of scanner noise. J Neurosci Methods 2021; 358:109198. [PMID: 33901568 DOI: 10.1016/j.jneumeth.2021.109198]
Abstract
BACKGROUND Two challenges in auditory fMRI are the loud scanner noise during sound presentation and slow data acquisition. Here, we introduce a new auditory imaging protocol, termed "hybrid", that alleviates these obstacles. NEW METHOD We designed a within-subject experiment (N = 14) wherein language-driven activity was measured by hybrid, interleaved silent (ISSS), and continuous multiband acquisition. To determine the advantage of noise attenuation during sound presentation, hybrid was compared to multiband. To identify the benefits of increased temporal resolution, hybrid was compared to ISSS. Data were evaluated by whole-brain univariate general linear modeling (GLM) and multivariate pattern analysis (MVPA). CONCLUSIONS Our data revealed that hybrid imaging restored neural activity in the canonical language network that was absent due to the loud noise or slow sampling in the conventional imaging protocols. With its noise-attenuated sound presentation windows and increased acquisition speed, the hybrid protocol is well-suited for auditory fMRI research tracking neural activity pertaining to fast, time-varying acoustic events.
Affiliation(s)
- Matthew Heard
- School of Behavioral and Brain Sciences, University of Texas at Dallas, United States
- Xiangrui Li
- Center for Cognitive and Behavioral Brain Imaging, The Ohio State University, United States
- Yune S Lee
- School of Behavioral and Brain Sciences, University of Texas at Dallas, United States; Center for BrainHealth, University of Texas at Dallas, United States.
208
Perron M, Theaud G, Descoteaux M, Tremblay P. The frontotemporal organization of the arcuate fasciculus and its relationship with speech perception in young and older amateur singers and non-singers. Hum Brain Mapp 2021; 42:3058-3076. [PMID: 33835629 PMCID: PMC8193549 DOI: 10.1002/hbm.25416]
Abstract
The ability to perceive speech in noise (SPiN) declines with age. Although the etiology of SPiN decline is not well understood, accumulating evidence suggests a role for the dorsal speech stream. While age‐related decline within the dorsal speech stream would negatively affect SPiN performance, experience‐induced neuroplastic changes within the dorsal speech stream could positively affect SPiN performance. Here, we investigated the relationship between SPiN performance and the structure of the arcuate fasciculus (AF), which forms the white matter scaffolding of the dorsal speech stream, in aging singers and non‐singers. Forty‐three non‐singers and 41 singers aged 20 to 87 years old completed a hearing evaluation and a magnetic resonance imaging session that included High Angular Resolution Diffusion Imaging. The groups were matched for sex, age, education, handedness, cognitive level, and musical instrument experience. A subgroup of participants completed a syllable discrimination in noise task. The AF was divided into 10 segments to explore potential local specializations for SPiN. The results show that, in carefully matched groups of singers and non‐singers: (a) myelin and/or axonal membrane deterioration within the bilateral frontotemporal AF segments is associated with SPiN difficulties in aging singers and non‐singers; (b) the structure of the AF is different in singers and non‐singers; (c) these differences are not associated with a benefit on SPiN performance for singers. This study clarifies the etiology of SPiN difficulties by supporting a role for aging of the dorsal speech stream.
Affiliation(s)
- Maxime Perron
- CERVO Brain Research Center, Quebec City, Quebec, Canada; Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
- Guillaume Theaud
- Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Maxime Descoteaux
- Sherbrooke Connectivity Imaging Lab (SCIL), Computer Science Department, Université de Sherbrooke, Sherbrooke, Quebec, Canada
- Pascale Tremblay
- CERVO Brain Research Center, Quebec City, Quebec, Canada; Département de Réadaptation, Université Laval, Faculté de Médecine, Quebec City, Quebec, Canada
209
Milton CK, Dhanaraj V, Young IM, Taylor HM, Nicholas PJ, Briggs RG, Bai MY, Fonseka RD, Hormovas J, Lin Y, Tanglay O, Conner AK, Glenn CA, Teo C, Doyen S, Sughrue ME. Parcellation-based anatomic model of the semantic network. Brain Behav 2021; 11:e02065. [PMID: 33599397 PMCID: PMC8035438 DOI: 10.1002/brb3.2065]
Abstract
INTRODUCTION The semantic network is an important mediator of language, enabling both speech production and the comprehension of multimodal stimuli. A major challenge in the field of neurosurgery is preventing semantic deficits. Multiple cortical areas have been linked to semantic processing, though knowledge of network connectivity has lacked anatomic specificity. Using attentional task-based fMRI studies, we built a neuroanatomical model of this network. METHODS One hundred and fifty-five task-based fMRI studies related to categorization of visual words and objects, and auditory words and stories, were used to generate an activation likelihood estimation (ALE). Cortical parcellations overlapping the ALE were used to construct a preliminary model of the semantic network based on the cortical parcellation scheme previously published under the Human Connectome Project. Deterministic fiber tractography was performed on 25 randomly chosen subjects from the Human Connectome Project to determine the connectivity of the cortical parcellations comprising the network. RESULTS The ALE analysis identified fourteen left-hemisphere cortical regions as part of the semantic network: 44, 45, 55b, IFJa, 8C, p32pr, SFL, SCEF, 8BM, STSdp, STSvp, TE1p, PHT, and PBelt. These regions showed consistent interconnections between parcellations. Notably, the anterior temporal pole, a region often implicated in semantic function, was absent from our model. CONCLUSIONS We describe a preliminary cortical model of the underlying structural connectivity of the semantic network. Future studies will further characterize the neurotractographic details of the semantic network in the context of medical applications.
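Activation likelihood estimation, the meta-analytic step in the methods above, treats each study's reported foci as centers of Gaussian probability kernels, builds a per-study modeled activation (MA) map, and combines the maps as a probabilistic union. The toy sketch below uses a fixed kernel width on a small voxel grid; real ALE derives the kernel FWHM from each study's sample size and tests the resulting map against a null distribution, both of which are omitted here.

```python
import numpy as np

def modeled_activation(shape, foci, sigma=2.0):
    """Modeled activation (MA) map for one study: each focus contributes a
    Gaussian probability kernel; take the voxelwise maximum across foci."""
    grid = np.indices(shape)  # (3, X, Y, Z) voxel coordinates
    ma = np.zeros(shape)
    for focus in foci:
        d2 = sum((g - c) ** 2 for g, c in zip(grid, focus))
        ma = np.maximum(ma, np.exp(-d2 / (2 * sigma ** 2)))
    return ma

def ale(shape, studies, sigma=2.0):
    """ALE map as the probabilistic union of per-study MA maps:
    ALE = 1 - prod_over_studies(1 - MA)."""
    out = np.ones(shape)
    for foci in studies:
        out *= 1.0 - modeled_activation(shape, foci, sigma)
    return 1.0 - out

# Toy example on a 20^3 grid: three hypothetical studies report foci that
# cluster near voxel (5, 5, 5); one study adds an isolated focus elsewhere.
studies = [[(5, 5, 5)], [(6, 5, 5), (15, 15, 15)], [(5, 6, 4)]]
ale_map = ale((20, 20, 20), studies)
peak = np.unravel_index(np.argmax(ale_map), ale_map.shape)
print(peak)
```

The convergent cluster dominates the map because overlapping kernels from independent studies reinforce each other under the union formula, which is exactly the convergence-across-studies logic ALE formalizes.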
Affiliation(s)
- Camille K. Milton
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Vukshitha Dhanaraj
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
- Robert G. Briggs
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Michael Y. Bai
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
- Rannulu D. Fonseka
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
- Jorge Hormovas
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
- Yueh‐Hsin Lin
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
- Onur Tanglay
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
- Andrew K. Conner
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Chad A. Glenn
- Department of Neurosurgery, University of Oklahoma Health Sciences Center, Oklahoma City, OK, USA
- Charles Teo
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
- Michael E. Sughrue
- Department of Neurosurgery, Prince of Wales Private Hospital, Sydney, NSW, Australia
210
Araneda R, Silva Moura S, Dricot L, De Volder AG. Beat Detection Recruits the Visual Cortex in Early Blind Subjects. Life (Basel) 2021; 11:life11040296. [PMID: 33807372 PMCID: PMC8066101 DOI: 10.3390/life11040296]
Abstract
Using functional magnetic resonance imaging, we monitored brain activity in 12 early blind subjects and 12 blindfolded control subjects, matched for age, gender, and musical experience, during a beat detection task. Subjects were required to discriminate regular ("beat") from irregular ("no beat") rhythmic sequences composed of sounds or vibrotactile stimulations. In both sensory modalities, the brain activity differences between the two groups involved heteromodal brain regions, including parietal and frontal cortical areas, as well as occipital areas that were recruited in the early blind group only. Accordingly, early blindness induced brain plasticity changes in the cerebral pathways involved in rhythm perception, with participation of the visually deprived occipital areas regardless of the input sensory modality. We conclude that the visually deprived cortex switches its input modality from vision to audition and the vibrotactile sense to perform this temporal processing task, supporting the concept of a metamodal, multisensory organization of this cortex.
Affiliation(s)
- Rodrigo Araneda
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Sandra Silva Moura
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Laurence Dricot
- Institute of Neuroscience (IoNS; NEUR Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Anne G. De Volder
- Motor Skill Learning and Intensive Neurorehabilitation Laboratory (MSL-IN), Institute of Neuroscience (IoNS; COSY Section), Université Catholique de Louvain, 1200 Brussels, Belgium
- Correspondence: ; Tel.: +32-2-764-54-82
211
Brueggemann P, Neff PKA, Meyer M, Riemer N, Rose M, Mazurek B. On the relationship between tinnitus distress, cognitive performance and aging. Prog Brain Res 2021; 262:263-285. [PMID: 33931184 DOI: 10.1016/bs.pbr.2021.01.028]
Abstract
In this study we analyzed psychometric data of 107 individuals who suffer from chronic subjective tinnitus. In particular, we elucidated the relationship between tinnitus-related distress, psychological comorbidities, age, and hearing, and performance in cognitive concentration and interference tests. Previous research has provided initial evidence that individuals with tinnitus may have deficits in cognitive tasks. The present study aimed at extending former research by investigating the relationship between tinnitus distress and cognition. Statistical analyses comprised correlation and regression approaches. We observed a significant relationship between tinnitus distress (tinnitus questionnaire score, TQ), age, and hearing loss and performance in tests of selective and sustained attention (d2 test) and cognitive interference (Stroop test). Tinnitus distress was identified as the most important predictor of cognitive performance (together with age for cognitive interference). For the other psychometric variables (perceived stress, PSQ; self-efficacy, optimism and pessimism, SWOP) and hearing loss we could not find any meaningful relationship with cognitive performance. The results clearly point to a (currently non-causal) relationship between cognitive skills and the distress of tinnitus-related symptoms. Furthermore, the influence of age is noteworthy, as this finding implies that with increasing age and age-related hearing dysfunction, adequate coping with aversive tinnitus symptoms, which relies on intact cognitive functions such as inhibition, may become more difficult. Hence, it is suggested to consider cognitive tests as a supplementary measurement in the clinical assessment of tinnitus and to raise awareness of the impairing influence of tinnitus on cognition in daily life.
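The regression approach reported here, ranking tinnitus distress against age and hearing loss as predictors of cognitive performance, can be sketched with ordinary least squares on standardized variables, so the betas are directly comparable across predictors. The data below are entirely synthetic; the effect sizes are invented for the demo and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 107  # same sample size as the study; values themselves are synthetic

# Hypothetical predictors: tinnitus distress (TQ), age, hearing loss (dB HL)
tq = rng.uniform(0, 84, n)
age = rng.uniform(30, 75, n)
hearing = rng.uniform(0, 60, n)

# Synthetic cognitive score: worse with higher distress, mildly worse with age
cognition = 100 - 0.5 * tq - 0.3 * age + rng.normal(0, 5, n)

def standardized_betas(y, *predictors):
    """OLS on z-scored variables; resulting betas are scale-free and thus
    comparable across predictors, mirroring the study's regression logic."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([np.ones(len(y))] + [z(p) for p in predictors])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta[1:]  # drop intercept

b_tq, b_age, b_hear = standardized_betas(cognition, tq, age, hearing)
print(b_tq, b_age, b_hear)
```

With this construction the distress beta is the largest in magnitude, age contributes a smaller effect, and hearing loss hovers near zero, qualitatively matching the pattern of predictors the abstract describes.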
Affiliation(s)
- Patrick K A Neff: Department of Psychiatry and Psychotherapy, University of Regensburg, Regensburg, Germany; University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland
- Martin Meyer: University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, Zurich, Switzerland; Division of Neuropsychology, Department of Psychology, University of Zurich, Zurich, Switzerland
- Natalie Riemer: Tinnitus-Zentrum, Charité-Universitaetsmedizin, Berlin, Germany
- Matthias Rose: Department of Internal Medicine and Psychosomatics, Charité-Universitaetsmedizin, Berlin, Germany
- Birgit Mazurek: Tinnitus-Zentrum, Charité-Universitaetsmedizin, Berlin, Germany
|
212
|
MEG Intersubject Phase Locking of Stimulus-Driven Activity during Naturalistic Speech Listening Correlates with Musical Training. J Neurosci 2021; 41:2713-2722. [PMID: 33536196 DOI: 10.1523/jneurosci.0932-20.2020]
Abstract
Musical training is associated with increased structural and functional connectivity between auditory sensory areas and higher-order brain networks involved in speech and motor processing. Whether such changed connectivity patterns facilitate the cortical propagation of speech information in musicians remains poorly understood. Here, we used magnetoencephalography (MEG) source imaging and a novel seed-based intersubject phase-locking approach to investigate the effects of musical training on the interregional synchronization of stimulus-driven neural responses during listening to naturalistic continuous speech presented in silence. MEG data were obtained from 20 young human subjects (both sexes) with different degrees of musical training. Our data show robust bilateral patterns of stimulus-driven interregional phase synchronization between auditory cortex and frontotemporal brain regions previously associated with speech processing. Stimulus-driven phase locking was maximal in the delta band but was also observed in the theta and alpha bands. The individual duration of musical training was positively associated with the magnitude of stimulus-driven alpha-band phase locking between auditory cortex and parts of the dorsal and ventral auditory processing streams. These findings provide evidence for a positive relationship between musical training and the propagation of speech-related information between auditory sensory areas and higher-order processing networks, even when speech is presented in silence. We suggest that the increased synchronization of higher-order cortical regions to auditory cortex may contribute to the previously described musician advantage in processing speech in background noise.
SIGNIFICANCE STATEMENT: Musical training has been associated with widespread structural and functional brain plasticity. It has been suggested that these changes benefit the production and perception of music but can also transfer to other domains of auditory processing, such as speech. We developed a new magnetoencephalography intersubject analysis approach to study the cortical synchronization of stimulus-driven neural responses during the perception of continuous natural speech and its relationship to individual musical training. Our results provide evidence that musical training is associated with higher synchronization of stimulus-driven activity between brain regions involved in early auditory sensory and higher-order processing. We suggest that the increased synchronized propagation of speech information may contribute to the previously described musician advantage in processing speech in background noise.
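The core quantity behind a phase-locking analysis of this kind can be illustrated with a toy computation (simulated phases, not the paper's seed-based MEG pipeline): the phase-locking value (PLV) is the magnitude of the mean unit phasor of the phase difference across observations, here across subjects.

```python
import numpy as np

# Toy sketch of cross-region phase locking computed across subjects:
# PLV(t) = | mean over subjects of exp(i * (phi_a(t) - phi_b(t))) |.
# All data below are simulated; names are illustrative only.
rng = np.random.default_rng(1)
n_subjects, n_times = 20, 500

# Simulated band-limited phases: region B tracks region A with a fixed lag
# plus subject-specific noise, as stimulus-driven coupling would produce.
phase_a = rng.uniform(0, 2 * np.pi, (n_subjects, 1)) + np.linspace(0, 20 * np.pi, n_times)
phase_b = phase_a + 0.5 + rng.normal(0, 0.3, (n_subjects, n_times))

plv = np.abs(np.mean(np.exp(1j * (phase_a - phase_b)), axis=0))  # one value per time point
print(round(float(plv.mean()), 3))  # values near 1 indicate consistent coupling
```

A PLV near 1 means the phase difference is reproducible across subjects; uncorrelated phases drive it toward 0.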
|
213
|
Fu Z, Monahan PJ. Extracting Phonetic Features From Natural Classes: A Mismatch Negativity Study of Mandarin Chinese Retroflex Consonants. Front Hum Neurosci 2021; 15:609898. [PMID: 33841113 PMCID: PMC8029992 DOI: 10.3389/fnhum.2021.609898]
Abstract
How speech sounds are represented in the brain is not fully understood. The mismatch negativity (MMN) has proven to be a powerful tool in this regard. The MMN event-related potential is elicited by a deviant stimulus embedded within a series of repeating standard stimuli. Listeners construct auditory memory representations of these standards despite acoustic variability. In most designs that test speech sounds, however, this variation is typically intra-category: All standards belong to the same phonetic category. In the current paper, inter-category variation is presented in the standards. These standards vary in manner of articulation but share a common phonetic feature. In the standard retroflex experimental block, Mandarin Chinese speaking participants are presented with a series of "standard" consonants that share the feature [retroflex], interrupted by infrequent non-retroflex deviants. In the non-retroflex standard experimental block, non-retroflex standards are interrupted by infrequent retroflex deviants. The within-block MMN was calculated, as was the identity MMN (iMMN) to account for intrinsic differences in responses to the stimuli. We only observed a within-block MMN to the non-retroflex deviant embedded in the standard retroflex block. This suggests that listeners extract [retroflex] despite significant inter-category variation. In the non-retroflex standard block, because there is little on which to base a coherent auditory memory representation, no within-block MMN was observed. The iMMN to the retroflex was observed in a late time-window at centro-parieto-occipital electrode sites instead of fronto-central electrodes, where the MMN is typically observed, potentially reflecting the increased difficulty posed by the added variation in the standards. In short, participants can construct auditory memory representations despite significant acoustic and inter-category phonological variation so long as a shared phonetic feature binds them together.
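A minimal sketch of how an MMN-style difference wave is formed (simulated waveforms, not the study's EEG recordings): average the responses to standards and to deviants, then subtract; the identity MMN instead compares the same physical stimulus presented as deviant versus as standard across blocks.

```python
import numpy as np

# Illustrative ERP difference-wave computation with simulated single trials.
rng = np.random.default_rng(2)
times = np.linspace(0, 0.4, 200)                  # seconds post stimulus onset

def erp(amplitude, n_trials=100):
    # Negative deflection peaking ~150 ms plus trial-by-trial noise, averaged.
    wave = -amplitude * np.exp(-((times - 0.15) ** 2) / (2 * 0.03 ** 2))
    return (wave + rng.normal(0, 1, (n_trials, times.size))).mean(axis=0)

standard_erp = erp(amplitude=1.0)                 # frequent standards
deviant_erp = erp(amplitude=3.0)                  # infrequent deviants
mmn = deviant_erp - standard_erp                  # the MMN difference wave
peak_latency = float(times[np.argmin(mmn)])
print(round(peak_latency, 3))
```

In the identity-MMN variant, `deviant_erp` and `standard_erp` would come from the same stimulus recorded in different blocks, removing intrinsic stimulus-response differences.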
Affiliation(s)
- Zhanao Fu: Department of Linguistics, University of Toronto, Toronto, ON, Canada
- Philip J. Monahan: Department of Linguistics, University of Toronto, Toronto, ON, Canada; Department of Language Studies, University of Toronto Scarborough, Toronto, ON, Canada; Department of Psychology, University of Toronto Scarborough, Toronto, ON, Canada
|
215
|
Mahmud MS, Yeasin M, Bidelman GM. Data-driven machine learning models for decoding speech categorization from evoked brain responses. J Neural Eng 2021; 18. [PMID: 33690177 PMCID: PMC8738965 DOI: 10.1088/1741-2552/abecf0]
Abstract
Objective: Categorical perception (CP) of audio is critical to understanding how the human brain perceives speech sounds despite widespread variability in acoustic properties. Here, we investigated the spatiotemporal characteristics of auditory neural activity that reflects CP for speech (i.e., differentiates phonetic prototypes from ambiguous speech sounds). Approach: We recorded 64-channel electroencephalograms as listeners rapidly classified vowel sounds along an acoustic-phonetic continuum. We used support vector machine classifiers and stability selection to determine when and where in the brain CP was best decoded across space and time via source-level analysis of the event-related potentials. Main results: We found that early (120 ms) whole-brain data decoded speech categories (i.e., prototypical vs. ambiguous tokens) with 95.16% accuracy (area under the curve 95.14%; F1-score 95.00%). Separate analyses of left hemisphere (LH) and right hemisphere (RH) responses showed that LH decoding was more accurate and earlier than RH (89.03% vs. 86.45% accuracy; 140 ms vs. 200 ms). Stability (feature) selection identified 13 regions of interest (ROIs) out of 68 brain regions [including auditory cortex, supramarginal gyrus, and inferior frontal gyrus (IFG)] that showed categorical representation during stimulus encoding (0-260 ms). In contrast, 15 ROIs (including fronto-parietal regions, IFG, and motor cortex) were necessary to describe later decision stages of categorization (300-800 ms), and these areas were highly associated with the strength of listeners' categorical hearing (i.e., the slope of behavioral identification functions). Significance: Our data-driven multivariate models demonstrate that abstract categories emerge surprisingly early (∼120 ms) in the time course of speech processing and are dominated by engagement of a relatively compact fronto-temporo-parietal brain network.
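The time-resolved decoding logic can be illustrated schematically (simulated data; a simple nearest-centroid classifier stands in for the paper's SVM-plus-stability-selection pipeline): train on one half of trials, test on the other, and compute accuracy separately at each time point to ask *when* category information emerges.

```python
import numpy as np

# Simulated multichannel responses: category information is injected only
# from time index 20 onward, mimicking a post-onset emergence of decodability.
rng = np.random.default_rng(3)
n_trials, n_channels, n_times = 200, 64, 60
labels = rng.integers(0, 2, n_trials)             # e.g., prototypical vs. ambiguous

data = rng.normal(0, 1, (n_trials, n_channels, n_times))
data[labels == 1, :10, 20:] += 1.0                # class effect in 10 channels

train = np.arange(n_trials) % 2 == 0              # even/odd train-test split
accuracy = np.empty(n_times)
for t in range(n_times):
    c0 = data[train & (labels == 0), :, t].mean(axis=0)   # class centroids
    c1 = data[train & (labels == 1), :, t].mean(axis=0)
    x = data[~train, :, t]
    pred = (np.linalg.norm(x - c1, axis=1) < np.linalg.norm(x - c0, axis=1)).astype(int)
    accuracy[t] = (pred == labels[~train]).mean()

print(round(float(accuracy[:20].mean()), 2), round(float(accuracy[20:].mean()), 2))
```

Accuracy hovers at chance before the injected effect and rises well above it afterward, which is the pattern the study's latency claims rest on.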
Affiliation(s)
- Md Sultan Mahmud: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Mohammed Yeasin: Department of Electrical and Computer Engineering, University of Memphis, 3815 Central Avenue, Memphis, TN 38152, United States of America; Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America
- Gavin M Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States of America; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States of America; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, United States of America
|
216
|
Weiller C, Reisert M, Peto I, Hennig J, Makris N, Petrides M, Rijntjes M, Egger K. The ventral pathway of the human brain: A continuous association tract system. Neuroimage 2021; 234:117977. [PMID: 33757905 DOI: 10.1016/j.neuroimage.2021.117977]
Abstract
The brain hemispheres can be divided into an upper dorsal and a lower ventral system. Each system consists of distinct cortical regions connected via long association tracts. The tracts cross the central sulcus or the limen insulae to connect the frontal lobe with the posterior brain. The dorsal stream is associated with sensorimotor mapping. The ventral stream serves structural analysis and semantics in different domains, such as visual, acoustic, or spatial processing. How does the prefrontal cortex, regarded as the platform for the highest level of integration, incorporate information from these different domains? In the current view, the ventral pathway consists of several separate tracts related to different modalities. Originally, the assumption was that the ventral path is a continuum covering all modalities. The latter would imply a very different anatomical basis for cognitive and clinical models of processing. To further define the ventral connections, we used cutting-edge in vivo global tractography on high-resolution diffusion tensor imaging (DTI) data from 100 normal subjects from the Human Connectome Project and ex vivo preparation of fiber bundles in the extreme capsule of 8 human brains using the Klingler technique. Our data showed that ventral stream tracts traversing the extreme capsule form a continuous band of fibers that fans out anteriorly to the prefrontal cortex and posteriorly to temporal, occipital, and parietal cortical regions. Introduction of additional volumes of interest in the temporal and occipital lobes differentiated between the inferior fronto-occipital fascicle (IFOF) and the uncinate fascicle (UF). Unequivocally, in both experiments and in all subjects, we identified a connection between the inferior frontal and middle-to-posterior temporal cortical regions, otherwise known from nonhuman primate brain-tracing experiments as the temporo-frontal extreme capsule fascicle (ECF). In the human brain, this tract connects the language domains of "Broca's area" and "Wernicke's area". The differentiation into the three tracts IFOF, UF, and ECF seems arbitrary, as all three pass through the extreme capsule. Our data show that the ventral pathway represents a continuum: the three tracts merge seamlessly, and their streamlines showed considerable overlap in their anterior and posterior course. Terminal maps identified the prefrontal cortex in the frontal lobe and association cortex in the temporal, occipital, and parietal lobes as streamline endings. This anatomical substrate potentially enables the prefrontal cortex to integrate information across different domains and modalities.
Affiliation(s)
- Cornelius Weiller: Department of Neurology and Clinical Neuroscience, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106 Freiburg, Germany
- Marco Reisert: Department of Medical Physics, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Ivo Peto: Department of Neuroradiology, Faculty of Medicine, University of Freiburg, Freiburg, Germany; Department of Neurosurgery and Brain Repair, University of South Florida, Morsani College of Medicine, Tampa, FL, USA
- Jürgen Hennig: Department of Medical Physics, Faculty of Medicine, University of Freiburg, Freiburg, Germany
- Nikos Makris: Center for Morphometric Analysis, Department of Psychiatry and Neurology, A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital and Psychiatric Neuroimaging Laboratory, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, United States; Department of Anatomy and Neurobiology, Boston University School of Medicine, Boston, MA, United States
- Michael Petrides: Department of Neurology and Neurosurgery, McGill University, Montreal, Quebec, Canada
- Michel Rijntjes: Department of Neurology and Clinical Neuroscience, Faculty of Medicine, University of Freiburg, Breisacher Str. 64, 79106 Freiburg, Germany
- Karl Egger: Department of Neuroradiology, Faculty of Medicine, University of Freiburg, Freiburg, Germany
|
217
|
Wei Z, Fan Z, Qi Z, Tong Y, Guo Q, Chen L. Reorganization of auditory-visual network interactions in long-term unilateral postlingual hearing loss. J Clin Neurosci 2021; 87:97-102. [PMID: 33863544 DOI: 10.1016/j.jocn.2021.02.017]
Abstract
Long-term unilateral hearing loss can reorganize the functional network association between the bilateral auditory cortices, but alterations of other functional networks remain to be explored. We investigated the pattern of reorganization of functional network associations between the auditory and visual cortices caused by long-term postlingual unilateral hearing impairment (UHI) and its relationship with clinical characteristics. To this end, 48 patients with hearing loss caused by unilateral acoustic tumors and 52 matched healthy controls were enrolled, and high-resolution structural MRI and resting-state functional MRI data were collected to depict the brain network. Degree centrality (DC) was employed to evaluate the functional association of the auditory-visual network interaction. Group comparisons were performed to investigate network reorganization, and correlations with clinical data were calculated. Compared with the healthy control group, patients with UHI showed significantly increased DC between the auditory network (superior temporal gyrus and the medial geniculate body) and the visual network. This difference was positively correlated with the extent of hearing impairment, and the correlation was more pronounced for the superior temporal gyrus ipsilateral to the acoustic neuroma. These results suggest that long-term unilateral hearing impairment may enhance visual-auditory network interactions, and that the degree of reorganization is positively correlated with the pure tone average (PTA) and is more pronounced for the ipsilateral superior temporal gyrus, providing clinical evidence for cross-modal plasticity in UHI and its lateralization.
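Weighted degree centrality of the kind used here can be sketched as follows (simulated time series, not the study's data; the correlation threshold is an arbitrary illustration): correlate every region's time series with every other region's, zero out weak connections, and sum the surviving weights per region.

```python
import numpy as np

# Schematic degree-centrality computation on simulated resting-state signals.
rng = np.random.default_rng(4)
n_regions, n_volumes = 30, 180
ts = rng.normal(0, 1, (n_regions, n_volumes))
shared = rng.normal(0, 1, n_volumes)
ts[:5] += 0.8 * shared                     # a coupled subset of regions

r = np.corrcoef(ts)                        # region-by-region rsFC matrix
np.fill_diagonal(r, 0.0)                   # ignore self-connections
r[r < 0.25] = 0.0                          # keep only positive, supra-threshold edges
degree_centrality = r.sum(axis=1)          # weighted DC per region
print(np.argsort(degree_centrality)[-5:])  # most central regions
```

The coupled regions end up with the highest DC, which is the kind of group-level quantity the study then compared between patients and controls and correlated with PTA.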
Affiliation(s)
- Zixuan Wei: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China
- Zhen Fan: Neurosurgical Institute of Fudan University, China
- Zengxin Qi: Shanghai Clinical Medical Center of Neurosurgery, China
- Yusheng Tong: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China; Neurosurgical Institute of Fudan University, China; Shanghai Clinical Medical Center of Neurosurgery, China
- Qinglong Guo: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China; Neurosurgical Institute of Fudan University, China; Shanghai Clinical Medical Center of Neurosurgery, China
- Liang Chen: Department of Neurosurgery, Huashan Hospital, Shanghai Medical College, Fudan University, China; Neurosurgical Institute of Fudan University, China; Shanghai Clinical Medical Center of Neurosurgery, China
|
218
|
Convergence of heteromodal lexical retrieval in the lateral prefrontal cortex. Sci Rep 2021; 11:6305. [PMID: 33737672 PMCID: PMC7973515 DOI: 10.1038/s41598-021-85802-5]
Abstract
Lexical retrieval requires selecting and retrieving the most appropriate word from the lexicon to express a desired concept. Few studies have probed lexical retrieval with tasks other than picture naming, and when non-picture-naming lexical retrieval tasks have been applied, both convergent and divergent results have emerged. The presence of a single construct for auditory and visual processes of lexical retrieval would influence cognitive rehabilitation strategies for patients with aphasia. In this study, we performed support vector regression lesion-symptom mapping using a brain tumor model to test the hypothesis that brain regions specifically involved in lexical retrieval from visual and auditory stimuli represent overlapping neural systems. Principal components analysis of the language tasks revealed multicollinearity between picture naming, auditory naming, and a validated measure of word finding, implying the existence of redundant cognitive constructs. Nonparametric, multivariate lesion-symptom mapping across participants was used to model accuracies on each of the four language tasks. Lesions within overlapping clusters of 8,333 voxels and 21,512 voxels in the left lateral prefrontal cortex (PFC) were predictive of impaired picture naming and auditory naming, respectively. These data indicate a convergence of heteromodal lexical retrieval within the PFC.
|
219
|
Leipold S, Klein C, Jäncke L. Musical Expertise Shapes Functional and Structural Brain Networks Independent of Absolute Pitch Ability. J Neurosci 2021; 41:2496-2511. [PMID: 33495199 PMCID: PMC7984587 DOI: 10.1523/jneurosci.1985-20.2020]
Abstract
Professional musicians are a popular model for investigating experience-dependent plasticity in human large-scale brain networks. A minority of musicians possess absolute pitch, the ability to name a tone without a reference. The study of absolute pitch musicians provides insights into how a very specific talent is reflected in brain networks. Previous studies of the effects of musicianship and absolute pitch on large-scale brain networks have yielded highly heterogeneous findings regarding the localization and direction of the effects. This heterogeneity was likely influenced by small samples and vastly different methodological approaches. Here, we conducted a comprehensive multimodal assessment of the effects of musicianship and absolute pitch on intrinsic functional and structural connectivity, using a variety of commonly used and state-of-the-art multivariate methods in the largest sample to date (n = 153 female and male human participants; 52 absolute pitch musicians, 51 non-absolute pitch musicians, and 50 non-musicians). Our results show robust effects of musicianship on interhemispheric and intrahemispheric connectivity in both structural and functional networks. Crucially, most of the effects were replicable in both musicians with and without absolute pitch compared with non-musicians. However, we did not find evidence for an effect of absolute pitch on intrinsic functional or structural connectivity in our data: the two musician groups showed strikingly similar networks across all analyses. Our results suggest that long-term musical training is associated with robust changes in large-scale brain networks. The effects of absolute pitch on neural networks might be subtle, requiring very large samples or task-based experiments to be detected.
SIGNIFICANCE STATEMENT: A question that has fascinated neuroscientists, psychologists, and musicologists for a long time is how musicianship and absolute pitch, the rare talent to name a tone without a reference, are reflected in large-scale networks of the human brain. Much is still unknown, as previous studies have reported widely inconsistent results based on small samples. Here, we investigate the largest sample of musicians and non-musicians to date (n = 153) using a multitude of established and novel analysis methods. Results provide evidence for robust effects of musicianship on functional and structural networks that were replicable in two separate groups of musicians and independent of absolute pitch ability.
Affiliation(s)
- Simon Leipold: Division of Neuropsychology, Department of Psychology, University of Zurich, 8050 Zurich, Switzerland; Department of Psychiatry and Behavioral Sciences, Stanford University School of Medicine, Stanford, California 94305
- Carina Klein: Division of Neuropsychology, Department of Psychology, University of Zurich, 8050 Zurich, Switzerland
- Lutz Jäncke: Division of Neuropsychology, Department of Psychology, University of Zurich, 8050 Zurich, Switzerland; University Research Priority Program "Dynamics of Healthy Aging", University of Zurich, 8050 Zurich, Switzerland
|
220
|
Martinez Oeckel A, Rijntjes M, Glauche V, Kümmerer D, Kaller CP, Egger K, Weiller C. The extreme capsule and aphasia: proof-of-concept of a new way relating structure to neurological symptoms. Brain Commun 2021; 3:fcab040. [PMID: 33870191 PMCID: PMC8042249 DOI: 10.1093/braincomms/fcab040]
Abstract
We present anatomy-based symptom-lesion mapping to assess the association between lesions of tracts in the extreme capsule and aphasia. The study cohort consisted of 123 patients with acute left-hemispheric stroke without a lesion of language-related cortical areas of the Stanford atlas of functional regions of interest. On templates generated through global fibre tractography, lesions of the extreme capsule and of the arcuate fascicle were quantified and correlated with the occurrence of aphasia (n = 18) as defined by the Token Test. Damage to more than 15% of the slice plane through the extreme capsule was a strong independent predictor of aphasia in stroke patients (odds ratio 16.37, 95% confidence interval 3.11-86.16, P < 0.01). In contrast, stroke lesions covering more than 15% of the arcuate fascicle were not associated with aphasia. Our results support the relevance of a ventral pathway in the language network running through the extreme capsule.
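As a worked example of the effect-size arithmetic (the 2x2 counts below are invented for illustration; the abstract reports only the odds ratio and its confidence interval), an odds ratio and a 95% Wald interval can be computed from a lesion-extent-by-aphasia table:

```python
import math

# Hypothetical 2x2 table: rows = aphasia yes/no, columns = lesion > 15% / <= 15%.
a, b = 9, 6     # aphasia: lesion > 15%, lesion <= 15%
c, d = 9, 99    # no aphasia: lesion > 15%, lesion <= 15%

odds_ratio = (a * d) / (b * c)                      # cross-product ratio
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))
```

The paper's interval comes from a regression model rather than this simple Wald formula, so the widths differ, but the log-scale construction is the same idea.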
Affiliation(s)
- Ariane Martinez Oeckel: Department of Neurology and Clinical Neurosciences, Faculty of Medicine, University of Freiburg, Freiburg 79106, Germany
- Michel Rijntjes: Department of Neurology and Clinical Neurosciences, Faculty of Medicine, University of Freiburg, Freiburg 79106, Germany
- Volkmar Glauche: Department of Neurology and Clinical Neurosciences, Faculty of Medicine, University of Freiburg, Freiburg 79106, Germany
- Dorothee Kümmerer: Department of Neurology and Clinical Neurosciences, Faculty of Medicine, University of Freiburg, Freiburg 79106, Germany
- Christoph P Kaller: Department of Neuroradiology, Faculty of Medicine, University of Freiburg, Freiburg 79106, Germany
- Karl Egger: Department of Neuroradiology, Faculty of Medicine, University of Freiburg, Freiburg 79106, Germany
- Cornelius Weiller: Department of Neurology and Clinical Neurosciences, Faculty of Medicine, University of Freiburg, Freiburg 79106, Germany
|
221
|
Wang Q, Li HY, Li YD, Lv YT, Ma HB, Xiang AF, Jia XZ, Liu DQ. Resting-state abnormalities in functional connectivity of the default mode network in autism spectrum disorder: a meta-analysis. Brain Imaging Behav 2021; 15:2583-2592. [PMID: 33683528 DOI: 10.1007/s11682-021-00460-5]
Abstract
Increasing evidence has shown that resting-state brain connectivity of the default mode network (DMN), which is important for social cognition, is disrupted in autism spectrum disorder (ASD). However, previous neuroimaging studies have not produced consistent results. We therefore performed a meta-analysis of resting-state functional connectivity (rsFC) studies of the DMN in individuals with ASD and healthy controls (HCs) to provide a new perspective on the pathophysiology of ASD. We searched PubMed, Web of Science, and Embase using the terms ("ASD" OR "Autism") AND ("resting state" OR "rest") AND ("DMN" OR "default mode network") to identify studies published before January 2020. Ten resting-state datasets comprising 203 patients and 208 HCs were included. The Anisotropic Effect Size version of Signed Differential Mapping (AES-SDM) was applied to identify group differences. Compared with HCs, patients with ASD showed increased connectivity in the cerebellum, right middle temporal gyrus, superior occipital gyrus, right supramarginal gyrus, supplementary motor area, and putamen. Decreased connectivity was found in several DMN nodes, such as the medial prefrontal cortex, precuneus, and angular gyrus. These results may help to further clarify the neurobiological mechanisms of ASD.
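For intuition, a generic inverse-variance pooling step, the basic building block of such meta-analyses, can be sketched as follows (hypothetical effect sizes and variances; this is a plain fixed-effect combination, not the AES-SDM algorithm itself):

```python
import math

# Invented per-study effect sizes (e.g., Hedges' g for one DMN node) and
# their variances; each study is weighted by the inverse of its variance.
effects = [0.45, 0.30, 0.62, 0.28, 0.51]
variances = [0.04, 0.03, 0.06, 0.02, 0.05]

weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))       # standard error of the pooled effect
z = pooled / se                        # z > 1.96 ~ significant at alpha = 0.05
print(round(pooled, 3), round(z, 2))
```

Voxel-based methods such as AES-SDM repeat a weighted combination of this kind at every brain location, with additional anisotropic smoothing and permutation-based thresholding.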
Affiliation(s)
- Qing Wang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province, China
- Hua-Yun Li: College of Teacher Education, Zhejiang Normal University, Jinhua, China; Laboratory of Intelligent Education Technology and Application, Hangzhou, Zhejiang Province, China
- Yun-Da Li: School of Information and Electronics Technology, Jiamusi University, Jiamusi, China
- Ya-Ting Lv: Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Hui-Bin Ma: School of Information and Electronics Technology, Jiamusi University, Jiamusi, China
- An-Feng Xiang: Tongji University School of Medicine, Shanghai, China
- Xi-Ze Jia: Institute of Psychological Sciences, Hangzhou Normal University, Hangzhou, China; Zhejiang Key Laboratory for Research in Assessment of Cognitive Impairments, Hangzhou, China
- Dong-Qiang Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning Province, China
|
222
|
Iwaki H, Sonoda M, Osawa SI, Silverstein BH, Mitsuhashi T, Ukishiro K, Takayama Y, Kambara T, Kakinuma K, Suzuki K, Tominaga T, Nakasato N, Iwasaki M, Asano E. Your verbal questions beginning with 'what' will rapidly deactivate the left prefrontal cortex of listeners. Sci Rep 2021; 11:5257. [PMID: 33664359 PMCID: PMC7933162 DOI: 10.1038/s41598-021-84610-1] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Accepted: 02/15/2021] [Indexed: 12/31/2022] Open
Abstract
The left prefrontal cortex is essential for verbal communication. It remains uncertain at what time, to what extent, and with what type of phrase left-hemispheric dominant prefrontal activation is initiated during comprehension of spoken sentences. We clarified this issue by measuring event-related high-gamma activity during a task requiring responses to three-phrase questions configured in different orders. Questions beginning with a wh-interrogative deactivated the left posterior prefrontal cortex immediately after the 1st phrase offset and the anterior prefrontal cortex after the 2nd phrase offset. Left prefrontal high-gamma activity subsequently increased and peaked around the 3rd phrase offset. Conversely, questions starting with a concrete phrase deactivated the right orbitofrontal region and then activated the left posterior prefrontal cortex after the 1st phrase offset. Regardless of sentence type, high-gamma activity emerged earlier, by one phrase, in the left posterior prefrontal than in the anterior prefrontal region. Sentences beginning with a wh-interrogative may initially deactivate the left prefrontal cortex to prioritize bottom-up processing of upcoming auditory information. A concrete phrase may obliterate the inhibitory function of the right orbitofrontal region and facilitate top-down lexical prediction by the left prefrontal cortex. The left anterior prefrontal regions may be recruited for semantic integration of multiple concrete phrases.
Affiliation(s)
- Hirotaka Iwaki
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit, MI, 48201, USA; Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Masaki Sonoda
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit, MI, 48201, USA; Department of Neurosurgery, Graduate School of Medicine, Yokohama City University, Kanagawa, 2360004, Japan
- Shin-Ichiro Osawa
- Department of Neurosurgery, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Brian H Silverstein
- Translational Neuroscience Program, Wayne State University, Detroit, MI, 48201, USA
- Takumi Mitsuhashi
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit, MI, 48201, USA; Department of Neurosurgery, School of Medicine, Juntendo University, Tokyo, 1138421, Japan
- Kazushi Ukishiro
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan; Department of Neurosurgery, Graduate School of Medicine, Yokohama City University, Kanagawa, 2360004, Japan
- Yutaro Takayama
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan; Department of Neurosurgery, Graduate School of Medicine, Yokohama City University, Kanagawa, 2360004, Japan; Department of Neurosurgery, National Center of Neurology and Psychiatry, National Center Hospital, Tokyo, 1878551, Japan
- Toshimune Kambara
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit, MI, 48201, USA; Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Kazuo Kakinuma
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Kyoko Suzuki
- Department of Behavioral Neurology and Cognitive Neuroscience, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Teiji Tominaga
- Department of Neurosurgery, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Nobukazu Nakasato
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Masaki Iwasaki
- Department of Neurosurgery, National Center of Neurology and Psychiatry, National Center Hospital, Tokyo, 1878551, Japan
- Eishi Asano
- Department of Pediatrics, Children's Hospital of Michigan, Wayne State University, Detroit, MI, 48201, USA; Department of Neurology, Children's Hospital of Michigan, Wayne State University, Detroit, MI, 48201, USA
|
223
|
Rocchi F, Oya H, Balezeau F, Billig AJ, Kocsis Z, Jenison RL, Nourski KV, Kovach CK, Steinschneider M, Kikuchi Y, Rhone AE, Dlouhy BJ, Kawasaki H, Adolphs R, Greenlee JDW, Griffiths TD, Howard MA, Petkov CI. Common fronto-temporal effective connectivity in humans and monkeys. Neuron 2021; 109:852-868.e8. [PMID: 33482086 PMCID: PMC7927917 DOI: 10.1016/j.neuron.2020.12.026] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2020] [Revised: 10/02/2020] [Accepted: 12/30/2020] [Indexed: 01/24/2023]
Abstract
Human brain pathways supporting language and declarative memory are thought to have differentiated substantially during evolution. However, cross-species comparisons of site-specific effective connectivity between regions important for cognition have been missing. We harnessed functional imaging to visualize the effects of direct electrical brain stimulation in macaque monkeys and human neurosurgery patients. We discovered comparable effective connectivity between caudal auditory cortex and both ventro-lateral prefrontal cortex (VLPFC, including area 44) and parahippocampal cortex in both species. Human-specific differences were clearest in the form of stronger hemispheric lateralization effects. In humans, electrical tractography revealed remarkably rapid evoked potentials in VLPFC following auditory cortex stimulation, and speech sounds drove VLPFC, consistent with prior evidence in monkeys of direct auditory cortex projections to homologous vocalization-responsive regions. The results identify a common effective connectivity signature in human and nonhuman primates, which from auditory cortex appears equally direct to VLPFC and indirect to the hippocampus.
Affiliation(s)
- Francesca Rocchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Hiroyuki Oya
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Fabien Balezeau
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Zsuzsanna Kocsis
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Rick L Jenison
- Department of Neuroscience, University of Wisconsin - Madison, Madison, WI, USA
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Yukiko Kikuchi
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Brian J Dlouhy
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Ralph Adolphs
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Jeremy D W Greenlee
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK; Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Wellcome Centre for Human Neuroimaging, University College London, London, UK
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
- Christopher I Petkov
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne, UK
|
224
|
Karami M, Mehvari Habibabadi J, Nilipour R, Barekatain M, Gaillard WD, Soltanian-Zadeh H. Presurgical Language Mapping in Patients With Intractable Epilepsy: A Review Study. Basic Clin Neurosci 2021; 12:163-176. [PMID: 34925713 PMCID: PMC8672671 DOI: 10.32598/bcn.12.2.2053.1] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2020] [Revised: 10/10/2020] [Accepted: 11/08/2020] [Indexed: 02/01/2023] Open
Abstract
INTRODUCTION About 20% to 30% of patients with epilepsy are diagnosed with drug-resistant epilepsy, and one third of these are candidates for epilepsy surgery. Surgical resection of the epileptogenic tissue is a well-established method for treating patients with intractable focal epilepsy. Determining language laterality and locality is an important part of a comprehensive epilepsy program before surgery. Functional Magnetic Resonance Imaging (fMRI) has been increasingly employed as a non-invasive alternative to the Wada test and cortical stimulation. Sensitive and accurate language tasks are essential for any reliable fMRI mapping. METHODS The present study reviews the methods of presurgical fMRI language mapping and their dedicated fMRI tasks, specifically for patients with epilepsy. RESULTS Different language tasks, including verbal fluency, are used in fMRI to determine language laterality and locality in different languages such as Persian. There are some considerations, including the language materials and technical protocols for task design, that all presurgical teams should take into account. CONCLUSION Accurate presurgical language mapping is very important for preserving patients' language after surgery. This review was the first part of a project for designing standard tasks in Persian to support precise presurgical evaluation in Iranian PWFIE.
Affiliation(s)
- Mahdieh Karami
- Institute for Cognitive Science Studies (ICSS), Tehran, Iran
- Reza Nilipour
- Department of Speech Therapy, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Majid Barekatain
- Department of Psychiatry, School of Medicine, Isfahan University of Medical Sciences, Isfahan, Iran
- William D. Gaillard
- Center for Neuroscience and Behavioral Health, Children's National Medical Center, George Washington University, Washington, D.C., USA
- Hamid Soltanian-Zadeh
- Departments of Communication, School of Electrical and Computer Engineering, University of Tehran, Tehran, Iran
- Departments of Radiology and Research Administration, Henry Ford Health System, Detroit, MI, USA
|
225
|
Kim S, Schwalje AT, Liu AS, Gander PE, McMurray B, Griffiths TD, Choi I. Pre- and post-target cortical processes predict speech-in-noise performance. Neuroimage 2021; 228:117699. [PMID: 33387631 PMCID: PMC8291856 DOI: 10.1016/j.neuroimage.2020.117699] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 11/06/2020] [Accepted: 12/23/2020] [Indexed: 12/19/2022] Open
Abstract
Understanding speech in noise (SiN) is a complex task that recruits multiple cortical subsystems. Individuals vary in their ability to understand SiN in ways that cannot be explained by simple hearing profiles, which suggests that central factors may underlie this variance. Here, we elucidated cortical functions involved in a SiN task and their contributions to individual variance using both within- and across-subject approaches. Through a within-subject analysis of source-localized electroencephalography, we investigated how the acoustic signal-to-noise ratio (SNR) alters cortical evoked responses to a target word across the speech recognition areas, finding stronger responses in the left supramarginal gyrus (SMG; BA40, the dorsal lexicon area) with quieter noise. Through an individual-differences approach, we found that listeners show different neural sensitivity to the background noise and target speech, reflected in the amplitude ratio of earlier auditory-cortical responses to speech and noise, termed the internal SNR. Listeners with a better internal SNR showed better SiN performance. Further, we found that post-speech SMG activity explains additional variance in SiN performance that is not accounted for by internal SNR. This result demonstrates that at least two cortical processes contribute independently to SiN performance: pre-target processing that attenuates the neural representation of background noise, and post-target processing that extracts information from speech sounds.
Affiliation(s)
- Subong Kim
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN 47907, USA
- Adam T Schwalje
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Andrew S Liu
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Phillip E Gander
- Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Bob McMurray
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA; Department of Psychological and Brain Sciences, University of Iowa, Iowa City, IA 52242, USA
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne NE1 7RU, UK
- Inyong Choi
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA 52242, USA
|
226
|
Qiu Y, She S, Zhang S, Wu F, Liang Q, Peng Y, Yuan H, Ning Y, Wu H, Huang R. Cortical myelin content mediates differences in affective temperaments. J Affect Disord 2021; 282:1263-1271. [PMID: 33601705 DOI: 10.1016/j.jad.2021.01.038] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/14/2020] [Revised: 11/29/2020] [Accepted: 01/09/2021] [Indexed: 01/05/2023]
Abstract
BACKGROUND Affective temperaments are regarded as subclinical forms and precursors of mental disorders. They may serve as candidates to facilitate the diagnosis and prediction of mental disorders. Cortical myelination likely characterizes neurodevelopment and the evolution of cognitive functions and reflects brain functional demand. However, little is known about the relationship between affective temperaments and myelin plasticity. This study aims to analyze the association between affective temperaments and cortical myelin content (CMC) in the human brain. METHODS We measured affective temperaments using the Temperament Evaluation of Memphis, Pisa, Paris and San Diego Autoquestionnaire (TEMPS-A) in 106 healthy adults and used the ratio of T1- and T2-weighted images as a proxy for CMC. Using the unsupervised k-means clustering algorithm, we classified the cortical gray matter into heavily, intermediately, and lightly myelinated regions. The correlation between affective temperaments and CMC was calculated separately for the differently myelinated regions. RESULTS Hyperthymic temperament correlated negatively with CMC in the heavily myelinated (right postcentral gyrus and bilateral precentral gyrus) and lightly myelinated (bilateral frontal and lateral temporal) regions. Cyclothymic temperament showed a downward parabola-like correlation with CMC across the heavily, intermediately, and lightly myelinated areas of the bilateral parietal-temporal regions. LIMITATIONS The analysis was constrained to cortical regions. The results were obtained from healthy subjects, and we did not acquire data from patients with affective disorders, which may limit the generalizability of the present findings. CONCLUSION The findings suggest that hyperthymic and cyclothymic temperaments have a CMC basis in extensive brain regions.
Affiliation(s)
- Yidan Qiu
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Shenglin She
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou, 510370, China
- Shufei Zhang
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Fengchun Wu
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou, 510370, China
- Qunjun Liang
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Yongjun Peng
- Department of Medical Imaging, Zhuhai People's Hospital (Zhuhai Hospital Affiliated with Jinan University), Zhuhai, 519000, China
- Haishan Yuan
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
- Yuping Ning
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou, 510370, China
- Huawang Wu
- The Affiliated Brain Hospital of Guangzhou Medical University (Guangzhou Huiai Hospital), Guangzhou, 510370, China; Guangdong Engineering Technology Research Center for Translational Medicine of Mental Disorders, Guangzhou, 510370, China
- Ruiwang Huang
- Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology, Center for Studies of Psychological Application; Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, 510631, China
|
227
|
Johnson JF, Belyk M, Schwartze M, Pinheiro AP, Kotz SA. Expectancy changes the self-monitoring of voice identity. Eur J Neurosci 2021; 53:2681-2695. [PMID: 33638190 PMCID: PMC8252045 DOI: 10.1111/ejn.15162] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/21/2020] [Revised: 01/18/2021] [Accepted: 02/20/2021] [Indexed: 12/02/2022]
Abstract
Self-voice attribution can become difficult when voice characteristics are ambiguous, but functional magnetic resonance imaging (fMRI) investigations of such ambiguity are sparse. We utilized voice-morphing (self-other) to manipulate (un-)certainty in self-voice attribution in a button-press paradigm. This allowed us to investigate how levels of self-voice certainty alter activation in brain regions monitoring voice identity and unexpected changes in voice playback quality. fMRI results confirmed a self-voice suppression effect in the right anterior superior temporal gyrus (aSTG) when self-voice attribution was unambiguous. Although the right inferior frontal gyrus (IFG) was more active during a self-generated compared to a passively heard voice, the putative role of this region in detecting unexpected self-voice changes during the action was demonstrated only when hearing the voice of another speaker, and not when attribution was uncertain. Further research on the link between the right aSTG and IFG is required and may establish a threshold for monitoring voice identity in action. The current results have implications for a better understanding of the altered experience of self-voice feedback in auditory verbal hallucinations.
Affiliation(s)
- Joseph F Johnson
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Michel Belyk
- Division of Psychology and Language Sciences, University College London, London, UK
- Michael Schwartze
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands
- Ana P Pinheiro
- Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
- Sonja A Kotz
- Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, the Netherlands; Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
|
228
|
Yoshioka TW, Doi T, Abdolrahmani M, Fujita I. Specialized contributions of mid-tier stages of dorsal and ventral pathways to stereoscopic processing in macaque. eLife 2021; 10:58749. [PMID: 33625356 PMCID: PMC7959693 DOI: 10.7554/elife.58749] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2020] [Accepted: 02/18/2021] [Indexed: 11/22/2022] Open
Abstract
The division of labor between the dorsal and ventral visual pathways has been well studied, but not often with direct comparison at the single-neuron resolution with matched stimuli. Here we directly compared how single neurons in MT and V4, mid-tier areas of the two pathways, process binocular disparity, a powerful cue for 3D perception and actions. We found that MT neurons transmitted disparity signals more quickly and robustly, whereas V4 or its upstream neurons transformed the signals into sophisticated representations more prominently. Therefore, signaling speed and robustness were traded for transformation between the dorsal and ventral pathways. The key factor in this tradeoff was disparity-tuning shape: V4 neurons had more even-symmetric tuning than MT neurons. Moreover, the tuning symmetry predicted the degree of signal transformation across neurons similarly within each area, implying a general role of tuning symmetry in the stereoscopic processing by the two pathways.
Affiliation(s)
- Toshihide W Yoshioka
- Laboratory for Cognitive Neuroscience, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Center for Information and Neural Networks, Osaka University and National Institute of Information and Communications Technology, Suita, Osaka, Japan
- Takahiro Doi
- Department of Psychology, University of Pennsylvania, Philadelphia, United States
- Mohammad Abdolrahmani
- Laboratory for Neural Circuits and Behavior, RIKEN Center for Brain Science (CBS), Wako, Japan
- Ichiro Fujita
- Laboratory for Cognitive Neuroscience, Graduate School of Frontier Biosciences, Osaka University, Suita, Osaka, Japan; Center for Information and Neural Networks, Osaka University and National Institute of Information and Communications Technology, Suita, Osaka, Japan
|
229
|
Cerebral white matter connectivity, cognition, and age-related macular degeneration. NEUROIMAGE-CLINICAL 2021; 30:102594. [PMID: 33662707 PMCID: PMC7930609 DOI: 10.1016/j.nicl.2021.102594] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/16/2020] [Revised: 02/05/2021] [Accepted: 02/06/2021] [Indexed: 12/24/2022]
Abstract
Age-related macular degeneration (AMD) is a common retinal disease associated with cognitive impairment in older adults. The mechanism(s) that account for the link between AMD and cognitive decline remain unclear. Here we aim to shed light on this issue by investigating whether relationships between cognition and white matter in the brain differ by AMD status. In a direct group comparison of brain connectometry maps from diffusion-weighted images, AMD patients showed significantly weaker quantitative anisotropy (QA) than healthy controls, predominantly in the splenium and left optic radiation. The QA of these tracts, however, did not correlate with the visual acuity measure, indicating that this group effect is not directly driven by visual loss. The AMD and control groups did not differ significantly in cognitive performance. Across all participants, better cognitive performance (e.g. verbal fluency) was associated with stronger connectivity in white matter tracts including the splenium and the left inferior fronto-occipital fasciculus/inferior longitudinal fasciculus. However, there were significant interactions between group and cognitive performance (verbal fluency, memory), suggesting that the relation between QA and cognitive performance was weaker in AMD patients than in controls. This may be explained by unmeasured determinants of performance that are more common or impactful in AMD, or by a recruitment bias whereby the AMD group had higher cognitive reserve. In general, our findings suggest that neural degeneration in the brain might occur in parallel to AMD in the eyes, although the participants studied here do not (yet) exhibit overt cognitive declines per standard assessments.
|
230
|
Overath T, Paik JH. From acoustic to linguistic analysis of temporal speech structure: Acousto-linguistic transformation during speech perception using speech quilts. Neuroimage 2021; 235:117887. [PMID: 33617990 PMCID: PMC8246445 DOI: 10.1016/j.neuroimage.2021.117887] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2019] [Revised: 01/18/2021] [Accepted: 02/15/2021] [Indexed: 11/22/2022] Open
Abstract
Speech perception entails the mapping of the acoustic waveform to linguistic representations. For this transformation to succeed, the speech signal needs to be tracked over various temporal windows at high temporal precision in order to decode linguistic units ranging from phonemes (tens of milliseconds) to sentences (seconds). Here, we tested the hypothesis that cortical processing of speech-specific temporal structure is modulated by higher-level linguistic analysis. Using fMRI, we measured BOLD signal changes to 4 s long speech quilts with variable temporal structure (30, 120, 480, 960 ms segment lengths), as well as natural speech, created from a familiar (English) or foreign (Korean) language. We found evidence for the acoustic analysis of temporal speech properties in superior temporal sulcus (STS): the BOLD signal increased as a function of temporal speech structure in both familiar and foreign languages. However, activity in left inferior frontal gyrus (IFG) revealed evidence for linguistic processing of temporal speech properties: the BOLD signal increased as a function of temporal speech structure only in familiar, but not in foreign speech. Network connectivity analyses suggested that left IFG modulates the processing of temporal speech structure in primary and non-primary auditory cortex, which in turn sensitizes the analysis of temporal speech structure in STS. The results thus suggest that acousto-linguistic transformation of temporal speech structure is achieved by a cortical network comprising primary and non-primary auditory cortex, STS, and left IFG.
Affiliation(s)
- Tobias Overath
- Department of Psychology & Neuroscience, Duke University, Durham, North Carolina, 27708, USA; Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, 27708, USA; Center for Cognitive Neuroscience, Duke University, Durham, North Carolina, 27708, USA
- Joon H Paik
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina, 27708, USA
|
231
|
Memory Load Alters Perception-Related Neural Oscillations during Multisensory Integration. J Neurosci 2021; 41:1505-1515. [PMID: 33310755 DOI: 10.1523/jneurosci.1397-20.2020] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/02/2020] [Revised: 10/02/2020] [Accepted: 10/29/2020] [Indexed: 12/16/2022] Open
Abstract
Integrating information across different senses is a central feature of human perception. Previous research suggests that multisensory integration is shaped by a context-dependent and largely adaptive interplay between stimulus-driven bottom-up and top-down endogenous influences. One critical question concerns the extent to which this interplay is sensitive to the amount of available cognitive resources. In the present study, we investigated the influence of limited cognitive resources on audiovisual integration by measuring high-density electroencephalography (EEG) in healthy participants performing the sound-induced flash illusion (SIFI) and a verbal n-back task (0-back, low load and 2-back, high load) in a dual-task design. In the SIFI, the integration of a flash with two rapid beeps can induce the illusory perception of two flashes. We found that high compared with low load increased illusion susceptibility and modulated neural oscillations underlying illusion-related crossmodal interactions. Illusion perception under high load was associated with reduced early β power (18-26 Hz, ∼70 ms) in auditory and motor areas, presumably reflecting an early mismatch signal and subsequent top-down influences including increased frontal θ power (7-9 Hz, ∼120 ms) in mid-anterior cingulate cortex (ACC) and a later β power suppression (13-22 Hz, ∼350 ms) in prefrontal and auditory cortex. Our study demonstrates that integrative crossmodal interactions underlying the SIFI are sensitive to the amount of available cognitive resources and that multisensory integration engages top-down θ and β oscillations when cognitive resources are scarce. SIGNIFICANCE STATEMENT The integration of information across multiple senses, a remarkable ability of our perceptual system, is influenced by multiple context-related factors, the role of which is highly debated. It is, for instance, poorly understood how available cognitive resources influence crossmodal interactions during multisensory integration. We addressed this question using the sound-induced flash illusion (SIFI), a phenomenon in which the integration of two rapid beeps together with a flash induces the illusion of a second flash. Replicating our previous work, we demonstrate that depletion of cognitive resources through a working memory (WM) task increases the perception of the illusion. With respect to the underlying neural processes, we show that when available resources are limited, multisensory integration engages top-down θ and β oscillations.
232
Balasubramaniam R, Haegens S, Jazayeri M, Merchant H, Sternad D, Song JH. Neural Encoding and Representation of Time for Sensorimotor Control and Learning. J Neurosci 2021; 41:866-872. [PMID: 33380468 PMCID: PMC7880297 DOI: 10.1523/jneurosci.1652-20.2020] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Received: 06/30/2020] [Revised: 11/10/2020] [Accepted: 11/12/2020] [Indexed: 11/21/2022] Open
Abstract
The ability to perceive and produce movements in the real world with precise timing is critical for survival in animals, including humans. However, research on sensorimotor timing has rarely considered the tight interrelation between perception, action, and cognition. In this review, we present new evidence from behavioral, computational, and neural studies in humans and nonhuman primates, suggesting a pivotal link between sensorimotor control and temporal processing, as well as describing new theoretical frameworks regarding timing in perception and action. We first discuss the link between movement coordination and interval-based timing by addressing how motor training develops accurate spatiotemporal patterns in behavior and influences the perception of temporal intervals. We then discuss how motor expertise results from establishing task-relevant neural manifolds in sensorimotor cortical areas and how the geometry and dynamics of these manifolds help reduce timing variability. We also highlight how neural dynamics in sensorimotor areas are involved in beat-based timing. These lines of research aim to extend our understanding of how timing arises from and contributes to perceptual-motor behaviors in complex environments to seamlessly interact with other cognitive processes.
Affiliation(s)
- Hugo Merchant
- Instituto de Neurobiologia, UNAM, campus Juriquilla, Querétaro, México 76230
233
An H, Ho Kei S, Auksztulewicz R, Schnupp JWH. Do Auditory Mismatch Responses Differ Between Acoustic Features? Front Hum Neurosci 2021; 15:613903. [PMID: 33597853 PMCID: PMC7882487 DOI: 10.3389/fnhum.2021.613903] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Received: 10/04/2020] [Accepted: 01/07/2021] [Indexed: 11/13/2022] Open
Abstract
Mismatch negativity (MMN) is the electroencephalographic (EEG) waveform obtained by subtracting event-related potential (ERP) responses evoked by expected standard stimuli from responses evoked by unexpected deviant stimuli. While the MMN is thought to reflect an unexpected change in an ongoing, predictable stimulus, it is unknown whether MMN responses evoked by changes in different stimulus features have different magnitudes, latencies, and topographies. The present study aimed to investigate whether MMN responses differ depending on whether a sudden stimulus change occurs in pitch, duration, location, or vowel identity. To calculate ERPs to standard and deviant stimuli, EEG signals were recorded in normal-hearing participants (N = 20; 13 males, 7 females) who listened to roving oddball sequences of artificial syllables. In the roving paradigm, any given stimulus is repeated several times to form a standard, and then suddenly replaced with a deviant stimulus which differs from the standard. Here, deviants differed from preceding standards along one of four features (pitch, duration, vowel, or interaural level difference). The feature levels were individually chosen to match behavioral discrimination performance. We identified neural activity evoked by unexpected violations along all four acoustic dimensions. Evoked responses to deviant stimuli increased in amplitude relative to the responses to standard stimuli. A univariate (channel-by-channel) analysis yielded no significant differences between MMN responses following violations of different features. However, in a multivariate analysis (pooling information from multiple EEG channels), acoustic features could be decoded from the topography of mismatch responses, although at later latencies than those typical for the MMN. These results support the notion that deviant feature detection may be subserved by a different process than general mismatch detection.
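The MMN difference wave is conventionally computed as the deviant ERP minus the standard ERP. That subtraction can be sketched in a few lines; this is a generic illustration on simulated single-channel epochs, not the authors' analysis pipeline, and all array names are hypothetical:

```python
import numpy as np

def mismatch_negativity(deviant_epochs, standard_epochs):
    """Difference wave: mean ERP to deviants minus mean ERP to standards.

    Each input is an array of shape (n_trials, n_samples) for one channel.
    """
    return deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Simulated data: standards are pure noise; deviants carry an injected
# negative deflection between samples 100 and 150
rng = np.random.default_rng(0)
standard_epochs = rng.normal(0.0, 1.0, size=(200, 300))
deviant_epochs = rng.normal(0.0, 1.0, size=(60, 300))
deviant_epochs[:, 100:150] -= 2.0  # simulated mismatch response
mmn = mismatch_negativity(deviant_epochs, standard_epochs)
```

Averaging within each condition before subtracting keeps unequal trial counts (unavoidable in a roving oddball, where deviants are rarer than standards) from biasing the difference wave; a real pipeline would additionally baseline-correct and average across channels or participants.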
Affiliation(s)
- HyunJung An
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Shing Ho Kei
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
- Ryszard Auksztulewicz
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong; Department of Neuroscience, Max Planck Institute for Empirical Aesthetics, Frankfurt, Germany
- Jan W H Schnupp
- Department of Neuroscience, City University of Hong Kong, Kowloon, Hong Kong
234
Michaelis K, Miyakoshi M, Norato G, Medvedev AV, Turkeltaub PE. Motor engagement relates to accurate perception of phonemes and audiovisual words, but not auditory words. Commun Biol 2021; 4:108. [PMID: 33495548 PMCID: PMC7835217 DOI: 10.1038/s42003-020-01634-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 02/12/2020] [Accepted: 12/15/2020] [Indexed: 11/12/2022] Open
Abstract
A longstanding debate has surrounded the role of the motor system in speech perception, but progress in this area has been limited by tasks that only examine isolated syllables and conflate decision-making with perception. Using an adaptive task that temporally isolates perception from decision-making, we examined an EEG signature of motor activity (sensorimotor μ/beta suppression) during the perception of auditory phonemes, auditory words, audiovisual words, and environmental sounds while holding difficulty constant at two levels (Easy/Hard). Results revealed left-lateralized sensorimotor μ/beta suppression that was related to perception of speech but not environmental sounds. Audiovisual word and phoneme stimuli showed enhanced left sensorimotor μ/beta suppression for correct relative to incorrect trials, while auditory word stimuli showed enhanced suppression for incorrect trials. Our results demonstrate that motor involvement in perception is left-lateralized, is specific to speech stimuli, and is not simply the result of domain-general processes. These results provide evidence for an interactive network for speech perception in which dorsal stream motor areas are dynamically engaged during the perception of speech depending on the characteristics of the speech signal. Crucially, this motor engagement has different effects on the perceptual outcome depending on the lexicality and modality of the speech stimulus. Michaelis et al. used extra-cranial EEG during a forced-choice identification task to investigate the role of the motor system in speech perception. Their findings suggest that left hemisphere dorsal stream motor areas are dynamically engaged during speech perception based on the properties of the stimulus.
Affiliation(s)
- Kelly Michaelis
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Human Cortical Physiology and Stroke Neurorehabilitation Section, National Institute for Neurological Disorders and Stroke (NINDS), National Institutes of Health, Bethesda, MD, USA
- Makoto Miyakoshi
- Swartz Center for Computational Neuroscience, Institute for Neural Computation, University of California San Diego, San Diego, CA, USA
- Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
- Andrei V Medvedev
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA
- Peter E Turkeltaub
- Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington, DC, USA; Research Division, Medstar National Rehabilitation Hospital, Washington, DC, USA
235
An auditory hand-proximity effect: The auditory Simon effect is enhanced near the hands. Psychon Bull Rev 2021; 28:853-861. [PMID: 33469849 DOI: 10.3758/s13423-020-01860-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Accepted: 12/07/2020] [Indexed: 11/08/2022]
Abstract
Visual processing of stimuli near the hands is altered compared with processing of stimuli far from the hands. Here, we aimed to test whether this alteration extends to auditory processing. Participants performed an auditory Simon task either with their hands close to the loudspeakers or far from them. Two experiments consistently showed that the auditory Simon effect was enhanced when the hands were close to the speakers compared with far from them, consistent with previous findings of an enhanced visual Simon effect near the hands. Furthermore, the hand-proximity effects in the auditory and visual Simon tasks (an enhanced Simon effect near the hands compared with far from the hands) were comparable, indicating that the hand-proximity effect is reliable across the visual and auditory modalities. Thus, the present study extends the hand-proximity effect from vision to audition.
236
Krishna S, Kakaizada S, Almeida N, Brang D, Hervey-Jumper S. Central Nervous System Plasticity Influences Language and Cognitive Recovery in Adult Glioma. Neurosurgery 2021; 89:539-548. [PMID: 33476391 DOI: 10.1093/neuros/nyaa456] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Received: 02/04/2020] [Accepted: 08/05/2020] [Indexed: 01/01/2023] Open
Abstract
Gliomas exist within the framework of complex neuronal circuitry in which network dynamics influence both tumor biology and cognition. The generalized impairment of cognition or loss of language function is a common occurrence for glioma patients. The interface between intrinsic brain tumors such as gliomas and functional cognitive networks is poorly understood. The ability to communicate effectively is critically important for receiving oncological therapies and maintaining a high quality of life. Although the propensity of gliomas to infiltrate cortical and subcortical structures and disrupt key anatomic language pathways is well documented, there is new evidence offering insight into the network and cellular mechanisms underpinning glioma-related aphasia and aphasia recovery. In this review, we will outline the current understanding of the mechanisms of cognitive dysfunction and recovery, using aphasia as an illustrative model.
Affiliation(s)
- Saritha Krishna
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, California
- Sofia Kakaizada
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, California
- Nyle Almeida
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, California
- David Brang
- Department of Psychology, University of Michigan, Ann Arbor, Michigan
- Shawn Hervey-Jumper
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, California
237
Bidelman GM, Pearson C, Harrison A. Lexical Influences on Categorical Speech Perception Are Driven by a Temporoparietal Circuit. J Cogn Neurosci 2021; 33:840-852. [PMID: 33464162 DOI: 10.1162/jocn_a_01678] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Indexed: 11/04/2022]
Abstract
Categorical judgments of otherwise identical phonemes are biased toward hearing words (i.e., the "Ganong effect"), suggesting that lexical context influences the perception of even basic speech primitives. Lexical biasing could manifest via late-stage postperceptual mechanisms related to decision-making or, alternatively, top-down linguistic inference that acts on early perceptual coding. Here, we exploited the temporal sensitivity of EEG to resolve the spatiotemporal dynamics of these context-related influences on speech categorization. Listeners rapidly classified sounds from a /gɪ/-/kɪ/ gradient presented in opposing word-nonword contexts (GIFT-kift vs. giss-KISS), designed to bias perception toward lexical items. Phonetic perception shifted toward the direction of words, establishing a robust Ganong effect behaviorally. ERPs revealed a neural analog of lexical biasing emerging within ~200 msec. Source analyses uncovered a distributed neural network supporting the Ganong effect, including middle temporal gyrus, inferior parietal lobe, and middle frontal cortex. Yet, among Ganong-sensitive regions, only left middle temporal gyrus and inferior parietal lobe predicted behavioral susceptibility to lexical influence. Our findings confirm that lexical status rapidly constrains sublexical categorical representations for speech within several hundred milliseconds but likely does so outside the purview of canonical auditory-sensory brain areas.
Affiliation(s)
- Gavin M Bidelman
- University of Memphis, TN; University of Tennessee Health Sciences Center, Memphis, TN
238
Vaquero L, Ramos-Escobar N, Cucurell D, François C, Putkinen V, Segura E, Huotilainen M, Penhune V, Rodríguez-Fornells A. Arcuate fasciculus architecture is associated with individual differences in pre-attentive detection of unpredicted music changes. Neuroimage 2021; 229:117759. [PMID: 33454403 DOI: 10.1016/j.neuroimage.2021.117759] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Received: 06/09/2020] [Revised: 12/16/2020] [Accepted: 01/06/2021] [Indexed: 12/12/2022] Open
Abstract
The mismatch negativity (MMN) is an event-related brain potential (ERP) elicited by unpredicted sounds presented in a sequence of repeated auditory stimuli. The neural sources of the MMN have been previously attributed to a fronto-temporo-parietal network which crucially overlaps with the so-called auditory dorsal stream, involving inferior and middle frontal, inferior parietal, and superior and middle temporal regions. These cortical areas are structurally connected by the arcuate fasciculus (AF), a three-branch pathway supporting the feedback-feedforward loop involved in auditory-motor integration, auditory working memory, storage of acoustic templates, as well as comparison and update of those templates. Here, we characterized the individual differences in the white-matter macrostructural properties of the AF and explored their link to the electrophysiological marker of passive change detection gathered in a melodic multifeature MMN-EEG paradigm in 26 healthy young adults without musical training. Our results show that left fronto-temporal white-matter connectivity plays an important role in the pre-attentive detection of rhythm modulations within a melody. Previous studies have shown that this AF segment is also critical for language processing and learning. This strong coupling between structure and function in auditory change detection might be related to lifetime linguistic (and possibly musical) exposure and experiences, as well as to the timing-processing specialization of the left auditory cortex. To the best of our knowledge, this is the first time that neurophysiological (EEG) indexes and brain white-matter connectivity indexes from DTI tractography have been studied together. Thus, the present results, although still exploratory, add to the existing evidence on the importance of studying the constraints imposed on cognitive functions by the underlying structural connectivity.
Affiliation(s)
- Lucía Vaquero
- Laboratory of Cognitive and Computational Neuroscience, Complutense University of Madrid and Polytechnic University of Madrid, Campus Científico y Tecnológico de la UPM, Pozuelo de Alarcón, 28223 Madrid, Spain
- Neus Ramos-Escobar
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- David Cucurell
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Clément François
- Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Aix Marseille Univ, CNRS, LPL, Aix-en-Provence, France
- Vesa Putkinen
- Turku PET Centre, University of Turku, Turku, Finland
- Emma Segura
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain
- Minna Huotilainen
- Cicero Learning and Cognitive Brain Research Unit, University of Helsinki, Helsinki, Finland
- Virginia Penhune
- Penhune Laboratory for Motor Learning and Neural Plasticity, Concordia University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research (BRAMS), Montreal, QC, Canada; Center for Research on Brain, Language and Music (CRBLM), McGill University, Montreal, QC, Canada
- Antoni Rodríguez-Fornells
- Department of Cognition, Development and Education Psychology, and Institute of Neurosciences, University of Barcelona, Barcelona, Spain; Cognition and Brain Plasticity Unit, Bellvitge Biomedical Research Institute (IDIBELL), L'Hospitalet de Llobregat, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
239
Lin IF, Itahashi T, Kashino M, Kato N, Hashimoto RI. Brain activations while processing degraded speech in adults with autism spectrum disorder. Neuropsychologia 2021; 152:107750. [PMID: 33417913 DOI: 10.1016/j.neuropsychologia.2021.107750] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Received: 08/04/2020] [Revised: 12/14/2020] [Accepted: 12/31/2020] [Indexed: 11/17/2022]
Abstract
Individuals with autism spectrum disorder (ASD) are found to have difficulties in understanding speech in adverse conditions. In this study, we used noise-vocoded speech (VS) to investigate neural processing of degraded speech in individuals with ASD. We ran fMRI experiments in the ASD group and a typically developed control (TDC) group while they listened to clear speech (CS), VS, and spectrally rotated VS (SRVS), and they were requested to pay attention to the heard sentence and answer whether it was intelligible or not. The VS used in this experiment was spectrally degraded but still intelligible, but the SRVS was unintelligible. We recruited 21 right-handed adult males with ASD and 24 age-matched and right-handed male TDC participants for this experiment. Compared with the TDC group, we observed reduced functional connectivity (FC) between the left dorsal premotor cortex and left temporoparietal junction in the ASD group for the effect of task difficulty in speech processing, computed as VS-(CS + SRVS)/2. Furthermore, the observed reduced FC was negatively correlated with their Autism-Spectrum Quotient scores. This observation supports our hypothesis that the disrupted dorsal stream for attentive process of degraded speech in individuals with ASD might be related to their difficulty in understanding speech in adverse conditions.
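The task-difficulty effect reported above, VS − (CS + SRVS)/2, is a linear contrast over per-condition estimates; a minimal sketch follows (the function name and the example values are hypothetical, not taken from the study):

```python
def difficulty_contrast(beta_vs, beta_cs, beta_srvs):
    """Contrast VS - (CS + SRVS)/2: activity tied to effortful processing of
    degraded-but-intelligible speech, over the average of the easy (clear
    speech) and unintelligible (spectrally rotated) control conditions."""
    return beta_vs - (beta_cs + beta_srvs) / 2.0

# Hypothetical per-condition connectivity estimates for one region pair
contrast_value = difficulty_contrast(1.2, 0.8, 0.4)
```

A positive value indicates stronger coupling precisely when the speech is degraded yet still intelligible, i.e., when attentive decoding is most taxed; the clear and rotated conditions jointly control for intelligibility and acoustic complexity.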
Affiliation(s)
- I-Fan Lin
- Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0124, Japan; Department of Medicine, Taipei Medical University, Taipei, Taiwan, 11031; Department of Occupational Medicine, Shuang Ho Hospital, New Taipei City, Taiwan, 23561
- Takashi Itahashi
- Medical Institute of Developmental Disabilities Research, Showa University Karasuyama Hospital, Tokyo, 157-8577, Japan
- Makio Kashino
- Communication Science Laboratories, NTT Corporation, Atsugi, Kanagawa, 243-0124, Japan; School of Engineering, Tokyo Institute of Technology, Yokohama, 226-8503, Japan; Graduate School of Education, University of Tokyo, Tokyo, 113-0033, Japan
- Nobumasa Kato
- Medical Institute of Developmental Disabilities Research, Showa University Karasuyama Hospital, Tokyo, 157-8577, Japan
- Ryu-Ichiro Hashimoto
- Medical Institute of Developmental Disabilities Research, Showa University Karasuyama Hospital, Tokyo, 157-8577, Japan; Department of Language Sciences, Tokyo Metropolitan University, Tokyo, 192-0364, Japan
240
Luthra S. The Role of the Right Hemisphere in Processing Phonetic Variability Between Talkers. Neurobiology of Language (Cambridge, Mass.) 2021; 2:138-151. [PMID: 37213418 PMCID: PMC10174361 DOI: 10.1162/nol_a_00028] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Received: 04/17/2020] [Accepted: 11/13/2020] [Indexed: 05/23/2023]
Abstract
Neurobiological models of speech perception posit that both left and right posterior temporal brain regions are involved in the early auditory analysis of speech sounds. However, frank deficits in speech perception are not readily observed in individuals with right hemisphere damage. Instead, damage to the right hemisphere is often associated with impairments in vocal identity processing. Herein lies an apparent paradox: The mapping between acoustics and speech sound categories can vary substantially across talkers, so why might right hemisphere damage selectively impair vocal identity processing without obvious effects on speech perception? In this review, I attempt to clarify the role of the right hemisphere in speech perception through a careful consideration of its role in processing vocal identity. I review evidence showing that right posterior superior temporal, right anterior superior temporal, and right inferior / middle frontal regions all play distinct roles in vocal identity processing. In considering the implications of these findings for neurobiological accounts of speech perception, I argue that the recruitment of right posterior superior temporal cortex during speech perception may specifically reflect the process of conditioning phonetic identity on talker information. I suggest that the relative lack of involvement of other right hemisphere regions in speech perception may be because speech perception does not necessarily place a high burden on talker processing systems, and I argue that the extant literature hints at potential subclinical impairments in the speech perception abilities of individuals with right hemisphere damage.
241
Feng G, Gan Z, Llanos F, Meng D, Wang S, Wong PCM, Chandrasekaran B. A distributed dynamic brain network mediates linguistic tone representation and categorization. Neuroimage 2021; 224:117410. [PMID: 33011415 PMCID: PMC7749825 DOI: 10.1016/j.neuroimage.2020.117410] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Received: 05/14/2020] [Revised: 08/21/2020] [Accepted: 09/25/2020] [Indexed: 12/21/2022] Open
Abstract
Successful categorization requires listeners to represent the incoming sensory information, resolve the "blooming, buzzing confusion" inherent to noisy sensory signals, and leverage the accumulated evidence towards making a decision. Despite decades of intense debate, the neural systems underlying speech categorization remain unresolved. Here we assessed the neural representation and categorization of lexical tones by native Mandarin speakers (N = 31) across a range of acoustic and contextual variabilities (talkers, perceptual saliences, and stimulus-contexts) using functional magnetic resonance imaging (fMRI) and an evidence accumulation model of decision-making. Univariate activation and multivariate pattern analyses reveal that the acoustic-variability-tolerant representations of tone category are observed within the middle portion of the left superior temporal gyrus (STG). Activation patterns in the frontal and parietal regions also contained category-relevant information that was differentially sensitive to various forms of variability. The robustness of neural representations of tone category in a distributed fronto-temporoparietal network is associated with trial-by-trial decision-making parameters. These findings support a hybrid model involving a representational core within the STG that operates dynamically within an extensive frontoparietal network to support the representation and categorization of linguistic pitch patterns.
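The evidence-accumulation component invoked here can be illustrated with a toy bounded random walk; this is a generic drift-diffusion-style sketch under assumed parameters, not the specific model fitted in the study:

```python
import numpy as np

def accumulate_to_bound(drift, bound=1.0, noise=0.1, dt=0.01,
                        max_steps=10_000, rng=None):
    """Simulate one categorization trial: noisy evidence drifts toward a
    category bound; returns (choice, reaction_time_in_seconds)."""
    rng = rng if rng is not None else np.random.default_rng()
    evidence, step = 0.0, 0
    while abs(evidence) < bound and step < max_steps:
        # deterministic drift plus diffusion noise scaled for the time step
        evidence += drift * dt + noise * np.sqrt(dt) * rng.normal()
        step += 1
    return (1 if evidence >= bound else 0), step * dt

# A clear token (strong positive drift) yields a fast, consistent choice
choice, rt = accumulate_to_bound(drift=2.0, rng=np.random.default_rng(0))
```

In this framework, acoustic variability (e.g., an unfamiliar talker or a less salient token) would correspond to a weaker drift rate, producing the slower and less certain decisions that trial-by-trial model parameters index.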
Affiliation(s)
- Gangyi Feng
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
- Zhenzhong Gan
- Center for the Study of Applied Psychology and School of Psychology, South China Normal University, Guangzhou 510631, China
- Fernando Llanos
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA 15260, United States
- Danting Meng
- Center for the Study of Applied Psychology and School of Psychology, South China Normal University, Guangzhou 510631, China
- Suiping Wang
- Center for the Study of Applied Psychology and School of Psychology, South China Normal University, Guangzhou 510631, China; Guangdong Provincial Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou 510631, China
- Patrick C M Wong
- Department of Linguistics and Modern Languages, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China; Brain and Mind Institute, The Chinese University of Hong Kong, Shatin, N.T., Hong Kong SAR, China
- Bharath Chandrasekaran
- Department of Communication Science and Disorders, School of Health and Rehabilitation Sciences, University of Pittsburgh, Pittsburgh, PA 15260, United States
242
Hula WD, Panesar S, Gravier ML, Yeh FC, Dresang HC, Dickey MW, Fernandez-Miranda JC. Structural white matter connectometry of word production in aphasia: an observational study. Brain 2020; 143:2532-2544. [PMID: 32705146 DOI: 10.1093/brain/awaa193] [Citation(s) in RCA: 35] [Impact Index Per Article: 8.8] [Received: 02/06/2020] [Revised: 04/12/2020] [Accepted: 04/22/2020] [Indexed: 11/15/2022] Open
Abstract
While current dual-stream neurocognitive models of language function have coalesced around the view that distinct neuroanatomical networks subserve semantic and phonological processing, respectively, the specific white matter components of these networks remain a matter of debate. To inform this debate, we investigated relationships between structural white matter connectivity and word production in a cross-sectional study of 42 participants with aphasia due to unilateral left hemisphere stroke. Specifically, we reconstructed a local connectome matrix for each participant from diffusion spectrum imaging data and regressed these matrices on indices of semantic and phonological ability derived from their responses to a picture-naming test and a computational model of word production. These connectometry analyses indicated that both dorsally located (arcuate fasciculus) and ventrally located (inferior fronto-occipital, uncinate, and middle longitudinal fasciculi) tracts were associated with semantic ability, while associations with phonological ability were more dorsally situated, including the arcuate and middle longitudinal fasciculi. Associations with limbic pathways including the posterior cingulum bundle and the fornix were also found. All analyses controlled for total lesion volume, and all results showing positive associations obtained false discovery rates < 0.05. These results challenge dual-stream accounts that deny a role for the arcuate fasciculus in semantic processing, and for ventral-stream pathways in language production. They also illuminate limbic contributions to both semantic and phonological processing for word production.
Affiliation(s)
- William D Hula
- Geriatric Research, Education, and Clinical Center and Audiology and Speech Pathology Service, VA Pittsburgh Healthcare System, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Sandip Panesar
- Department of Neurosurgery, Stanford University, Palo Alto, CA, USA
- Michelle L Gravier
- Department of Speech, Language, and Hearing Sciences, California State East Bay, Hayward, CA, USA
- Fang-Cheng Yeh
- Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Haley C Dresang
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
- Michael Walsh Dickey
- Geriatric Research, Education, and Clinical Center and Audiology and Speech Pathology Service, VA Pittsburgh Healthcare System, Pittsburgh, PA, USA; Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, USA
243
Dietziker J, Staib M, Frühholz S. Neural competition between concurrent speech production and other speech perception. Neuroimage 2020; 228:117710. [PMID: 33385557 DOI: 10.1016/j.neuroimage.2020.117710] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Received: 10/03/2020] [Revised: 11/28/2020] [Accepted: 12/19/2020] [Indexed: 10/22/2022] Open
Abstract
Understanding others' speech while individuals simultaneously produce speech utterances implies neural competition and requires specific mechanisms for a neural resolution given that previous studies proposed opposing signal dynamics for both processes in the auditory cortex (AC). Here, we used neuroimaging in humans to investigate this neural competition by lateralized stimulations with other speech samples and ipsilateral or contralateral lateralized feedback of actively produced self speech utterances in the form of various speech vowels. In experiment 1, we show, first, that others' speech classifications during active self speech lead to activity in the planum temporale (PTe) when both self and other speech samples were presented together to only the left or right ear. The contralateral PTe also seemed to indifferently respond to single self and other speech samples. Second, specific activity in the left anterior superior temporal cortex (STC) was found during dichotic stimulations (i.e. self and other speech presented to separate ears). Unlike previous studies, this left anterior STC activity supported self speech rather than other speech processing. Furthermore, right mid and anterior STC was more involved in other speech processing. These results signify specific mechanisms for self and other speech processing in the left and right STC beyond a more general speech processing in PTe. Third, other speech recognition in the context of listening to recorded self speech in experiment 2 led to largely symmetric activity in STC and additionally in inferior frontal subregions. The latter was previously reported to be generally relevant for other speech perception and classification, but we found frontal activity only when other speech classification was challenged by recorded but not by active self speech samples.
Altogether, unlike the brain networks formerly established for other speech perception without competing input, active self speech during other speech perception seemingly leads to a neural reordering, functional reassignment, and unusual lateralization of AC and frontal brain activations.
Affiliation(s)
- Joris Dietziker
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland.
- Matthias Staib
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland.
- Sascha Frühholz
- Cognitive and Affective Neuroscience Unit, University of Zurich, Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich and ETH Zurich, Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution (ISLE), University of Zurich, Switzerland; Department of Psychology, University of Oslo, Norway.
|
244
|
Roth RH, Ding JB. From Neurons to Cognition: Technologies for Precise Recording of Neural Activity Underlying Behavior. BME FRONTIERS 2020; 2020:7190517. [PMID: 37849967 PMCID: PMC10521756 DOI: 10.34133/2020/7190517] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 10/27/2020] [Indexed: 10/19/2023] Open
Abstract
Understanding how brain activity encodes information and controls behavior is a long-standing question in neuroscience. This complex problem requires converging efforts from neuroscience and engineering, including technological solutions to perform high-precision and large-scale recordings of neuronal activity in vivo as well as unbiased methods to reliably measure and quantify behavior. Thanks to advances in genetics, molecular biology, engineering, and neuroscience, in recent decades, a variety of optical imaging and electrophysiological approaches for recording neuronal activity in awake animals have been developed and widely applied in the field. Moreover, sophisticated computer vision and machine learning algorithms have been developed to analyze animal behavior. In this review, we provide an overview of the current state of technology for neuronal recordings with a focus on optical and electrophysiological methods in rodents. In addition, we discuss areas that future technological development will need to cover in order to further our understanding of the neural activity underlying behavior.
Affiliation(s)
- Richard H Roth
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA.
- Jun B Ding
- Department of Neurosurgery, Stanford University, Stanford, CA 94305, USA; Department of Neurology and Neurological Sciences, Stanford University, Stanford, CA 94305, USA.
|
245
|
Tremblay P, Brisson V, Deschamps I. Brain aging and speech perception: Effects of background noise and talker variability. Neuroimage 2020; 227:117675. [PMID: 33359849 DOI: 10.1016/j.neuroimage.2020.117675] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/24/2020] [Revised: 12/15/2020] [Accepted: 12/17/2020] [Indexed: 10/22/2022] Open
Abstract
Speech perception can be challenging, especially for older adults. Despite the importance of speech perception in social interactions, the mechanisms underlying these difficulties remain unclear and treatment options are scarce. Several studies have suggested that decline within cortical auditory regions may be a hallmark of these difficulties, but a growing number of studies have reported decline in regions beyond the auditory processing network, including regions involved in speech processing and executive control, suggesting a potentially diffuse underlying neural disruption; no consensus yet exists regarding the underlying dysfunctions. To address this issue, we conducted two experiments investigating age differences in speech perception under manipulations of background noise and talker variability, two factors known to be detrimental to speech perception. In Experiment 1, we examined the relationship between speech perception, hearing, and auditory attention in 88 healthy participants aged 19 to 87 years. In Experiment 2, we examined cortical thickness and BOLD signal using magnetic resonance imaging (MRI) and related these measures to speech perception performance using a simple mediation approach in 32 participants from Experiment 1. Our results show that, even after accounting for hearing thresholds and two measures of auditory attention, speech perception declined significantly with age. Age-related decline in speech perception in noise was associated with thinner cortex in auditory and speech processing regions (including the superior temporal cortex, ventral premotor cortex, and inferior frontal gyrus) as well as in regions involved in executive control (including the dorsal anterior insula, the anterior cingulate cortex, and the medial frontal cortex).
Further, our results show that speech perception performance was associated with a reduced brain response in the right superior temporal cortex in older compared with younger adults, and with an increased response to noise in the left anterior temporal cortex in older adults. Talker variability was not associated with different activation patterns in older compared with younger adults. Together, these results support the notion of a diffuse rather than a focal dysfunction underlying speech-perception-in-noise difficulties in older adults.
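The "simple mediation approach" mentioned in this abstract is not detailed here; a minimal sketch of the standard indirect-effect (a·b) computation via two ordinary least-squares fits, with hypothetical variable names (e.g., predictor age, mediator cortical thickness, outcome performance), might look like:

```python
import numpy as np

def simple_mediation(x, m, y):
    """Indirect effect a*b from two OLS fits:
    m = i1 + a*x  and  y = i2 + c'*x + b*m.
    Returns (a*b, c'): the mediated and direct effects of x on y."""
    x, m, y = (np.asarray(v, dtype=float) for v in (x, m, y))
    # path a: regress the mediator on the predictor
    a = np.linalg.lstsq(np.column_stack([np.ones_like(x), x]), m, rcond=None)[0][1]
    # paths c' and b: regress the outcome on predictor and mediator together
    coefs = np.linalg.lstsq(np.column_stack([np.ones_like(x), x, m]), y, rcond=None)[0]
    return a * coefs[2], coefs[1]
```

In a full analysis the indirect effect would additionally be tested for significance (e.g., by bootstrapping); this sketch only shows the point estimate.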
Affiliation(s)
- Pascale Tremblay
- CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada.
- Valérie Brisson
- CERVO Brain Research Center, Québec City, QC, Canada; Université Laval, Département de réadaptation, Québec City, QC, Canada.
|
246
|
Meekings S, Scott SK. Error in the Superior Temporal Gyrus? A Systematic Review and Activation Likelihood Estimation Meta-Analysis of Speech Production Studies. J Cogn Neurosci 2020; 33:422-444. [PMID: 33326327 DOI: 10.1162/jocn_a_01661] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Evidence for perceptual processing in models of speech production is often drawn from investigations in which the sound of a talker's voice is altered in real time to induce "errors." Methods of acoustic manipulation vary but are assumed to engage the same neural network and psychological processes. This paper aims to review fMRI and PET studies of altered auditory feedback and assess the strength of the evidence these studies provide for a speech error correction mechanism. Studies included were functional neuroimaging studies of speech production in neurotypical adult humans, using natural speech errors or one of three predefined speech manipulation techniques (frequency altered feedback, delayed auditory feedback, and masked auditory feedback). Seventeen studies met the inclusion criteria. In a systematic review, we evaluated whether each study (1) used an ecologically valid speech production task, (2) controlled for auditory activation caused by hearing the perturbation, (3) statistically controlled for multiple comparisons, and (4) measured behavioral compensation correlating with perturbation. None of the studies met all four criteria. We then conducted an activation likelihood estimation meta-analysis of brain coordinates from 16 studies that reported brain responses to manipulated over unmanipulated speech feedback, using the GingerALE toolbox. These foci clustered in bilateral superior temporal gyri, anterior to cortical fields typically linked to error correction. Within the limits of our analysis, we conclude that existing neuroimaging evidence is insufficient to determine whether error monitoring occurs in the posterior superior temporal gyrus regions proposed by models of speech production.
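GingerALE's actual algorithm uses sample-size-dependent Gaussian kernels in MNI/Talairach space and permutation-based significance thresholding, none of which is reproduced here. Purely as an illustration of the ALE principle (reported foci smoothed into per-study probability maps, then combined as a union of probabilities), a toy sketch on an integer voxel grid might look like:

```python
import numpy as np

def modeled_activation(shape, foci, sigma):
    """Per-study modeled-activation map: voxel-wise maximum over
    Gaussian blobs centred on that study's reported foci."""
    grid = np.indices(shape).reshape(3, -1).T      # all voxel coordinates
    ma = np.zeros(shape)
    for focus in foci:
        d2 = ((grid - np.asarray(focus)) ** 2).sum(axis=1).reshape(shape)
        ma = np.maximum(ma, np.exp(-d2 / (2.0 * sigma ** 2)))
    return ma

def ale_map(shape, studies, sigma=2.0):
    """ALE score: probability that at least one study activates a
    voxel, i.e. 1 - prod_i(1 - MA_i) across per-study maps."""
    p_not_active = np.ones(shape)
    for foci in studies:
        p_not_active *= 1.0 - modeled_activation(shape, foci, sigma)
    return 1.0 - p_not_active
```

Voxels where foci from multiple studies converge receive scores near 1, which is why the clustering of foci in bilateral superior temporal gyri emerges from such maps.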
|
247
|
Zhang F, Hua B, Wang T, Wang M, Ding ZX, Ding JR. Abnormal amplitude of spontaneous low-frequency fluctuation in children with growth hormone deficiency: A resting-state functional magnetic resonance imaging study. Neurosci Lett 2020; 742:135546. [PMID: 33290838 DOI: 10.1016/j.neulet.2020.135546] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2020] [Revised: 11/11/2020] [Accepted: 11/28/2020] [Indexed: 10/22/2022]
Abstract
Growth hormone deficiency (GHD) is a developmental disorder caused by the partial or complete deficiency of growth hormone secreted by the pituitary gland, or of its receptor. Patients with GHD are characterized by short stature, slow growth, and certain cognitive and behavioral abnormalities. Previous behavioral and neuroimaging studies indicate that GHD might affect the brain functional activity associated with cognitive and behavioral abilities. We thus investigated spontaneous neural activity in children with GHD using amplitude of low-frequency fluctuation (ALFF) analysis. ALFF was calculated from resting-state functional magnetic resonance imaging (rs-fMRI) data in 26 children with GHD and 15 age- and sex-matched healthy controls (HCs). Comparative analysis revealed that ALFF in the right lingual gyrus and angular gyrus was significantly increased, while ALFF in the right dorsolateral superior frontal gyrus, the left postcentral gyrus, the superior parietal gyrus, and the middle temporal gyrus was significantly decreased in children with GHD relative to HCs. These findings support the presence of abnormal brain functional activity in children with GHD, which may account for abnormal cognition and behavior such as aggression, somatic complaints, attention deficits, and language withdrawal. This study provides imaging evidence for future studies on the pathophysiological mechanisms of abnormal behavior and cognition in children with GHD.
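ALFF itself is a simple spectral quantity. As a per-voxel sketch only (not the authors' full pipeline, which would also include preprocessing and normalization steps such as mALFF/zALFF), and assuming the conventional 0.01-0.08 Hz band:

```python
import numpy as np

def alff(ts, tr, band=(0.01, 0.08)):
    """Mean amplitude of the single-sided spectrum of a voxel time
    series within the low-frequency band (Hz); `tr` is the fMRI
    repetition time in seconds."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()                       # remove the DC component
    freqs = np.fft.rfftfreq(ts.size, d=tr)    # frequency axis
    amp = np.abs(np.fft.rfft(ts)) / ts.size   # amplitude spectrum
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return amp[mask].mean()
```

A slow 0.05 Hz oscillation sampled at TR = 2 s yields a much higher ALFF than an equally strong 0.2 Hz oscillation, since only the former falls inside the band.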
Affiliation(s)
- Fanyu Zhang
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong, China; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, China.
- Bo Hua
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong, China; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, China.
- Tengfei Wang
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong, China; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, China.
- Mei Wang
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, China.
- Zhong Xiang Ding
- Department of Radiology, Affiliated Hangzhou First People's Hospital, Zhejiang University School of Medicine, Hangzhou, China.
- Ju-Rong Ding
- Artificial Intelligence Key Laboratory of Sichuan Province, Sichuan University of Science and Engineering, Zigong, China; School of Automation and Information Engineering, Sichuan University of Science and Engineering, Zigong, China.
|
248
|
A unified neurocomputational bilateral model of spoken language production in healthy participants and recovery in poststroke aphasia. Proc Natl Acad Sci U S A 2020; 117:32779-32790. [PMID: 33273118 PMCID: PMC7768768 DOI: 10.1073/pnas.2010193117] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/20/2022] Open
Abstract
Studies of healthy and impaired language have generated many verbally described hypotheses. While these verbal descriptions have advanced our understanding of language processing, some explanations are mutually incompatible, and it is unclear how they work mechanistically. We constructed a neurocomputational bilateral model of spoken language production to simulate a range of phenomena in healthy participants and patients with aphasia simultaneously, including language lateralization, impaired performance after left but not right damage, and hemispheric involvement in plasticity-dependent recovery. The model demonstrates how seemingly contradictory findings can be simulated within a single framework. This provides a coherent mechanistic account of language lateralization and recovery from poststroke aphasia.

Understanding the processes underlying normal, impaired, and recovered language performance has been a long-standing goal for cognitive and clinical neuroscience. Many verbally described hypotheses about language lateralization and recovery have been generated. However, they have not been considered within a single, unified, and implemented computational framework, and the literatures on healthy participants and patients are largely separated. These investigations also span different types of data, including behavioral results and functional MRI brain activations, which augments the challenge for any unified theory. Consequently, many key issues, apparent contradictions, and puzzles remain to be solved. We developed a neurocomputational, bilateral pathway model of spoken language production, designed to provide a unified framework to simulate different types of data from healthy participants and aphasic patients.
The model encapsulates key computational principles (differential computational capacity, emergent division of labor across pathways, experience-dependent plasticity-related recovery) and provides an explanation for the bilateral yet asymmetric lateralization of language in healthy participants, chronic aphasia after left rather than right hemisphere lesions, and the basis of partial recovery in patients. The model provides a formal basis for understanding the relationship between behavioral performance and brain activation. The unified model is consistent with the degeneracy and variable neurodisplacement theories of language recovery, and adds computational insights to these hypotheses regarding the neural machinery underlying language processing and plasticity-related recovery following damage.
|
249
|
Jin D, Qin Z, Yang M, Chen P. A Novel Neural Model With Lateral Interaction for Learning Tasks. Neural Comput 2020; 33:528-551. [PMID: 33253032 DOI: 10.1162/neco_a_01345] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
We propose a novel neural model with lateral interaction for learning tasks. The model consists of two functional fields: an elementary field that extracts features and a high-level field that stores and recognizes patterns. Each field is composed of neurons with lateral interaction, and neurons in different fields are connected according to rules of synaptic plasticity. The model is grounded in current research in cognition and neuroscience, making it more transparent and biologically explainable. We apply the proposed model to data classification and clustering; the corresponding algorithms share similar processes and require no parameter tuning or optimization. Numerical experiments validate that the proposed model is feasible across learning tasks and superior to some state-of-the-art methods, especially in small-sample learning, one-shot learning, and clustering.
Affiliation(s)
- Dequan Jin
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
- Ziyan Qin
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
- Murong Yang
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
- Penghe Chen
- School of Mathematics and Information Science, Guangxi University, 530004, P.R.C.
|
250
|
Shamma S, Patel P, Mukherjee S, Marion G, Khalighinejad B, Han C, Herrero J, Bickel S, Mehta A, Mesgarani N. Learning Speech Production and Perception through Sensorimotor Interactions. Cereb Cortex Commun 2020; 2:tgaa091. [PMID: 33506209 PMCID: PMC7811190 DOI: 10.1093/texcom/tgaa091] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/19/2020] [Revised: 11/19/2020] [Accepted: 11/23/2020] [Indexed: 12/21/2022] Open
Abstract
Action and perception are closely linked in many behaviors, necessitating close coordination between sensory and motor neural processes to achieve well-integrated, smoothly evolving task performance. To investigate the detailed nature of these sensorimotor interactions, and their role in learning and executing the skilled motor task of speaking, we analyzed ECoG recordings of responses in the high-γ band (70-150 Hz) in human subjects while they listened to, spoke, or silently articulated speech. We found elaborate spectrotemporally modulated neural activity projecting in both "forward" (motor-to-sensory) and "inverse" directions between the higher-auditory and motor cortical regions engaged during speaking. Furthermore, mathematical simulations demonstrate a key role for the forward projection in "learning" to control the vocal tract, beyond its commonly postulated predictive role during execution. These results therefore offer a broader view of the functional role of the ubiquitous forward projection as an important ingredient in learning, rather than just control, of skilled sensorimotor tasks.
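The high-γ response (70-150 Hz) analyzed in studies like this one is typically estimated as a band-limited amplitude envelope. A minimal sketch, assuming a zero-phase band-pass filter followed by the Hilbert transform (the authors' exact pipeline is not given in the abstract), might be:

```python
import numpy as np
from scipy.signal import butter, hilbert, sosfiltfilt

def high_gamma_envelope(x, fs, band=(70.0, 150.0)):
    """Zero-phase band-pass to the high-gamma range, then take the
    analytic-signal amplitude (Hilbert envelope) as an instantaneous
    power estimate; `fs` is the sampling rate in Hz."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))
```

Applied to each ECoG channel, the resulting envelope time series is what would then be related to the spectrotemporal structure of the heard or produced speech.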
Affiliation(s)
- Shihab Shamma
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA; Laboratoire des Systèmes Perceptifs, Department des Etudes Cognitive, École Normale Supérieure, PSL University, 75005 Paris, France.
- Prachi Patel
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Shoutik Mukherjee
- Department of Electrical and Computer Engineering, Institute for Systems Research, University of Maryland, College Park, MD 20742, USA.
- Guilhem Marion
- Laboratoire des Systèmes Perceptifs, Department des Etudes Cognitive, École Normale Supérieure, PSL University, 75005 Paris, France.
- Bahar Khalighinejad
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Cong Han
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
- Jose Herrero
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA.
- Stephan Bickel
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA.
- Ashesh Mehta
- Neurosurgery, Hofstra Northwell School of Medicine, Manhasset, NY, USA; The Feinstein Institutes for Medical Research, Manhasset, NY 11030, USA.
- Nima Mesgarani
- Department of Electrical Engineering, Columbia University, New York, NY 10027, USA; Mortimer B Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA.
|