1. Hakonen M, Dahmani L, Lankinen K, Ren J, Barbaro J, Blazejewska A, Cui W, Kotlarz P, Li M, Polimeni JR, Turpin T, Uluç I, Wang D, Liu H, Ahveninen J. Individual connectivity-based parcellations reflect functional properties of human auditory cortex. bioRxiv 2024:2024.01.20.576475. PMID: 38293021; PMCID: PMC10827228; DOI: 10.1101/2024.01.20.576475.
Abstract
Neuroimaging studies of the functional organization of human auditory cortex have focused on group-level analyses to identify tendencies that represent the typical brain. Here, we mapped auditory areas of the human superior temporal cortex (STC) in 30 participants by combining functional network analysis and 1-mm isotropic resolution 7T functional magnetic resonance imaging (fMRI). Two resting-state fMRI sessions and one or two auditory and audiovisual speech localizer sessions were collected on 3-4 separate days. We generated a set of functional network-based parcellations from these data. Solutions with 4, 6, and 11 networks were selected for closer examination based on local maxima of Dice and Silhouette values. The resulting parcellation of auditory cortices showed high intraindividual reproducibility both between resting-state sessions (Dice coefficient: 69-78%) and between resting-state and task sessions (Dice coefficient: 62-73%), demonstrating that auditory areas in STC can be reliably segmented into functional subareas. The interindividual variability was significantly larger than the intraindividual variability (Dice coefficient: 57-68%, p<0.001), indicating that the parcellations also captured meaningful interindividual variability. The individual-specific parcellations yielded the highest alignment with task response topographies, suggesting that individual variability in parcellations reflects individual variability in auditory function. Connectional homogeneity within networks was also highest for the individual-specific parcellations. Furthermore, the similarity of the functional parcellations was not explainable by the similarity of macroanatomical properties of the auditory cortex. Our findings suggest that individual-level parcellations capture meaningful idiosyncrasies in auditory cortex organization.
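The Dice coefficient used above to quantify parcellation overlap is straightforward to compute from two label maps. Below is a minimal sketch (Python/NumPy); the simulated vertex labels and network count are stand-ins for real individual parcellations, not the study's data or code.

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def parcellation_dice(labels1, labels2, n_networks):
    """Mean Dice over networks for two vertex-wise label maps."""
    return float(np.mean([dice(labels1 == k, labels2 == k)
                          for k in range(1, n_networks + 1)]))

# Hypothetical example: an 11-network solution on 10,000 cortical vertices,
# with 20% of labels perturbed to mimic session-to-session variability.
rng = np.random.default_rng(0)
session1 = rng.integers(1, 12, size=10_000)
session2 = session1.copy()
flip = rng.random(10_000) < 0.2
session2[flip] = rng.integers(1, 12, size=flip.sum())
print(f"mean Dice: {parcellation_dice(session1, session2, 11):.2f}")
```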
Affiliations
- M Hakonen, L Dahmani, K Lankinen, A Blazejewska, I Uluç, D Wang, J Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- J Barbaro, P Kotlarz, T Turpin: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- J Ren, W Cui, M Li: Division of Brain Sciences, Changping Laboratory, Beijing, China
- J R Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA; Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- H Liu: Division of Brain Sciences, Changping Laboratory, Beijing, China; Biomedical Pioneering Innovation Center (BIOPIC), Peking University, Beijing, China
2. Wu J, Nie S, Li C, Wang X, Peng Y, Shang J, Diao L, Ding H, Si Q, Wang S, Tong R, Li Y, Sun L, Zhang J. Sound-localization-related activation and functional connectivity of dorsal auditory pathway in relation to demographic, cognitive, and behavioral characteristics in age-related hearing loss. Front Neurosci 2024;18:1353413. PMID: 38562303; PMCID: PMC10982313; DOI: 10.3389/fnins.2024.1353413.
Abstract
Background: Patients with age-related hearing loss (ARHL) often struggle to track and locate sound sources, but the neural signature of these impairments remains unclear.
Materials and methods: Using a passive listening task with stimuli from five horizontal directions during functional magnetic resonance imaging, we defined functional regions of interest (ROIs) of the auditory "where" pathway based on previous literature and on data from young normal-hearing listeners (n = 20). We then investigated associations of the demographic, cognitive, and behavioral features of sound localization with task-based activation and connectivity of the ROIs in ARHL patients (n = 22).
Results: Increased activation of high-level regions, such as the premotor cortex and inferior parietal lobule, was associated with higher localization accuracy and better cognitive function. Moreover, increased connectivity between the left planum temporale and left superior frontal gyrus was associated with higher localization accuracy in ARHL. Increased connectivity between the right primary auditory cortex and right middle temporal gyrus, the right premotor cortex and left anterior cingulate cortex, and the right planum temporale and left lingual gyrus in ARHL was associated with lower localization accuracy. Among the ARHL patients, task-dependent brain activation and connectivity of certain ROIs were associated with education, duration of hearing loss, and cognitive function.
Conclusion: Consistent with the sensory deprivation hypothesis, sound source identification in ARHL, which requires advanced processing in high-level cortex, is impaired, whereas right-left discrimination, which relies on the primary sensory cortex, is compensated, with a tendency to recruit additional cognitive and attentional resources in the auditory sensory cortex. Overall, this study expands our understanding of the neural mechanisms contributing to the sound localization deficits associated with ARHL and suggests that these measures may serve as potential imaging biomarkers for investigating and predicting anomalous sound localization.
Affiliations
- Junzhi Wu, Shuai Nie, Xing Wang, Ye Peng, Linan Diao, Juan Zhang: Department of Otorhinolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Chunlin Li, Renjie Tong, Yutang Li, Liwei Sun: School of Biomedical Engineering, Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing, China
- Jiaqi Shang: Center of Clinical Hearing, Shandong Second Provincial General Hospital, Jinan, Shandong, China
- Hongping Ding: College of Special Education, Binzhou Medical University, Yantai, Shandong, China
- Qian Si: School of Cyber Science and Technology, Beihang University, Beijing, China
- Songjian Wang: Key Laboratory of Otolaryngology, Head and Neck Surgery, Ministry of Education, Beijing Institute of Otolaryngology, Beijing, China; Department of Otolaryngology, Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
3. Rolls ET, Rauschecker JP, Deco G, Huang CC, Feng J. Auditory cortical connectivity in humans. Cereb Cortex 2023;33:6207-6227. PMID: 36573464; PMCID: PMC10422925; DOI: 10.1093/cercor/bhac496.
Abstract
To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants and complemented with functional connectivity and diffusion tractography.
1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4.
2. A4 has connectivity to the anterior temporal lobe TA2 and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking and what is being said. Consistent with this being a "what" ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca's area, especially BA45.
3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory "where" stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.
Affiliations
- Edmund T Rolls: Oxford Centre for Computational Neuroscience, Oxford, UK; Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Josef P Rauschecker: Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA; Institute for Advanced Study, Technical University, Munich, Germany
- Gustavo Deco: Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain; Institució Catalana de la Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang: Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Jianfeng Feng: Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK; Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
4. Sun L, Li C, Wang S, Si Q, Lin M, Wang N, Sun J, Li H, Liang Y, Wei J, Zhang X, Zhang J. Left frontal eye field encodes sound locations during passive listening. Cereb Cortex 2023;33:3067-3079. PMID: 35858212; DOI: 10.1093/cercor/bhac261.
Abstract
Previous studies have reported that the auditory cortices (AC) are mostly activated by sounds coming from the contralateral hemifield. As a result, sound locations could be encoded by integrating opposite activations from the two sides of the AC ("opponent hemifield coding"). However, the human auditory "where" pathway also includes a series of parietal and prefrontal regions, and it was unknown how sound locations are represented in those high-level regions during passive listening. Here, we investigated the neural representation of sound locations in high-level regions by voxel-level tuning analysis, region-of-interest-level (ROI-level) laterality analysis, and ROI-level multivariate pattern analysis. Functional magnetic resonance imaging data were collected while participants listened passively to sounds from various horizontal locations. We found that opponent hemifield coding of sound locations existed not only in the AC but also spanned the intraparietal sulcus, superior parietal lobule, and frontal eye field (FEF). Furthermore, multivariate pattern representation of sound locations in both hemifields could be observed in left AC, right AC, and left FEF. Overall, our results demonstrate that the left FEF, a high-level region along the auditory "where" pathway, encodes sound locations during passive listening in two ways: a univariate opponent-hemifield activation representation and a multivariate full-field activation pattern representation.
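The opponent-hemifield read-out described here can be illustrated with a small simulation: each hemisphere's response is modeled as stronger for contralateral sources, and the hemifield is recovered from the sign of the right-minus-left difference. The response model below is a deliberately crude assumption, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
azimuths = np.repeat([-90, -45, 45, 90], 50)   # degrees; negative = left hemifield
# Assumed response model: each AC responds mainly to contralateral sources.
left_ac = (azimuths > 0).astype(float) + 0.5 * rng.standard_normal(azimuths.size)
right_ac = (azimuths < 0).astype(float) + 0.5 * rng.standard_normal(azimuths.size)

opponent = right_ac - left_ac                  # positive when the source is left
accuracy = np.mean((opponent > 0) == (azimuths < 0))
print(f"hemifield read-out accuracy from the opponent signal: {accuracy:.2f}")
```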
Affiliations
- Liwei Sun, Chunlin Li, Qian Si, Ying Liang, Jing Wei, Xu Zhang: School of Biomedical Engineering, Capital Medical University, Beijing 100069, China; Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Capital Medical University, Beijing 100069, China; Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Songjian Wang, Meng Lin: School of Biomedical Engineering, Capital Medical University, Beijing 100069, China; Beijing Key Laboratory of Fundamental Research on Biomechanics in Clinical Application, Capital Medical University, Beijing 100069, China
- Ningyu Wang, Juan Zhang: Department of Otorhinolaryngology, Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing 100020, China
- Jun Sun, Hongjun Li: Department of Radiology, Beijing Youan Hospital, Capital Medical University, Beijing 100069, China
5. Wang Y, Lu L, Zou G, Zheng L, Qin L, Zou Q, Gao JH. Disrupted neural tracking of sound localization during non-rapid eye movement sleep. Neuroimage 2022;260:119490. PMID: 35853543; DOI: 10.1016/j.neuroimage.2022.119490.
Abstract
Spatial hearing in humans is a high-level auditory process that is crucial for rapidly localizing sounds in the environment. Both neurophysiological models from animal studies and neuroimaging evidence from awake human subjects suggest that the localization of auditory objects is supported mainly by the posterior auditory cortex. However, whether this cognitive process is preserved during sleep remains unclear. To fill this research gap, we investigated the sleeping brain's capacity to identify sound locations by recording simultaneous electroencephalographic (EEG) and magnetoencephalographic (MEG) signals during wakefulness and non-rapid eye movement (NREM) sleep in human subjects. Using a frequency-tagging paradigm, we presented subjects with a syllable sequence at 5 Hz in which the sound location changed every three syllables, producing a sound-localization shift at 1.67 Hz. The EEG and MEG signals were used for sleep scoring and neural tracking analyses, respectively. Neural tracking responses at 5 Hz, reflecting basic auditory processing, were observed during both wakefulness and NREM sleep, although the responses during sleep were weaker than those during wakefulness. Cortical responses at 1.67 Hz, corresponding to the sound location change, were observed during wakefulness regardless of attention to the stimuli but vanished during NREM sleep. These results indicate for the first time that sleep preserves basic auditory processing but disrupts the higher-order brain function of sound localization.
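A minimal sketch of the frequency-tagging logic, with a simulated signal standing in for MEG data: responses that track the syllable stream and the location changes show spectral peaks at 5 Hz and 5/3 ≈ 1.67 Hz, respectively.

```python
import numpy as np

fs, dur = 200.0, 60.0                       # assumed sampling rate (Hz), duration (s)
t = np.arange(0, dur, 1 / fs)
sig = (np.sin(2 * np.pi * 5.0 * t)              # tracking of the 5 Hz syllable stream
       + 0.5 * np.sin(2 * np.pi * 5.0 / 3 * t)  # location change every 3 syllables
       + np.random.default_rng(2).standard_normal(t.size))  # noise

spec = np.abs(np.fft.rfft(sig)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f0 in (5.0, 5.0 / 3):                   # the two tagged frequencies
    i = np.argmin(np.abs(freqs - f0))
    print(f"{f0:.2f} Hz amplitude: {spec[i]:.3f}")
```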
Affiliations
- Yan Wang: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; Chinese Institute for Brain Research, Beijing 102206, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Lingxi Lu: Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China
- Guangyuan Zou, Li Zheng, Lang Qin, Qihong Zou: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China
- Jia-Hong Gao: Center for MRI Research, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing 100871, China; PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China; Beijing City Key Lab for Medical Physics and Engineering, Institution of Heavy Ion Physics, School of Physics, Peking University, Beijing 100871, China; National Biomedical Imaging Center, Peking University, Beijing 100871, China
6. Josef-Golubić S. Triple model of auditory sensory processing: a novel gating stream directly links primary auditory areas to executive prefrontal cortex. Acta Clin Croat 2021;59:721-728. PMID: 34285443; PMCID: PMC8253058; DOI: 10.20471/acc.2020.59.04.19.
Abstract
The generally accepted model of sensory processing of visual and auditory stimuli assumes two major parallel processing streams, ventral and dorsal, comprising functionally and anatomically distinct but interacting processes: the ventral stream supports stimulus identification, while the dorsal stream recognizes the stimulus's spatial location and serves sensorimotor integration. However, recent studies suggest the existence of a third, very fast sensory processing pathway, a gating stream that directly links the primary auditory cortices to the executive prefrontal cortex within the first 50 milliseconds after stimulus presentation, bypassing the hierarchical structure of the ventral and dorsal pathways. The gating stream propagates the sensory gating phenomenon, a basic protective mechanism that prevents irrelevant, repeated information from undergoing recurrent sensory processing. The goal of the present paper is to introduce a novel 'three-stream' model of auditory processing that includes this fast gating stream alongside the well-established dorsal and ventral sensory processing pathways. Impairments of sensory processing along the gating stream are strongly involved in pathophysiological sensory processing in Alzheimer's disease and could underlie numerous neuropsychiatric disorders and diseases linked to pathological sensory gating inhibition, such as schizophrenia, post-traumatic stress disorder, bipolar disorder, and attention deficit hyperactivity disorder.
Affiliations
- Sanja Josef-Golubić: Department of Physics, Faculty of Science, University of Zagreb, Zagreb, Croatia
7. Ren J, Hubbard CS, Ahveninen J, Cui W, Li M, Peng X, Luan G, Han Y, Li Y, Shinn AK, Wang D, Li L, Liu H. Dissociable Auditory Cortico-Cerebellar Pathways in the Human Brain Estimated by Intrinsic Functional Connectivity. Cereb Cortex 2021;31:2898-2912. PMID: 33497437; PMCID: PMC8107796; DOI: 10.1093/cercor/bhaa398.
Abstract
The cerebellum, a structure historically associated with motor control, has more recently been implicated in several higher-order auditory-cognitive functions. However, the exact functional pathways that mediate cerebellar influences on auditory cortex (AC) remain unclear. Here, we sought to identify auditory cortico-cerebellar pathways based on intrinsic functional connectivity magnetic resonance imaging. In contrast to previous connectivity studies that principally consider the AC as a single functionally homogenous unit, we mapped the cerebellar connectivity across different parts of the AC. Our results reveal that auditory subareas demonstrating different levels of interindividual functional variability are functionally coupled with distinct cerebellar regions. Moreover, auditory and sensorimotor areas show divergent cortico-cerebellar connectivity patterns, although sensorimotor areas proximal to the AC are often functionally grouped with the AC in previous connectivity-based network analyses. Lastly, we found that the AC can be functionally segmented into highly similar subareas based on either cortico-cerebellar or cortico-cortical functional connectivity, suggesting the existence of multiple parallel auditory cortico-cerebellar circuits that involve different subareas of the AC. Overall, the present study revealed multiple auditory cortico-cerebellar pathways and provided a fine-grained map of AC subareas, indicative of the critical role of the cerebellum in auditory processing and multisensory integration.
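As one illustration of connectivity-based segmentation of the kind described here, the sketch below clusters simulated auditory-cortex vertices by their cerebellar functional-connectivity fingerprints. The array sizes, the two-cluster structure, and the use of k-means are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(8)
n_ac_vertices, n_cb_targets = 500, 100
# Simulated AC-to-cerebellum connectivity fingerprints with two crude subareas.
fc = rng.standard_normal((n_ac_vertices, n_cb_targets))
fc[:250, :50] += 1.0

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(fc)
print("cluster sizes:", np.bincount(labels))
```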
Affiliations
- Jianxun Ren: National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Catherine S Hubbard, Xiaolong Peng: Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
- Jyrki Ahveninen, Meiling Li, Danhong Wang: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA
- Weigang Cui: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA; Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA; Department of Automation Sciences and Electrical Engineering, Beihang University, 100083 Beijing, China
- Guoming Luan: Department of Neurosurgery, Comprehensive Epilepsy Center, Sanbo Brain Hospital, Capital Medical University, 100093 Beijing, China
- Ying Han: Department of Neurology, Xuanwu Hospital of Capital Medical University, 100053 Beijing, China
- Yang Li: Department of Automation Sciences and Electrical Engineering, Beihang University, 100083 Beijing, China
- Ann K Shinn: Psychotic Disorders Division, McLean Hospital, Harvard Medical School, Belmont, MA 02478, USA
- Luming Li: National Engineering Laboratory for Neuromodulation, School of Aerospace Engineering, Tsinghua University, 100084 Beijing, China; Precision Medicine & Healthcare Research Center, Tsinghua-Berkeley Shenzhen Institute, Tsinghua University, 518055 Shenzhen, China; IDG/McGovern Institute for Brain Research at Tsinghua University, 100084 Beijing, China
- Hesheng Liu: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA 02129, USA; Department of Neuroscience, Medical University of South Carolina, Charleston, SC 29425, USA
8. Rapid computation of TMS-induced E-fields using a dipole-based magnetic stimulation profile approach. Neuroimage 2021;237:118097. PMID: 33940151; PMCID: PMC8353625; DOI: 10.1016/j.neuroimage.2021.118097.
Abstract
Background: TMS neuronavigation with on-line display of the induced electric field (E-field) has the potential to improve quantitative targeting and dosing of stimulation, but present commercially available solutions are limited by simplified approximations.
Objective: To develop a near real-time method for accurate approximation of TMS-induced E-fields with subject-specific, high-resolution surface-based head models that can be utilized for TMS navigation.
Methods: Magnetic dipoles are placed on a closed surface enclosing an MRI-based head model of the subject to define a set of basis functions for the incident and total E-fields that constitute the subject's Magnetic Stimulation Profile (MSP). Near real-time speed is achieved by recognizing that the total E-field of the coil depends only on the incident E-field and the conductivity boundary geometry. The total E-field for any coil position can be obtained by matching the incident field of the stationary dipole basis set with the incident E-field of the moving coil and applying the same basis coefficients to the total E-field basis functions.
Results: Comparison of the MSP-based approximation with an established TMS solver shows close agreement in the E-field amplitude (relative maximum error around 5%) and the spatial distribution patterns (correlation >98%). Computation of the E-field took ~100 ms on a cortical surface mesh with 120k facets.
Conclusion: The numerical accuracy and speed of the MSP approximation method make it well suited to a wide range of computational tasks, including interactive planning, targeting, dosing, and visualization of the intracranial E-fields for near real-time guidance of coil positioning.
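The basis-matching step in the Methods reduces to simple linear algebra: fit the moving coil's incident field in the precomputed incident-field basis, then apply the same coefficients to the total-field basis. The sketch below uses random matrices as placeholders for the precomputed basis functions; it is not the paper's solver.

```python
import numpy as np

rng = np.random.default_rng(3)
n_mesh, n_basis = 20_000, 200                          # hypothetical sizes
E_inc_basis = rng.standard_normal((n_mesh, n_basis))   # incident-field basis
E_tot_basis = rng.standard_normal((n_mesh, n_basis))   # paired total-field basis

def total_field(e_inc_coil):
    """Fit the coil's incident field in the basis; reuse the coefficients."""
    coef, *_ = np.linalg.lstsq(E_inc_basis, e_inc_coil, rcond=None)
    return E_tot_basis @ coef

# A test coil whose incident field lies exactly in the span of the basis:
true_coef = rng.standard_normal(n_basis)
e_total = total_field(E_inc_basis @ true_coef)
print(np.allclose(e_total, E_tot_basis @ true_coef))   # True
```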
9. Schäfer E, Vedoveli AE, Righetti G, Gamerdinger P, Knipper M, Tropitzsch A, Karnath HO, Braun C, Li Hegner Y. Activities of the Right Temporo-Parieto-Occipital Junction Reflect Spatial Hearing Ability in Cochlear Implant Users. Front Neurosci 2021;15:613101. PMID: 33776632; PMCID: PMC7994335; DOI: 10.3389/fnins.2021.613101.
Abstract
Spatial hearing is critical not only for orienting ourselves in space but also for following a conversation among multiple speakers in a complex sound environment. The hearing of people with severe sensorineural hearing loss can be restored by cochlear implants (CIs), albeit with large outcome variability, and the causes of this variability remain incompletely understood. Despite the CI-based restoration of the peripheral auditory input, central auditory processing might still not function fully. Here we developed a multi-modal repetition suppression (MMRS) paradigm capable of capturing stimulus-property-specific processing, in order to identify the neural correlates of spatial hearing and potential central neural indices useful for the rehabilitation of sound localization in CI users. To this end, 17 normal-hearing and 13 CI participants underwent the MMRS task while their brain activity was recorded with 256-channel electroencephalography (EEG). Participants had to discriminate the location of the probe sound presented from a horizontal array of loudspeakers. The EEG MMRS response following the probe sound was elicited across various brain regions and at different stages of processing. Interestingly, the more similar the differential MMRS response at the right temporo-parieto-occipital (TPO) junction in CI users was to that of the normal-hearing group, the better was the individual CI user's spatial hearing performance. Based on this finding, we suggest that the differential MMRS response at the right TPO junction could serve as a central neural index of intact or impaired sound localization abilities.
Affiliations
- Marlies Knipper: Department of Otolaryngology, Head and Neck Surgery, Tübingen Hearing Research Centre, University of Tübingen, Tübingen, Germany
- Anke Tropitzsch: Comprehensive Cochlear Implant Center, ENT Clinic Tübingen, Tübingen University Hospital, Tübingen, Germany
- Hans-Otto Karnath: Center of Neurology, Division of Neuropsychology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Christoph Braun: MEG Center, University of Tübingen, Tübingen, Germany; CIMeC, Center for Mind/Brain Research, University of Trento, Rovereto, Italy; DiPsCo, Department of Psychology and Cognitive Science, Rovereto, Italy
- Yiwen Li Hegner: MEG Center, University of Tübingen, Tübingen, Germany; Center of Neurology, Department of Neurology and Epileptology, Hertie-Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
10. Correlates of Auditory Decision-Making in Prefrontal, Auditory, and Basal Lateral Amygdala Cortical Areas. J Neurosci 2020;41:1301-1316. PMID: 33303679; DOI: 10.1523/jneurosci.2217-20.2020.
Abstract
Spatial selective listening and auditory choice underlie important processes including attending to a speaker at a cocktail party and knowing how (or whether) to respond. To examine task encoding and the relative timing of potential neural substrates underlying these behaviors, we developed a spatial selective detection paradigm for monkeys, and recorded activity in primary auditory cortex (AC), dorsolateral prefrontal cortex (dlPFC), and the basolateral amygdala (BLA). A comparison of neural responses among these three areas showed that, as expected, AC encoded the side of the cue and target characteristics before dlPFC and BLA. Interestingly, AC also encoded the choice of the monkey before dlPFC and around the time of BLA. Generally, BLA showed weak responses to all task features except the choice. Decoding analyses suggested that errors followed from a failure to encode the target stimulus in both AC and dlPFC, but again, these differences arose earlier in AC. The similarities between AC and dlPFC responses were abolished during passive sensory stimulation with identical trial conditions, suggesting that the robust sensory encoding in dlPFC is contextually gated. Thus, counter to a strictly PFC-driven decision process, in this spatial selective listening task AC neural activity represents the sensory and decision information before dlPFC. Unlike in the visual domain, in this auditory task, the BLA does not appear to be robustly involved in selective spatial processing.

Significance statement: We examined neural correlates of an auditory spatial selective listening task by recording single-neuron activity in behaving monkeys from the amygdala, dorsolateral prefrontal cortex, and auditory cortex. We found that auditory cortex coded spatial cues and choice-related activity before dorsolateral prefrontal cortex or the amygdala. Auditory cortex also had robust delay period activity. Therefore, we found that auditory cortex could support the neural computations that underlie the behavioral processes in the task.
11. Vannson N, Strelnikov K, James CJ, Deguine O, Barone P, Marx M. Evidence of a functional reorganization in the auditory dorsal stream following unilateral hearing loss. Neuropsychologia 2020;149:107683. PMID: 33212140; DOI: 10.1016/j.neuropsychologia.2020.107683.
Abstract
Unilateral hearing loss (UHL) disrupts binaural hearing mechanisms, which impairs sound localization and speech understanding in noisy environments. We conducted an original study using fMRI and psychoacoustic assessments to investigate the relationships between the extent of cortical reorganization across the auditory areas in UHL patients, the severity of the unilateral hearing loss, and the deficit in binaural abilities. Twenty-eight volunteers (14 UHL patients; twenty-two females and six males) were recruited. The brain imaging analysis demonstrated that UHL induces a shift in aural dominance favoring the better ear, with cortical reorganization located in the non-primary auditory areas ipsilateral (same side) to the better ear. This reorganization correlates not only with the severity of hearing loss but also with spatial localization abilities. A regression analysis between brain activity and patients' performance clearly showed that the spatial hearing deficit was linked to a functional alteration of the posterior auditory areas known to process spatial hearing. Altogether, our study reveals that UHL alters the dorsal auditory stream, which is deleterious to spatial hearing.
Affiliations
- Nicolas Vannson: Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Cochlear France SAS, Toulouse, France
- Olivier Deguine, Mathieu Marx: Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France; Service d'Otologie, Otoneurologie et ORL pédiatrique, Hôpital Pierre-Paul Riquet, CHU Toulouse Purpan, France
- Pascal Barone: Brain and Cognition Research Centre, University of Toulouse Paul Sabatier, Toulouse, France; Brain and Cognition Research Centre, CNRS-UMR 5549, Toulouse, France
12. What and where in the auditory systems of sighted and early blind individuals: Evidence from representational similarity analysis. J Neurol Sci 2020;413:116805. PMID: 32259708; DOI: 10.1016/j.jns.2020.116805.
Abstract
Separate ventral and dorsal streams in the auditory system have been proposed to process sound identification and localization, respectively. Despite the popularity of this dual-pathway model, it remains controversial how independent the two neural pathways are and whether visual experience can influence this distinct cortical organizational scheme. In this study, representational similarity analysis (RSA) was used to explore the functional roles of cortical regions within the ventral and dorsal auditory streams of sighted and early blind (EB) participants. We found functionally segregated auditory networks in both groups: the anterior superior temporal gyrus (aSTG) and inferior frontal junction (IFJ) were more related to sound identification, while the posterior superior temporal gyrus (pSTG) and inferior parietal lobe (IPL) preferred sound localization. These findings indicate that visual experience may not influence this functional dissociation and that the human cortex may be organized in a task-specific, modality-independent manner. Meanwhile, partial overlap of spatial and non-spatial auditory information processing was observed, indicating interaction between the two auditory streams. Furthermore, we investigated the effect of visual experience on the neural bases of auditory perception and observed cortical reorganization in EB participants, in whom the middle occipital gyrus was recruited to process auditory information. Our findings thus delineate the distinct cortical networks encoding sound identification and localization, confirm the interaction between the two streams from a multivariate perspective, and suggest that visual experience may not affect the functional specialization of auditory regions.
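A compact sketch of the RSA model comparison used in studies like this one: build a neural representational dissimilarity matrix (RDM) from condition-wise voxel patterns and correlate it with model RDMs for sound identity and location. The patterns, condition ordering (2 identities x 4 locations), and model RDMs below are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(4)
patterns = rng.standard_normal((8, 200))     # 8 conditions x 200 voxels (simulated)
neural_rdm = pdist(patterns, metric="correlation")

# Model RDMs, assuming identity-major condition order: [id0 x 4 locs, id1 x 4 locs].
identity_rdm = pdist(np.repeat([[0], [1]], 4, axis=0))
location_rdm = pdist(np.tile(np.arange(4), 2)[:, None])

for name, model in [("identity", identity_rdm), ("location", location_rdm)]:
    rho, _ = spearmanr(neural_rdm, model)
    print(f"{name} model fit: rho = {rho:.2f}")
```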
13.
Abstract
There are functional and anatomical distinctions between the neural systems involved in the recognition of sounds in the environment and those involved in the sensorimotor guidance of sound production and the spatial processing of sound. Evidence for the separation of these processes has historically come from disparate literatures on the perception and production of speech, music and other sounds. More recent evidence indicates that there are computational distinctions between the rostral and caudal primate auditory cortex that may underlie functional differences in auditory processing. These functional differences may originate from differences in the response times and temporal profiles of neurons in the rostral and caudal auditory cortex, suggesting that computational accounts of primate auditory pathways should focus on the implications of these temporal response differences.
14. Neural correlates of perceptual switching while listening to bistable auditory streaming stimuli. Neuroimage 2020;204:116220. DOI: 10.1016/j.neuroimage.2019.116220.
15. Devos P, Aletta F, Thomas P, Petrovic M, Vander Mynsbrugge T, Van de Velde D, De Vriendt P, Botteldooren D. Designing Supportive Soundscapes for Nursing Home Residents with Dementia. Int J Environ Res Public Health 2019;16(24):4904. PMID: 31817300; PMCID: PMC6950055; DOI: 10.3390/ijerph16244904.
Abstract
Sound and the resulting soundscape are a major component of how the living environment is appraised. Whereas environmental sounds (e.g., outdoor traffic sounds) are often perceived as negative, a soundscape containing, for example, natural sounds can also have a positive effect on health and well-being. This supportive effect of a soundscape is receiving increasing attention for use in practice. This paper addresses the design of a supportive sonic environment for persons with dementia in nursing homes. Starting from a review of key mechanisms relating sonic perception to cognitive deficits and related behavior, a framework is derived for composing a sonic environment for persons with dementia. The proposed framework centers on using acoustic stimuli to influence mood, stimulate the feeling of safety, and trigger a response in a person. These stimuli are intended to be deployed as added sounds in a nursing home to improve the residents' well-being and behavior.
Affiliations
- Paul Devos, Pieter Thomas, Dick Botteldooren: Department of Information Technology, Ghent University, 9052 Ghent, Belgium
- Francesco Aletta: Department of Information Technology, Ghent University, 9052 Ghent, Belgium; Institute for Environmental Design and Engineering, University College London, London WC1H 0NN, UK
- Mirko Petrovic: Department of Internal Medicine and Paediatrics, Ghent University, 9000 Ghent, Belgium
- Tara Vander Mynsbrugge: Department of Occupational Therapy, Artevelde University College, 9000 Ghent, Belgium
- Dominique Van de Velde, Patricia De Vriendt: Department of Occupational Therapy, Artevelde University College, 9000 Ghent, Belgium; Department of Occupational Therapy, Ghent University, 9000 Ghent, Belgium
16.
Abstract
Humans and other animals use spatial hearing to rapidly localize events in the environment. However, neural encoding of sound location is a complex process involving the computation and integration of multiple spatial cues that are not represented directly in the sensory organ (the cochlea). Our understanding of these mechanisms has increased enormously in the past few years. Current research is focused on the contribution of animal models for understanding human spatial audition, the effects of behavioural demands on neural sound location encoding, the emergence of a cue-independent location representation in the auditory cortex, and the relationship between single-source and concurrent location encoding in complex auditory scenes. Furthermore, computational modelling seeks to unravel how neural representations of sound source locations are derived from the complex binaural waveforms of real-life sounds. In this article, we review and integrate the latest insights from neurophysiological, neuroimaging and computational modelling studies of mammalian spatial hearing. We propose that the cortical representation of sound location emerges from recurrent processing taking place in a dynamic, adaptive network of early (primary) and higher-order (posterior-dorsal and dorsolateral prefrontal) auditory regions. This cortical network accommodates changing behavioural requirements and is especially relevant for processing the location of real-life, complex sounds and complex auditory scenes.
17. Representation of Auditory Motion Directions and Sound Source Locations in the Human Planum Temporale. J Neurosci 2019;39:2208-2220. PMID: 30651333; DOI: 10.1523/jneurosci.2289-18.2018.
Abstract
The ability to compute the location and direction of sounds is a crucial perceptual skill to efficiently interact with dynamic environments. How the human brain implements spatial hearing is, however, poorly understood. In our study, we used fMRI to characterize the brain activity of male and female humans listening to sounds moving left, right, up, and down as well as static sounds. Whole-brain univariate results contrasting moving and static sounds varying in their location revealed a robust functional preference for auditory motion in bilateral human planum temporale (hPT). Using independently localized hPT, we show that this region contains information about auditory motion directions and, to a lesser extent, sound source locations. Moreover, hPT showed an axis of motion organization reminiscent of the functional organization of the middle-temporal cortex (hMT+/V5) for vision. Importantly, whereas motion direction and location rely on partially shared pattern geometries in hPT, as demonstrated by successful cross-condition decoding, the responses elicited by static and moving sounds were significantly distinct. Altogether, our results demonstrate that the hPT codes for auditory motion and location but that the underlying neural computation linked to motion processing is more reliable and partially distinct from the one supporting sound source location.

Significance statement: Compared with what we know about visual motion, little is known about how the brain implements spatial hearing. Our study reveals that motion directions and sound source locations can be reliably decoded in the human planum temporale (hPT) and that they rely on partially shared pattern geometries. Our study, therefore, sheds important new light on how computing the location or direction of sounds is implemented in the human auditory cortex by showing that those two computations rely on partially shared neural codes. Furthermore, our results show that the neural representation of moving sounds in hPT follows a "preferred axis of motion" organization, reminiscent of the coding mechanisms typically observed in the occipital middle-temporal cortex (hMT+/V5) region for computing visual motion.
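The cross-condition decoding logic invoked above can be sketched as follows: a classifier trained on patterns evoked by moving sounds is tested on patterns evoked by static sounds; above-chance transfer implies partially shared pattern geometry. The simulated voxel patterns and the linear SVM are illustrative assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(5)
n_per, n_vox = 50, 150
axis = rng.standard_normal(n_vox)            # assumed shared left/right pattern axis

def make_patterns(gain):
    X = np.vstack([gain * s * axis + rng.standard_normal((n_per, n_vox))
                   for s in (-1, 1)])
    y = np.repeat([0, 1], n_per)             # 0 = left, 1 = right
    return X, y

X_motion, y_motion = make_patterns(1.0)      # moving left vs. moving right
X_static, y_static = make_patterns(0.8)      # static left vs. static right

clf = LinearSVC(dual=False).fit(X_motion, y_motion)
print("cross-condition accuracy:", clf.score(X_static, y_static))
```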
18. Raij T, Nummenmaa A, Marin MF, Porter D, Furtak S, Setsompop K, Milad MR. Prefrontal Cortex Stimulation Enhances Fear Extinction Memory in Humans. Biol Psychiatry 2018;84:129-137. PMID: 29246436; PMCID: PMC5936658; DOI: 10.1016/j.biopsych.2017.10.022.
Abstract
Background: Animal fear conditioning studies have illuminated neuronal mechanisms of learned associations between sensory stimuli and fear responses. In rats, brief electrical stimulation of the infralimbic cortex has been shown to reduce conditioned freezing during recall of extinction memory. Here, we translated this finding to humans with magnetic resonance imaging-navigated transcranial magnetic stimulation (TMS).
Methods: Subjects (N = 28) were aversively conditioned to two different cues (day 1). During extinction learning (day 2), TMS was paired with one of the conditioned cues but not the other. TMS parameters were similar to those used in rat infralimbic cortex: brief pulse trains (300 ms at 20 Hz) starting 100 ms after cue onset, four trains in total (28 TMS pulses). TMS was applied to one of two targets in the left frontal cortex, one functionally connected (target 1) and the other unconnected (target 2, control) with a human homologue of the infralimbic cortex in the ventromedial prefrontal cortex. Skin conductance responses were used as an index of conditioned fear.
Results: During extinction recall (day 3), the cue paired with TMS to target 1 showed significantly reduced skin conductance responses, whereas TMS to target 2 had no effect. Further, we built group-level maps that weighted TMS-induced electric fields and diffusion magnetic resonance imaging connectivity estimates with fear level. These maps revealed distinct cortical regions and large-scale networks associated with reduced versus increased fear.
Conclusions: The results showed that spatiotemporally focused TMS may enhance extinction learning and/or consolidation of extinction memory, and they suggested novel cortical areas and large-scale networks for targeting in future studies.
Affiliations
- Tommi Raij: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital/Massachusetts Institute of Technology, Charlestown, Massachusetts; Harvard Medical School, Boston, Massachusetts
- Aapo Nummenmaa, Kawin Setsompop: MGH/MIT/HMS Athinoula A. Martinos Center for Biomedical Imaging, MA, USA; Harvard Medical School, Boston, MA, USA
- Marie-France Marin, Mohammed R. Milad: Harvard Medical School, Boston, MA, USA; MGH Department of Psychiatry, MA, USA
19. Chronometry on Spike-LFP Responses Reveals the Functional Neural Circuitry of Early Auditory Cortex Underlying Sound Processing and Discrimination. eNeuro 2018;5:eN-NWR-0420-17. PMID: 29971252; PMCID: PMC6028825; DOI: 10.1523/eneuro.0420-17.2018.
Abstract
Animals and humans rapidly detect specific features of sounds, but the time courses of the underlying neural responses for different stimulus categories are largely unknown. Furthermore, the intricate functional organization of auditory information processing pathways is poorly understood. Here, we computed neuronal response latencies from simultaneously recorded spike trains and local field potentials (LFPs) along the first two stages of cortical sound processing, primary auditory cortex (A1) and lateral belt (LB), of awake, behaving macaques. Two types of response latency were measured for spike trains as well as LFPs: (1) onset latency, time-locked to the onset of external auditory stimuli; and (2) selection latency, the time taken from stimulus onset to a selective response to a specific stimulus category. Trial-by-trial LFP onset latencies, which predominantly reflect synaptic input arrival, typically preceded spike onset latencies, assumed to be representative of neuronal output, indicating that both areas may receive environmental input signals and relay the information to the next stage. In A1, simple sounds, such as pure tones (PTs), yielded shorter spike onset latencies than complex sounds, such as monkey vocalizations ("Coos"). This trend was reversed in LB, indicating a hierarchical functional organization of auditory cortex in the macaque. LFP selection latencies in A1 were always shorter than those in LB for both PTs and Coos, reflecting the serial arrival of stimulus-specific information in these areas. Thus, chronometry on spike-LFP signals revealed some of the effective neural circuitry underlying complex sound discrimination.
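A minimal version of such latency estimation, using the common baseline mean + 3 SD threshold convention (an assumption here, not necessarily the paper's criterion), might look like this:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(-100, 300)                      # ms relative to stimulus onset
# Trial-averaged firing rate (simulated), with a response injected at 25 ms.
rate = rng.poisson(5, size=(200, t.size)).mean(axis=0).astype(float)
rate[t >= 25] += 4.0

baseline = rate[t < 0]
threshold = baseline.mean() + 3 * baseline.std()
post = t >= 0
onset = t[post][np.argmax(rate[post] > threshold)]  # first suprathreshold bin
print(f"estimated onset latency: {onset} ms")
```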
20. Activity in Human Auditory Cortex Represents Spatial Separation Between Concurrent Sounds. J Neurosci 2018;38:4977-4984. PMID: 29712782; DOI: 10.1523/jneurosci.3323-17.2018.
Abstract
The primary and posterior auditory cortex (AC) are known for their sensitivity to spatial information, but how this information is processed is not yet understood. AC that is sensitive to spatial manipulations is also modulated by the number of auditory streams present in a scene (Smith et al., 2010), suggesting that spatial and nonspatial cues are integrated for stream segregation. We reasoned that, if this is the case, then it is the distance between sounds rather than their absolute positions that is essential. To test this hypothesis, we measured human brain activity in response to spatially separated concurrent sounds with fMRI at 7 tesla in five men and five women. Stimuli were spatialized amplitude-modulated broadband noises recorded for each participant via in-ear microphones before scanning. Using a linear support vector machine classifier, we investigated whether sound location and/or location plus spatial separation between sounds could be decoded from the activity in Heschl's gyrus and the planum temporale. The classifier was successful only when comparing patterns associated with the conditions that had the largest difference in perceptual spatial separation. Our pattern of results suggests that the representation of spatial separation is not merely the combination of single locations, but rather is an independent feature of the auditory scene.

Significance statement: Often, when we think of auditory spatial information, we think of where sounds are coming from, that is, the process of localization. However, this information can also be used in scene analysis, the process of grouping and segregating features of a soundwave into objects. Essentially, when sounds are further apart, they are more likely to be segregated into separate streams. Here, we provide evidence that activity in the human auditory cortex represents the spatial separation between sounds rather than their absolute locations, indicating that scene analysis and localization processes may be independent.
21.
Kryklywy JH, Macpherson EA, Mitchell DGV. Decoding auditory spatial and emotional information encoding using multivariate versus univariate techniques. Exp Brain Res 2018; 236:945-953. [PMID: 29374776 PMCID: PMC5887003 DOI: 10.1007/s00221-018-5185-7] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2017] [Accepted: 01/22/2018] [Indexed: 11/27/2022]
Abstract
Emotion can have diverse effects on behaviour and perception, modulating function in some circumstances and having little effect in others. Recently, it was identified that part of the heterogeneity of emotional effects could be due to a dissociable representation of emotion in dual pathway models of sensory processing. Our previous fMRI experiment using traditional univariate analyses showed that emotion modulated processing in the auditory ‘what’ but not ‘where’ processing pathway. The current study further investigates this dissociation using a multi-voxel pattern analysis (MVPA) searchlight approach. While undergoing fMRI, participants localized sounds of varying emotional content. A searchlight MVPA was conducted to identify activity patterns predictive of sound location and/or emotion. Relative to the prior univariate analysis, MVPA indicated larger overlapping spatial and emotional representations of sound within early secondary regions associated with auditory localization. However, consistent with the univariate analysis, these two dimensions were increasingly segregated in late secondary and tertiary regions of the auditory processing streams. These results, while complementary to our original univariate analyses, highlight the utility of multiple analytic approaches for neuroimaging, particularly for neural processes with known representations dependent on population coding.
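A searchlight analysis slides a small neighborhood across the brain and fits a classifier at each location. The sketch below implements a toy cubic-neighborhood searchlight over a NumPy array of trial-wise volumes; it is only a schematic stand-in for the spherical searchlights used by fMRI toolboxes, not the authors' implementation.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def searchlight(data, labels, mask, radius=1):
    """data: (n_trials, nx, ny, nz) volumes; mask: (nx, ny, nz) boolean.
    Returns a map of cross-validated accuracies, one per masked voxel."""
    acc = np.full(mask.shape, np.nan)
    for cx, cy, cz in zip(*np.nonzero(mask)):
        # cubic neighborhood around the center voxel (stand-in for a sphere)
        sl = (slice(None),
              slice(max(cx - radius, 0), cx + radius + 1),
              slice(max(cy - radius, 0), cy + radius + 1),
              slice(max(cz - radius, 0), cz + radius + 1))
        X = data[sl].reshape(len(labels), -1)
        acc[cx, cy, cz] = cross_val_score(
            LinearSVC(max_iter=10000), X, labels, cv=5).mean()
    return acc

# Tiny synthetic demo: 40 trials on an 8x8x8 grid, small central mask
rng = np.random.default_rng(2)
data = rng.standard_normal((40, 8, 8, 8))
labels = np.repeat([0, 1], 20)
mask = np.zeros((8, 8, 8), bool); mask[3:5, 3:5, 3:5] = True
acc_map = searchlight(data, labels, mask)
```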
Affiliation(s)
- James H Kryklywy
- Department of Psychology, University of British Columbia, Vancouver, V6T 1Z4, Canada; Graduate Program in Neuroscience, University of Western Ontario, London, ON, N6A 5A5, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, N6A 5B7, Canada
- Ewan A Macpherson
- School of Communication Sciences and Disorders, University of Western Ontario, London, ON, N6G 1H1, Canada; National Centre for Audiology, University of Western Ontario, London, ON, N6G 1H1, Canada
- Derek G V Mitchell
- Graduate Program in Neuroscience, University of Western Ontario, London, ON, N6A 5A5, Canada; Brain and Mind Institute, University of Western Ontario, London, ON, N6A 5B7, Canada; Department of Anatomy and Cell Biology, University of Western Ontario, London, ON, N6A 3K7, Canada; Department of Psychiatry, University of Western Ontario, London, ON, N6A 5A5, Canada
22.
Diana M, Raij T, Melis M, Nummenmaa A, Leggio L, Bonci A. Rehabilitating the addicted brain with transcranial magnetic stimulation. Nat Rev Neurosci 2017; 18:685-693. [PMID: 28951609 DOI: 10.1038/nrn.2017.113] [Citation(s) in RCA: 149] [Impact Index Per Article: 21.3] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
Substance use disorders (SUDs) are one of the leading causes of morbidity and mortality worldwide. In spite of considerable advances in understanding the neural underpinnings of SUDs, therapeutic options remain limited. Recent studies have highlighted the potential of transcranial magnetic stimulation (TMS) as an innovative, safe and cost-effective treatment for some SUDs. Repetitive TMS (rTMS) influences neural activity in the short and long term by mechanisms involving neuroplasticity both locally, under the stimulating coil, and at the network level, throughout the brain. The long-term neurophysiological changes induced by rTMS have the potential to affect behaviours relating to drug craving, intake and relapse. Here, we review TMS mechanisms and evidence that rTMS is opening new avenues in addiction treatments.
Affiliation(s)
- Marco Diana
- 'G. Minardi' Laboratory for Cognitive Neuroscience, Department of Chemistry and Pharmacy, University of Sassari, 07100 Sassari, Italy
- Tommi Raij
- Shirley Ryan AbilityLab, Center for Brain Stimulation, the Department of Physical Medicine and Rehabilitation and the Department of Neurobiology, Northwestern University, Chicago, Illinois 60611, USA
- Miriam Melis
- Department of Biomedical Sciences, Division of Neuroscience and Clinical Pharmacology, University of Cagliari, 09042 Monserrato, Italy
- Aapo Nummenmaa
- Massachusetts General Hospital (MGH)/Massachusetts Institute of Technology (MIT)/Harvard Medical School (HMS) Athinoula A. Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, Massachusetts 02129, USA
- Lorenzo Leggio
- Section on Clinical Psychoneuroendocrinology and Neuropsychopharmacology, US National Institute on Alcohol Abuse and Alcoholism Division of Intramural Clinical and Biological Research (NIAAA DICBR) and US National Institute on Drug Abuse Intramural Research Program (NIDA IRP), NIH (National Institutes of Health), Bethesda, Maryland 20892, USA; Center for Alcohol and Addiction Studies, Brown University, Providence, Rhode Island 02912, USA
- Antonello Bonci
- US National Institute on Drug Abuse Intramural Research Program (NIDA IRP); Departments of Neuroscience and Psychiatry, Johns Hopkins University, Baltimore, Maryland 21224, USA
23.
Makarov SN, Noetscher GM, Yanamadala J, Piazza MW, Louie S, Prokop A, Nazarian A, Nummenmaa A. Virtual Human Models for Electromagnetic Studies and Their Applications. IEEE Rev Biomed Eng 2017; 10:95-121. [PMID: 28682265 PMCID: PMC10502908 DOI: 10.1109/rbme.2017.2722420] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/17/2023]
Abstract
Numerical simulation of electromagnetic, thermal, and mechanical responses of the human body to different stimuli in magnetic resonance imaging safety, antenna research, electromagnetic tomography, and electromagnetic stimulation is currently limited by the availability of anatomically adequate and numerically efficient cross-platform computational models or "virtual humans." The objective of this study is to provide a comprehensive review of modern human models and body region models available in the field and their important features.
Affiliation(s)
- Sergey N. Makarov
- ECE Dept., Worcester Polytechnic Institute, Worcester, MA 01609; Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114
- Gregory M. Noetscher
- ECE Dept., Worcester Polytechnic Institute, Worcester, MA 01609; Neva Electromagnetics, LLC, Yarmouth Port, MA 02675
- Ara Nazarian
- Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA 02675
- Aapo Nummenmaa
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA 02114
24.
Hearing Scenes: A Neuromagnetic Signature of Auditory Source and Reverberant Space Separation. eNeuro 2017; 4:eN-NWR-0007-17. [PMID: 28451630 PMCID: PMC5394928 DOI: 10.1523/eneuro.0007-17.2017] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2016] [Revised: 02/03/2017] [Accepted: 02/06/2017] [Indexed: 11/21/2022] Open
Abstract
Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals.
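The reported decoding onsets and peaks come from time-resolved classification, where a classifier is trained and tested independently at each time point. A minimal sketch with synthetic MEG-like data and an arbitrary accuracy criterion; it is not the authors' pipeline.

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

# X: (n_trials, n_sensors, n_times) MEG epochs; y: labels (e.g., source identity)
rng = np.random.default_rng(3)
n_trials, n_sensors, n_times = 80, 102, 120
X = rng.standard_normal((n_trials, n_sensors, n_times))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 40:] += 0.3                    # synthetic effect from sample 40 on

# Train and test a separate classifier at every time point
accuracy = np.array([
    cross_val_score(LinearSVC(max_iter=10000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
onset = int(np.argmax(accuracy > 0.6))      # first sample above the criterion
print(onset)                                # ~40 with this synthetic effect
```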
25.
The Role of the Auditory Brainstem in Regularity Encoding and Deviance Detection. THE FREQUENCY-FOLLOWING RESPONSE 2017. [DOI: 10.1007/978-3-319-47944-6_5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/07/2022]
26.
Seymour JL, Low KA, Maclin EL, Chiarelli AM, Mathewson KE, Fabiani M, Gratton G, Dye MW. Reorganization of neural systems mediating peripheral visual selective attention in the deaf: An optical imaging study. Hear Res 2017; 343:162-175. [PMID: 27668836 DOI: 10.1016/j.heares.2016.09.007] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/14/2016] [Revised: 09/16/2016] [Accepted: 09/19/2016] [Indexed: 10/21/2022]
27.
Auditory distance perception in humans: a review of cues, development, neuronal bases, and effects of sensory loss. Atten Percept Psychophys 2016; 78:373-95. [PMID: 26590050 PMCID: PMC4744263 DOI: 10.3758/s13414-015-1015-1] [Citation(s) in RCA: 93] [Impact Index Per Article: 11.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
Abstract
Auditory distance perception plays a major role in spatial awareness, enabling location of objects and avoidance of obstacles in the environment. However, it remains under-researched relative to studies of the directional aspect of sound localization. This review focuses on the following four aspects of auditory distance perception: cue processing, development, consequences of visual and auditory loss, and neurological bases. The several auditory distance cues vary in their effective ranges in peripersonal and extrapersonal space. The primary cues are sound level, reverberation, and frequency. Nonperceptual factors, including the importance of the auditory event to the listener, also can affect perceived distance. Basic internal representations of auditory distance emerge at approximately 6 months of age in humans. Although visual information plays an important role in calibrating auditory space, sensorimotor contingencies can be used for calibration when vision is unavailable. Blind individuals often manifest supranormal abilities to judge relative distance but show a deficit in absolute distance judgments. Following hearing loss, the use of auditory level as a distance cue remains robust, while the reverberation cue becomes less effective. Previous studies have not found evidence that hearing-aid processing affects perceived auditory distance. Studies investigating the brain areas involved in processing different acoustic distance cues are described. Finally, suggestions are given for further research on auditory distance perception, including broader investigation of how background noise and multiple sound sources affect perceived auditory distance for those with sensory loss.
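Of the cues listed, sound level is the simplest to quantify: in the free field, the level of a point source drops by about 6 dB per doubling of distance. A small illustration of this one cue only, assuming a point source and no reverberation; values are illustrative.

```python
import numpy as np

def level_at_distance(level_1m_db, distance_m):
    """Free-field level of a point source relative to its level at 1 m."""
    return level_1m_db - 20.0 * np.log10(distance_m)

for d in (1, 2, 4, 8):
    print(f"{d} m: {level_at_distance(70.0, d):.1f} dB SPL")
# 70.0, 64.0, 58.0, 52.0 -> ~6 dB per doubling of distance
```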
28.
Renvall H, Staeren N, Barz CS, Ley A, Formisano E. Attention Modulates the Auditory Cortical Processing of Spatial and Category Cues in Naturalistic Auditory Scenes. Front Neurosci 2016; 10:254. [PMID: 27375416 PMCID: PMC4894904 DOI: 10.3389/fnins.2016.00254] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2016] [Accepted: 05/23/2016] [Indexed: 11/13/2022] Open
Abstract
This combined fMRI and MEG study investigated brain activations during listening and attending to natural auditory scenes. We first recorded, using in-ear microphones, vocal non-speech sounds and environmental sounds, which were mixed to construct auditory scenes containing two concurrent sound streams. During the brain measurements, subjects attended to one of the streams while spatial acoustic information of the scene was either preserved (stereophonic sounds) or removed (monophonic sounds). Compared to monophonic sounds, stereophonic sounds evoked larger blood-oxygenation-level-dependent (BOLD) fMRI responses in the bilateral posterior superior temporal areas, independent of which stimulus attribute the subject was attending to. This finding is consistent with the functional role of these regions in the (automatic) processing of auditory spatial cues. Additionally, significant differences in the cortical activation patterns depending on the target of attention were observed. Bilateral planum temporale and inferior frontal gyrus were preferentially activated when attending to stereophonic environmental sounds, whereas when subjects attended to stereophonic voice sounds, the BOLD responses were larger at the bilateral middle superior temporal gyrus and sulcus, previously reported to show voice sensitivity. In contrast, the time-resolved MEG responses were stronger for mono- than stereophonic sounds in the bilateral auditory cortices at ~360 ms after the stimulus onset when attending to the voice excerpts within the combined sounds. The observed effects suggest that during the segregation of auditory objects from the auditory background, spatial sound cues together with other relevant temporal and spectral cues are processed in an attention-dependent manner at the cortical locations generally involved in sound recognition. More synchronous neuronal activation during monophonic than stereophonic sound processing, as well as (local) neuronal inhibitory mechanisms in the auditory cortex, may explain the simultaneous increase of BOLD responses and decrease of MEG responses. These findings highlight the complementary role of electrophysiological and hemodynamic measures in addressing brain processing of complex stimuli.
Affiliation(s)
- Hanna Renvall
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland; Aalto Neuroimaging, Magnetoencephalography (MEG) Core, Aalto University, Espoo, Finland
- Noël Staeren
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Claudia S Barz
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Institute for Neuroscience and Medicine, Research Centre Juelich, Juelich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany
- Anke Ley
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Netherlands; Maastricht Center for Systems Biology (MaCSBio), Maastricht University, Maastricht, Netherlands
29.
Abstract
Recognizing that electrically stimulating the motor cortex could relieve chronic pain sparked development of noninvasive technologies. In transcranial magnetic stimulation (TMS), electromagnetic coils held against the scalp influence underlying cortical firing. Multiday repetitive transcranial magnetic stimulation (rTMS) can induce long-lasting, potentially therapeutic brain plasticity. Nearby ferromagnetic or electronic implants are contraindications. Adverse effects are minimal, primarily headaches. Single provoked seizures are very rare. Transcranial magnetic stimulation devices are marketed for depression and migraine in the United States and for various indications elsewhere. Although multiple studies report that high-frequency rTMS of the motor cortex reduces neuropathic pain, their quality has been insufficient to support Food and Drug Administration application. Harvard's Radcliffe Institute therefore sponsored a workshop to solicit advice from experts in TMS, pain research, and clinical trials. They recommended that researchers standardize and document all TMS parameters and improve strategies for sham and double blinding. Subjects should have common well-characterized pain conditions amenable to motor cortex rTMS and studies should be adequately powered. They recommended standardized assessment tools (eg, NIH's PROMIS) plus validated condition-specific instruments and consensus-recommended metrics (eg, IMMPACT). Outcomes should include pain intensity and qualities, patient and clinician impression of change, and proportions achieving 30% and 50% pain relief. Secondary outcomes could include function, mood, sleep, and/or quality of life. Minimum required elements include sample sources, sizes, and demographics, recruitment methods, inclusion and exclusion criteria, baseline and posttreatment means and SD, adverse effects, safety concerns, discontinuations, and medication-usage records. Outcomes should be monitored for at least 3 months after initiation with prespecified statistical analyses. Multigroup collaborations or registry studies may be needed for pivotal trials.
30.
Selective memory retrieval of auditory what and auditory where involves the ventrolateral prefrontal cortex. Proc Natl Acad Sci U S A 2016; 113:1919-24. [PMID: 26831102 DOI: 10.1073/pnas.1520432113] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/18/2023] Open
Abstract
There is evidence from the visual, verbal, and tactile memory domains that the midventrolateral prefrontal cortex plays a critical role in the top-down modulation of activity within posterior cortical areas for the selective retrieval of specific aspects of a memorized experience, a functional process often referred to as active controlled retrieval. In the present functional neuroimaging study, we explore the neural bases of active retrieval for auditory nonverbal information, about which almost nothing is known. Human participants were scanned with functional magnetic resonance imaging (fMRI) in a task in which they were presented with short melodies from different locations in a simulated virtual acoustic environment within the scanner and were then instructed to retrieve selectively either the particular melody presented or its location. There were significant activity increases specifically within the midventrolateral prefrontal region during the selective retrieval of nonverbal auditory information. During the selective retrieval of information from auditory memory, the right midventrolateral prefrontal region increased its interaction with the auditory temporal region and the inferior parietal lobule in the right hemisphere. These findings provide evidence that the midventrolateral prefrontal cortical region interacts with specific posterior cortical areas in the human cerebral cortex for the selective retrieval of object and location features of an auditory memory experience.
31.
Alho J, Green BM, May PJC, Sams M, Tiitinen H, Rauschecker JP, Jääskeläinen IP. Early-latency categorical speech sound representations in the left inferior frontal gyrus. Neuroimage 2016; 129:214-223. [PMID: 26774614 DOI: 10.1016/j.neuroimage.2016.01.016] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2015] [Revised: 12/17/2015] [Accepted: 01/06/2016] [Indexed: 11/30/2022] Open
Abstract
Efficient speech perception requires the mapping of highly variable acoustic signals to distinct phonetic categories. How the brain overcomes this many-to-one mapping problem has remained unresolved. To infer the cortical location, latency, and dependency on attention of categorical speech sound representations in the human brain, we measured stimulus-specific adaptation of neuromagnetic responses to sounds from a phonetic continuum. The participants attended to the sounds while performing a non-phonetic listening task and, in a separate recording condition, ignored the sounds while watching a silent film. Neural adaptation indicative of phoneme category selectivity was found only during the attentive condition in the pars opercularis (POp) of the left inferior frontal gyrus, where the degree of selectivity correlated with the ability of the participants to categorize the phonetic stimuli. Importantly, these category-specific representations were activated at an early latency of 115-140 ms, which is compatible with the speed of perceptual phonetic categorization. Further, concurrent functional connectivity was observed between POp and posterior auditory cortical areas. These novel findings suggest that when humans attend to speech, the left POp mediates phonetic categorization through integration of auditory and motor information via the dorsal auditory stream.
Affiliation(s)
- Jussi Alho
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland
- Brannon M Green
- Laboratory of Integrated Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA
- Patrick J C May
- Special Laboratory Non-Invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, D-39118 Magdeburg, Germany
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland
- Hannu Tiitinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland
- Josef P Rauschecker
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland; Laboratory of Integrated Neuroscience and Cognition, Interdisciplinary Program in Neuroscience, Georgetown University Medical Center, Washington, DC, 20057, USA; Institute for Advanced Study, TUM, Munich-Garching, 80333 Munich, Germany
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering (NBE), School of Science, Aalto University, 00076 AALTO, Espoo, Finland; MEG Core, Aalto NeuroImaging, Aalto University, 00076 AALTO, Espoo, Finland; AMI Centre, Aalto NeuroImaging, Aalto University, 00076 AALTO, Espoo, Finland
32.
Ahveninen J, Huang S, Ahlfors SP, Hämäläinen M, Rossi S, Sams M, Jääskeläinen IP. Interacting parallel pathways associate sounds with visual identity in auditory cortices. Neuroimage 2015; 124:858-868. [PMID: 26419388 DOI: 10.1016/j.neuroimage.2015.09.044] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2015] [Revised: 08/26/2015] [Accepted: 09/20/2015] [Indexed: 10/23/2022] Open
Abstract
Spatial and non-spatial information of sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform, middle temporal or MT areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical what and where pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.
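Inter-areal synchronization of the kind reported here is often quantified with the phase-locking value (PLV), the consistency of the band-limited phase difference across trials. A minimal sketch with simulated source waveforms follows; filter settings and names are illustrative, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def plv(x, y, fs, band=(4.0, 8.0)):
    """Phase-locking value between two signals within a frequency band.

    x, y: (n_trials, n_times) single-trial source waveforms
    fs: sampling rate in Hz; band: (low, high) edges in Hz
    """
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phx = np.angle(hilbert(filtfilt(b, a, x, axis=-1), axis=-1))
    phy = np.angle(hilbert(filtfilt(b, a, y, axis=-1), axis=-1))
    # consistency of the phase difference across trials, per time point
    return np.abs(np.exp(1j * (phx - phy)).mean(axis=0))

# Example: theta-band PLV between two simulated 'areas' across 60 trials
fs = 250
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * 6 * t) + rng.standard_normal((60, t.size))
y = np.sin(2 * np.pi * 6 * t + 0.5) + rng.standard_normal((60, t.size))
print(plv(x, y, fs).mean())   # high value indicates a consistent phase lag
```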
Affiliation(s)
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
- Samantha Huang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
- Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
- Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA; Harvard-MIT Division of Health Sciences and Technology, Cambridge, MA, USA; Department of Neuroscience and Biomedical Engineering, Aalto University, School of Science, Espoo, Finland
- Stephanie Rossi
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital/Harvard Medical School, Charlestown, MA, USA
- Mikko Sams
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Iiro P Jääskeläinen
- Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
33.
Zündorf IC, Lewald J, Karnath HO. Testing the dual-pathway model for auditory processing in human cortex. Neuroimage 2015; 124:672-681. [PMID: 26388552 DOI: 10.1016/j.neuroimage.2015.09.026] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/11/2015] [Revised: 09/09/2015] [Accepted: 09/10/2015] [Indexed: 11/16/2022] Open
Abstract
Analogous to the visual system, auditory information has been proposed to be processed in two largely segregated streams: an anteroventral ("what") pathway mainly subserving sound identification and a posterodorsal ("where") stream mainly subserving sound localization. Despite the popularity of this assumption, the degree of separation of spatial and non-spatial auditory information processing in cortex is still under discussion. In the present study, a statistical approach was implemented to investigate potential behavioral dissociations for spatial and non-spatial auditory processing in stroke patients, and voxel-wise lesion analyses were used to uncover their neural correlates. The results generally provided support for anatomically and functionally segregated auditory networks. However, some degree of anatomo-functional overlap between "what" and "where" aspects of processing was found in the superior pars opercularis of right inferior frontal gyrus (Brodmann area 44), suggesting the potential existence of a shared target area of both auditory streams in this region. Moreover, beyond the typically defined posterodorsal stream (i.e., posterior superior temporal gyrus, inferior parietal lobule, and superior frontal sulcus), occipital lesions were found to be associated with sound localization deficits. These results, indicating anatomically and functionally complex cortical networks for spatial and non-spatial auditory processing, are roughly consistent with the dual-pathway model of auditory processing in its original form, but argue for the need to refine and extend this widely accepted hypothesis.
Affiliation(s)
- Ida C Zündorf
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany
- Jörg Lewald
- Department of Cognitive Psychology, Institute of Cognitive Neuroscience, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany; Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Hans-Otto Karnath
- Center of Neurology, Division of Neuropsychology, Hertie Institute for Clinical Brain Research, University of Tübingen, Tübingen, Germany; Department of Psychology, University of South Carolina, Columbia, SC 29208, USA
34.
Abstract
It is well understood that the brain integrates information that is provided to our different senses to generate a coherent multisensory percept of the world around us (Stein and Stanford, 2008), but how does the brain handle concurrent sensory information from our mind and the external world? Recent behavioral experiments have found that mental imagery--the internal representation of sensory stimuli in one's mind--can also lead to integrated multisensory perception (Berger and Ehrsson, 2013); however, the neural mechanisms of this process have not yet been explored. Here, using functional magnetic resonance imaging and an adapted version of a well known multisensory illusion (i.e., the ventriloquist illusion; Howard and Templeton, 1966), we investigated the neural basis of mental imagery-induced multisensory perception in humans. We found that simultaneous visual mental imagery and auditory stimulation led to an illusory translocation of auditory stimuli and was associated with increased activity in the left superior temporal sulcus (L. STS), a key site for the integration of real audiovisual stimuli (Beauchamp et al., 2004a, 2010; Driver and Noesselt, 2008; Ghazanfar et al., 2008; Dahl et al., 2009). This imagery-induced ventriloquist illusion was also associated with increased effective connectivity between the L. STS and the auditory cortex. These findings suggest an important role of the temporal association cortex in integrating imagined visual stimuli with real auditory stimuli, and further suggest that connectivity between the STS and auditory cortex plays a modulatory role in spatially localizing auditory stimuli in the presence of imagined visual stimuli.
35.
Ding H, Qin W, Liang M, Ming D, Wan B, Li Q, Yu C. Cross-modal activation of auditory regions during visuo-spatial working memory in early deafness. Brain 2015; 138:2750-65. [PMID: 26070981 DOI: 10.1093/brain/awv165] [Citation(s) in RCA: 57] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/15/2014] [Accepted: 04/18/2015] [Indexed: 11/13/2022] Open
Abstract
Early deafness can reshape deprived auditory regions to enable the processing of signals from the remaining intact sensory modalities. Cross-modal activation has been observed in auditory regions during non-auditory tasks in early deaf subjects. In hearing subjects, visual working memory can evoke activation of the visual cortex, which further contributes to behavioural performance. In early deaf subjects, however, whether and how auditory regions participate in visual working memory remains unclear. We hypothesized that auditory regions may be involved in visual working memory processing and activation of auditory regions may contribute to the superior behavioural performance of early deaf subjects. In this study, 41 early deaf subjects (22 females and 19 males, age range: 20-26 years, age of onset of deafness < 2 years) and 40 age- and gender-matched hearing controls underwent functional magnetic resonance imaging during a visuo-spatial delayed recognition task that consisted of encoding, maintenance and recognition stages. The early deaf subjects exhibited faster reaction times on the spatial working memory task than did the hearing controls. Compared with hearing controls, deaf subjects exhibited increased activation in the superior temporal gyrus bilaterally during the recognition stage. This increased activation amplitude predicted faster and more accurate working memory performance in deaf subjects. Deaf subjects also had increased activation in the superior temporal gyrus bilaterally during the maintenance stage and in the right superior temporal gyrus during the encoding stage. These increased activation amplitudes also predicted faster reaction times on the spatial working memory task in deaf subjects. These findings suggest that cross-modal plasticity occurs in auditory association areas in early deaf subjects. These areas are involved in visuo-spatial working memory. Furthermore, amplitudes of cross-modal activation during the maintenance stage were positively correlated with the age of onset of hearing aid use and were negatively correlated with the percentage of lifetime hearing aid use in deaf subjects. These findings suggest that earlier and longer hearing aid use may inhibit cross-modal reorganization in early deaf subjects. Granger causality analysis revealed that, compared to the hearing controls, the deaf subjects had an enhanced net causal flow from the frontal eye field to the superior temporal gyrus. These findings indicate that a top-down mechanism may better account for the cross-modal activation of auditory regions in early deaf subjects. See MacSweeney and Cardin (doi:10.1093/brain/awv197) for a scientific commentary on this article.
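Granger causality asks whether the past of one signal improves prediction of another beyond the latter's own past. A minimal bivariate least-squares sketch on synthetic data follows; it is illustrative only, not the study's implementation.

```python
import numpy as np

def _resid_var(target, design):
    """Residual variance of an ordinary least-squares fit with intercept."""
    design = np.column_stack([np.ones(len(target)), design])
    beta, *_ = np.linalg.lstsq(design, target, rcond=None)
    return np.var(target - design @ beta)

def granger(x, y, order=5):
    """Log variance ratio; > 0 suggests information flow from y to x."""
    target = x[order:]
    own = np.column_stack([x[order - k:len(x) - k] for k in range(1, order + 1)])
    oth = np.column_stack([y[order - k:len(y) - k] for k in range(1, order + 1)])
    return np.log(_resid_var(target, own) /
                  _resid_var(target, np.hstack([own, oth])))

# Synthetic example: x is driven by y delayed by 3 samples
rng = np.random.default_rng(5)
y = rng.standard_normal(2000)
x = 0.8 * np.roll(y, 3) + 0.5 * rng.standard_normal(2000)
print(granger(x, y), granger(y, x))   # y -> x should clearly dominate
```

The "net causal flow" reported in the study corresponds to the difference between the two directed estimates.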
Affiliation(s)
- Hao Ding
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Wen Qin
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
- Meng Liang
- School of Medical Imaging, Tianjin Medical University, Tianjin 300070, People's Republic of China
- Dong Ming
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Baikun Wan
- Department of Biomedical Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Qiang Li
- Technical College for the Deaf, Tianjin University of Technology, Tianjin 300384, People's Republic of China
- Chunshui Yu
- Department of Radiology and Tianjin Key Laboratory of Functional Imaging, Tianjin Medical University General Hospital, Tianjin 300052, People's Republic of China
36.
Salminen NH, Takanen M, Santala O, Alku P, Pulkki V. Neural realignment of spatially separated sound components. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 137:3356-3365. [PMID: 26093425 DOI: 10.1121/1.4921605] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
Natural auditory scenes often consist of several sound sources overlapping in time, but separated in space. Yet, location is not fully exploited in auditory grouping: spatially separated sounds can get perceptually fused into a single auditory object and this leads to difficulties in the identification and localization of concurrent sounds. Here, the brain mechanisms responsible for grouping across spatial locations were explored in magnetoencephalography (MEG) recordings. The results show that the cortical representation of a vowel spatially separated into two locations reflects the perceived location of the speech sound rather than the physical locations of the individual components. In other words, the auditory scene is neurally rearranged to bring components into spatial alignment when they were deemed to belong to the same object. This renders the original spatial information unavailable at the level of the auditory cortex and may contribute to difficulties in concurrent sound segregation.
Affiliation(s)
- Nelli H Salminen
- Brain and Mind Laboratory, Department of Biomedical Engineering and Computational Science, Aalto University School of Science, P.O. Box 12200, Aalto, FI-00076, Finland
- Marko Takanen
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Olli Santala
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Paavo Alku
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
- Ville Pulkki
- Department of Signal Processing and Acoustics, Aalto University School of Electrical Engineering, P.O. Box 13000, Aalto, FI-00076, Finland
37.
Zhang X, Zhang Q, Hu X, Zhang B. Neural representation of three-dimensional acoustic space in the human temporal lobe. Front Hum Neurosci 2015; 9:203. [PMID: 25932011 PMCID: PMC4399328 DOI: 10.3389/fnhum.2015.00203] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/13/2015] [Accepted: 03/27/2015] [Indexed: 11/13/2022] Open
Abstract
Sound localization is an important function of the human brain, but the underlying cortical mechanisms remain unclear. In this study, we recorded auditory stimuli in three-dimensional space and then replayed the stimuli through earphones during functional magnetic resonance imaging (fMRI). By employing a machine learning algorithm, we successfully decoded sound location from the blood oxygenation level-dependent signals in the temporal lobe. Analysis of the data revealed that different cortical patterns were evoked by sounds from different locations. Specifically, discrimination of sound location along the abscissa axis evoked robust responses in the left posterior superior temporal gyrus (STG) and right mid-STG, discrimination along the elevation (EL) axis evoked robust responses in the left posterior middle temporal lobe (MTL) and right STG, and discrimination along the ordinate axis evoked robust responses in the left mid-MTL and right mid-STG. These results support a distributed representation of acoustic space in human cortex.
Affiliation(s)
- Xiaolu Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Qingtian Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China
- Xiaolin Hu
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
- Bo Zhang
- State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology (TNList), Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
38.
Engineer CT, Rahebi KC, Buell EP, Fink MK, Kilgard MP. Speech training alters consonant and vowel responses in multiple auditory cortex fields. Behav Brain Res 2015; 287:256-64. [PMID: 25827927 DOI: 10.1016/j.bbr.2015.03.044] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2014] [Revised: 03/19/2015] [Accepted: 03/22/2015] [Indexed: 10/23/2022]
Abstract
Speech sounds evoke unique neural activity patterns in primary auditory cortex (A1). Extensive speech sound discrimination training alters A1 responses. While the neighboring auditory cortical fields each contain information about speech sound identity, each field processes speech sounds differently. We hypothesized that while all fields would exhibit training-induced plasticity following speech training, there would be unique differences in how each field changes. In this study, rats were trained to discriminate speech sounds by consonant or vowel in quiet and in varying levels of background speech-shaped noise. Local field potential and multiunit responses were recorded from four auditory cortex fields in rats that had received 10 weeks of speech discrimination training. Our results reveal that training alters speech evoked responses in each of the auditory fields tested. The neural response to consonants was significantly stronger in anterior auditory field (AAF) and A1 following speech training. The neural response to vowels following speech training was significantly weaker in ventral auditory field (VAF) and posterior auditory field (PAF). This differential plasticity of consonant and vowel sound responses may result from the greater paired pulse depression, expanded low frequency tuning, reduced frequency selectivity, and lower tone thresholds, which occurred across the four auditory fields. These findings suggest that alterations in the distributed processing of behaviorally relevant sounds may contribute to robust speech discrimination.
Affiliation(s)
- Crystal T Engineer
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Kimiya C Rahebi
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Elizabeth P Buell
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Melyssa K Fink
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
- Michael P Kilgard
- School of Behavioral and Brain Sciences, The University of Texas at Dallas, 800 West Campbell Road, GR41, Richardson, TX 75080, United States
39.
Viaud-Delmon I, Warusfel O. From ear to body: the auditory-motor loop in spatial cognition. Front Neurosci 2014; 8:283. [PMID: 25249933 PMCID: PMC4155796 DOI: 10.3389/fnins.2014.00283] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2014] [Accepted: 08/19/2014] [Indexed: 11/30/2022] Open
Abstract
Spatial memory is mainly studied through the visual sensory modality: navigation tasks in humans rarely integrate dynamic and spatial auditory information. In order to study how a spatial scene can be memorized on the basis of auditory and idiothetic cues only, we constructed an auditory equivalent of the Morris water maze, a task widely used to assess spatial learning and memory in rodents. Participants were equipped with wireless headphones, which delivered a soundscape updated in real time according to their movements in 3D space. A wireless tracking system (video infrared with passive markers) was used to send the coordinates of the subject's head to the sound rendering system. The rendering system used advanced HRTF-based synthesis of directional cues and room acoustic simulation for the auralization of a realistic acoustic environment. Participants were guided blindfolded in an experimental room. Their task was to explore a delimited area in order to find a hidden auditory target, i.e., a sound that was triggered only when the participant walked over a precise location in the area. The position of this target could be coded in relation to auditory landmarks constantly rendered during the exploration of the area. The task was composed of a practice trial, 6 acquisition trials during which participants had to memorize the location of the target, and 4 test trials in which some aspects of the auditory scene were modified. The task ended with a probe trial in which the auditory target was removed. The configuration of the search paths showed how auditory information was coded to memorize the position of the target and suggested that space can be coded efficiently without visual information in normally sighted subjects. In conclusion, space representation can be based on sensorimotor and auditory cues only, providing another argument in favor of the hypothesis that the brain has access to a modality-invariant representation of external space.
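The binaural rendering described here convolves each source with direction-specific head-related impulse responses (HRIRs). The sketch below uses toy HRIRs that encode only an interaural time and level difference; a real auralization system would use measured HRIRs plus room-acoustic simulation, as in the study.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy HRIRs: a pure delay plus attenuation on the far ear. Real HRIRs are
# measured filters; hrir_left/hrir_right here are illustrative placeholders.
fs = 44100
mono = np.random.default_rng(6).standard_normal(fs)     # 1 s of noise source
hrir_left = np.zeros(256); hrir_left[0] = 1.0           # near ear: no delay
hrir_right = np.zeros(256); hrir_right[30] = 0.7        # ~0.68 ms ITD + ILD

binaural = np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=1)
print(binaural.shape)                                   # (44355, 2) stereo
```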
Affiliation(s)
- Isabelle Viaud-Delmon
- CNRS, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Institut de Recherche et Coordination Acoustique/Musique, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Sorbonne Universités, Université Pierre et Marie Curie, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France
- Olivier Warusfel
- CNRS, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Institut de Recherche et Coordination Acoustique/Musique, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France; Sorbonne Universités, Université Pierre et Marie Curie, UMR 9912, Sciences et Technologies de la Musique et du Son, Paris, France
40.
Plakke B, Romanski LM. Auditory connections and functions of prefrontal cortex. Front Neurosci 2014; 8:199. [PMID: 25100931 PMCID: PMC4107948 DOI: 10.3389/fnins.2014.00199] [Citation(s) in RCA: 85] [Impact Index Per Article: 8.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2014] [Accepted: 06/26/2014] [Indexed: 12/17/2022] Open
Abstract
The functional auditory system extends from the ears to the frontal lobes with successively more complex functions occurring as one ascends the hierarchy of the nervous system. Several areas of the frontal lobe receive afferents from both early and late auditory processing regions within the temporal lobe. Afferents from the early part of the cortical auditory system, the auditory belt cortex, which are presumed to carry information regarding auditory features of sounds, project to only a few prefrontal regions and are most dense in the ventrolateral prefrontal cortex (VLPFC). In contrast, projections from the parabelt and the rostral superior temporal gyrus (STG) most likely convey more complex information and target a larger, widespread region of the prefrontal cortex. Neuronal responses reflect these anatomical projections as some prefrontal neurons exhibit responses to features in acoustic stimuli, while other neurons display task-related responses. For example, recording studies in non-human primates indicate that VLPFC is responsive to complex sounds including vocalizations and that VLPFC neurons in area 12/47 respond to sounds with similar acoustic morphology. In contrast, neuronal responses during auditory working memory involve a wider region of the prefrontal cortex. In humans, the frontal lobe is involved in auditory detection, discrimination, and working memory. Past research suggests that dorsal and ventral subregions of the prefrontal cortex process different types of information with dorsal cortex processing spatial/visual information and ventral cortex processing non-spatial/auditory information. While this is apparent in the non-human primate and in some neuroimaging studies, most research in humans indicates that specific task conditions, stimuli or previous experience may bias the recruitment of specific prefrontal regions, suggesting a more flexible role for the frontal lobe during auditory cognition.
Affiliation(s)
- Bethany Plakke
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Lizabeth M Romanski
- Department of Neurobiology and Anatomy, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
41.
Ross B, Miyazaki T, Thompson J, Jamali S, Fujioka T. Human cortical responses to slow and fast binaural beats reveal multiple mechanisms of binaural hearing. J Neurophysiol 2014; 112:1871-84. [PMID: 25008412 DOI: 10.1152/jn.00224.2014] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/16/2023] Open
Abstract
When two tones with slightly different frequencies are presented to both ears, they interact in the central auditory system and induce the sensation of a beating sound. At low difference frequencies, we perceive a single sound, which is moving across the head between the left and right ears. The percept changes to loudness fluctuation, roughness, and pitch with increasing beat rate. To examine the neural representations underlying these different perceptions, we recorded neuromagnetic cortical responses while participants listened to binaural beats at a continuously varying rate between 3 Hz and 60 Hz. Binaural beat responses were analyzed as neuromagnetic oscillations following the trajectory of the stimulus rate. Responses were largest in the 40-Hz gamma range and at low frequencies. Binaural beat responses at 3 Hz showed opposite polarity in the left and right auditory cortices. We suggest that this difference in polarity reflects the opponent neural population code for representing sound location. Binaural beats at any rate induced gamma oscillations. However, the responses were largest at 40-Hz stimulation. We propose that the neuromagnetic gamma oscillations reflect postsynaptic modulation that allows for precise timing of cortical neural firing. Systematic phase differences between bilateral responses suggest that separate sound representations of a sound object exist in the left and right auditory cortices. We conclude that binaural processing at the cortical level occurs with the same temporal acuity as monaural processing whereas the identification of sound location requires further interpretation and is limited by the rate of object representations.
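A binaural-beat stimulus is simply a pure tone in one ear and a slightly detuned tone in the other; the difference frequency sets the beat rate. A minimal sketch generating a fixed 40-Hz beat and a 3-60 Hz sweep follows; the carrier frequency and other parameter values are illustrative, not taken from the study.

```python
import numpy as np

fs, dur = 44100, 2.0
t = np.arange(int(fs * dur)) / fs
f_base = 400.0

# Fixed 40-Hz beat: one ear at 400 Hz, the other at 440 Hz
left = np.sin(2 * np.pi * f_base * t)
right = np.sin(2 * np.pi * (f_base + 40.0) * t)
stimulus = np.stack([left, right], axis=1)      # (n_samples, 2) stereo array

# Continuously varying beat rate (3 -> 60 Hz): integrate the instantaneous
# frequency of the right-ear tone to obtain its phase
rate = np.linspace(3.0, 60.0, t.size)
right_sweep = np.sin(2 * np.pi * np.cumsum(f_base + rate) / fs)
```

Because each ear receives only a single tone, any beating percept must arise from binaural interaction in the central auditory system rather than at the cochlea.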
Affiliation(s)
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada; Department of Medical Biophysics, University of Toronto, Toronto, Ontario, Canada
- Takahiro Miyazaki
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- Jessica Thompson
- International Laboratory for Brain, Music and Sound Research, Department of Psychology, University of Montreal, Montreal, Quebec, Canada
- Shahab Jamali
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- Takako Fujioka
- Center for Computer Research in Music and Acoustics, Stanford University, Stanford, California
42.
Degraded speech sound processing in a rat model of fragile X syndrome. Brain Res 2014; 1564:72-84. [PMID: 24713347 DOI: 10.1016/j.brainres.2014.03.049] [Citation(s) in RCA: 31] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2014] [Revised: 03/29/2014] [Accepted: 03/31/2014] [Indexed: 12/29/2022]
Abstract
Fragile X syndrome is the most common inherited form of intellectual disability and the leading genetic cause of autism. Impaired phonological processing in fragile X syndrome interferes with the development of language skills. Although auditory cortex responses are known to be abnormal in fragile X syndrome, it is not clear how these differences impact speech sound processing. This study provides the first evidence that the cortical representation of speech sounds is impaired in Fmr1 knockout rats, despite normal speech discrimination behavior. Evoked potentials and spiking activity in response to speech sounds, noise burst trains, and tones were significantly degraded in primary auditory cortex, anterior auditory field and the ventral auditory field. Neurometric analysis of speech evoked activity using a pattern classifier confirmed that activity in these fields contains significantly less information about speech sound identity in Fmr1 knockout rats compared to control rats. Responses were normal in the posterior auditory field, which is associated with sound localization. The greatest impairment was observed in the ventral auditory field, which is related to emotional regulation. Dysfunction in the ventral auditory field may contribute to poor emotional regulation in fragile X syndrome and may help explain the observation that later auditory evoked responses are more disturbed in fragile X syndrome compared to earlier responses. Rodent models of fragile X syndrome are likely to prove useful for understanding the biological basis of fragile X syndrome and for testing candidate therapies.
43.
Schall S, von Kriegstein K. Functional connectivity between face-movement and speech-intelligibility areas during auditory-only speech perception. PLoS One 2014; 9:e86325. [PMID: 24466026 PMCID: PMC3900530 DOI: 10.1371/journal.pone.0086325] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2013] [Accepted: 12/06/2013] [Indexed: 11/29/2022] Open
Abstract
It has been proposed that internal simulation of the talking face of visually-known speakers facilitates auditory speech recognition. One prediction of this view is that brain areas involved in auditory-only speech comprehension interact with visual face-movement sensitive areas, even under auditory-only listening conditions. Here, we test this hypothesis using connectivity analyses of functional magnetic resonance imaging (fMRI) data. Participants (17 normal participants, 17 developmental prosopagnosics) first learned six speakers via brief voice-face or voice-occupation training (<2 min/speaker). This was followed by an auditory-only speech recognition task and a control task (voice recognition) involving the learned speakers’ voices in the MRI scanner. As hypothesized, we found that, during speech recognition, familiarity with the speaker’s face increased the functional connectivity between the face-movement sensitive posterior superior temporal sulcus (STS) and an anterior STS region that supports auditory speech intelligibility. There was no difference between normal participants and prosopagnosics. This was expected because previous findings have shown that both groups use the face-movement sensitive STS to optimize auditory-only speech comprehension. Overall, the present findings indicate that learned visual information is integrated into the analysis of auditory-only speech and that this integration results from the interaction of task-relevant face-movement and auditory speech-sensitive areas.
Affiliation(s)
- Sonja Schall
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany; Humboldt University of Berlin, Berlin, Germany
44.
Auditory-cortex short-term plasticity induced by selective attention. Neural Plast 2014; 2014:216731. [PMID: 24551458 PMCID: PMC3914570 DOI: 10.1155/2014/216731] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/25/2013] [Accepted: 12/15/2013] [Indexed: 11/23/2022] Open
Abstract
The ability to concentrate on relevant sounds in the acoustic environment is crucial for everyday function and communication. Converging lines of evidence suggest that transient functional changes in auditory-cortex neurons, "short-term plasticity", might explain this fundamental function. Under conditions of strongly focused attention, enhanced processing of attended sounds can take place at very early latencies (~50 ms from sound onset) in primary auditory cortex and possibly even at earlier latencies in subcortical structures. More robust selective-attention short-term plasticity is manifested as modulation of responses peaking at ~100 ms from sound onset in functionally specialized nonprimary auditory-cortical areas, by way of stimulus-specific reshaping of neuronal receptive fields that supports filtering of selectively attended sound features from task-irrelevant ones. Such effects have been shown to take hold within seconds of shifting the attentional focus. There are findings suggesting that the reshaping of neuronal receptive fields is even stronger at longer auditory-cortex response latencies (~300 ms from sound onset). These longer-latency short-term plasticity effects seem to build up more gradually, within tens of seconds after shifting the focus of attention. Importantly, some of the auditory-cortical short-term plasticity effects observed during selective attention predict enhancements in behaviorally measured sound discrimination performance.