1
Petersen SE, Seitzman BA, Nelson SM, Wig GS, Gordon EM. Principles of cortical areas and their implications for neuroimaging. Neuron 2024; 112:2837-2853. PMID: 38834069. DOI: 10.1016/j.neuron.2024.05.008.
Abstract
Cortical organization should constrain the study of how the brain performs behavior and cognition. A fundamental concept in cortical organization is that of arealization: that the cortex is parceled into discrete areas. In part one of this report, we review how non-human animal studies have illuminated principles of cortical arealization by revealing: (1) what defines a cortical area, (2) how cortical areas are formed, (3) how cortical areas interact with one another, and (4) what "computations" or "functions" areas perform. In part two, we discuss how these principles apply to neuroimaging research. In doing so, we highlight several examples where the commonly accepted interpretation of neuroimaging observations requires assumptions that violate the principles of arealization, including nonstationary areas that move on short time scales, large-scale gradients as organizing features, and cortical areas with singular functionality that perfectly map psychological constructs. Our belief is that principles of neurobiology should strongly guide the nature of computational explanations.
Affiliation(s)
- Steven E Petersen
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Neurology, Washington University School of Medicine, St. Louis, MO 63110, USA; Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA; Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO 63130, USA; Department of Pediatrics, Washington University School of Medicine, St. Louis, MO 63110, USA
- Benjamin A Seitzman
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
- Steven M Nelson
- Department of Pediatrics, University of Minnesota Medical School, Minneapolis, MN 55455, USA; Masonic Institute for the Developing Brain, University of Minnesota, Minneapolis, MN 55455, USA
- Gagan S Wig
- Center for Vital Longevity, School of Behavioral and Brain Sciences, University of Texas at Dallas, Dallas, TX 75235, USA; Department of Psychiatry, University of Texas Southwestern Medical Center, Dallas, TX 75390, USA
- Evan M Gordon
- Mallinckrodt Institute of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
2
Hakonen M, Dahmani L, Lankinen K, Ren J, Barbaro J, Blazejewska A, Cui W, Kotlarz P, Li M, Polimeni JR, Turpin T, Uluç I, Wang D, Liu H, Ahveninen J. Individual connectivity-based parcellations reflect functional properties of human auditory cortex. bioRxiv 2024:2024.01.20.576475. PMID: 38293021; PMCID: PMC10827228. DOI: 10.1101/2024.01.20.576475.
Abstract
Neuroimaging studies of the functional organization of human auditory cortex have focused on group-level analyses to identify tendencies that represent the typical brain. Here, we mapped auditory areas of the human superior temporal cortex (STC) in 30 participants by combining functional network analysis and 1-mm isotropic resolution 7T functional magnetic resonance imaging (fMRI). Two resting-state fMRI sessions and one or two auditory and audiovisual speech localizer sessions were collected on 3-4 separate days. We generated a set of functional network-based parcellations from these data. Solutions with 4, 6, and 11 networks were selected for closer examination based on local maxima of Dice and Silhouette values. The resulting parcellation of auditory cortices showed high intraindividual reproducibility both between resting-state sessions (Dice coefficient: 69-78%) and between resting-state and task sessions (Dice coefficient: 62-73%). This demonstrates that auditory areas in STC can be reliably segmented into functional subareas. Interindividual variability was significantly larger than intraindividual variability (Dice coefficient: 57-68%; p < 0.001), indicating that the parcellations also captured meaningful interindividual variability. The individual-specific parcellations yielded the highest alignment with task response topographies, suggesting that individual variability in parcellations reflects individual variability in auditory function. Connectional homogeneity within networks was also highest for the individual-specific parcellations. Furthermore, the similarity of the functional parcellations was not explainable by the similarity of macroanatomical properties of the auditory cortex. Our findings suggest that individual-level parcellations capture meaningful idiosyncrasies in auditory cortex organization.
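The reproducibility figures quoted in this abstract are Dice overlaps between parcellation label maps. As an illustrative sketch (not the authors' code; the function and variable names here are ours), a label-averaged Dice overlap between two parcellations of the same vertices can be computed as:

```python
def dice_coefficient(labels_a, labels_b):
    """Mean Dice overlap between two parcellations of the same vertices.

    Each argument assigns a network label to every vertex; the score is
    averaged over the labels present in both parcellations.
    """
    shared_labels = set(labels_a) & set(labels_b)
    scores = []
    for label in shared_labels:
        a = {i for i, v in enumerate(labels_a) if v == label}
        b = {i for i, v in enumerate(labels_b) if v == label}
        # Dice = 2 * |A ∩ B| / (|A| + |B|) for this label's vertex sets
        scores.append(2 * len(a & b) / (len(a) + len(b)))
    return sum(scores) / len(scores)

# Toy 10-vertex "parcellations" from two sessions, with network labels 1 and 2:
session1 = [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
session2 = [1, 1, 1, 1, 2, 2, 2, 2, 2, 1]
print(dice_coefficient(session1, session2))  # 0.8
```

Identical parcellations score 1.0 on this scale, so the between-session values of 69-78% reported above correspond to Dice scores of 0.69-0.78.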
Affiliation(s)
- M Hakonen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- L Dahmani
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- K Lankinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- J Ren
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- J Barbaro
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- A Blazejewska
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- W Cui
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- P Kotlarz
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- M Li
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- J R Polimeni
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Harvard-MIT Program in Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- T Turpin
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- I Uluç
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- D Wang
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- H Liu
- Division of Brain Sciences, Changping Laboratory, Beijing, China
- Biomedical Pioneering Innovation Center (BIOPIC), Peking University, Beijing, China
- J Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
3
Sankaran N, Leonard MK, Theunissen F, Chang EF. Encoding of melody in the human auditory cortex. Sci Adv 2024; 10:eadk0010. PMID: 38363839; PMCID: PMC10871532. DOI: 10.1126/sciadv.adk0010.
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception varies along several pitch-based dimensions: (i) the absolute pitch of notes, (ii) the difference in pitch between successive notes, and (iii) the statistical expectation of each note given prior context. How the brain represents these dimensions and whether their encoding is specialized for music remain unknown. We recorded high-density neurophysiological activity directly from the human auditory cortex while participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial map for representing distinct melodic dimensions. The same participants listened to spoken English, and we compared responses to music and speech. Cortical sites selective for music encoded expectation, while sites that encoded pitch and pitch-change in music used the same neural code to represent equivalent properties of speech. Findings reveal how the perception of melody recruits both music-specific and general-purpose sound representations.
Affiliation(s)
- Narayan Sankaran
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Matthew K. Leonard
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Frederic Theunissen
- Department of Psychology, University of California, Berkeley, 2121 Berkeley Way, Berkeley, CA 94720, USA
- Edward F. Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
4
Sankaran N, Leonard MK, Theunissen F, Chang EF. Encoding of melody in the human auditory cortex. bioRxiv 2023:2023.10.17.562771. PMID: 37905047; PMCID: PMC10614915. DOI: 10.1101/2023.10.17.562771.
Abstract
Melody is a core component of music in which discrete pitches are serially arranged to convey emotion and meaning. Perception of melody varies along several pitch-based dimensions: (1) the absolute pitch of notes, (2) the difference in pitch between successive notes, and (3) the higher-order statistical expectation of each note conditioned on its prior context. While humans readily perceive melody, how these dimensions are collectively represented in the brain and whether their encoding is specialized for music remains unknown. Here, we recorded high-density neurophysiological activity directly from the surface of human auditory cortex while Western participants listened to Western musical phrases. Pitch, pitch-change, and expectation were selectively encoded at different cortical sites, indicating a spatial code for representing distinct dimensions of melody. The same participants listened to spoken English, and we compared evoked responses to music and speech. Cortical sites selective for music were systematically driven by the encoding of expectation. In contrast, sites that encoded pitch and pitch-change used the same neural code to represent equivalent properties of speech. These findings reveal the multidimensional nature of melody encoding, consisting of both music-specific and domain-general sound representations in auditory cortex.
Teaser: The human brain contains both general-purpose and music-specific neural populations for processing distinct attributes of melody.
5
Rolls ET, Rauschecker JP, Deco G, Huang CC, Feng J. Auditory cortical connectivity in humans. Cereb Cortex 2023; 33:6207-6227. PMID: 36573464; PMCID: PMC10422925. DOI: 10.1093/cercor/bhac496.
Abstract
To understand auditory cortical processing, the effective connectivity between 15 auditory cortical regions and 360 cortical regions was measured in 171 Human Connectome Project participants, and complemented with functional connectivity and diffusion tractography. 1. A hierarchy of auditory cortical processing was identified from Core regions (including A1) to Belt regions LBelt, MBelt, and 52; then to PBelt; and then to HCP A4. 2. A4 has connectivity to anterior temporal lobe TA2, and to HCP A5, which connects to dorsal-bank superior temporal sulcus (STS) regions STGa, STSda, and STSdp. These STS regions also receive visual inputs about moving faces and objects, which are combined with auditory information to help implement multimodal object identification, such as who is speaking, and what is being said. Consistent with this being a "what" ventral auditory stream, these STS regions then have effective connectivity to TPOJ1, STV, PSL, TGv, TGd, and PGi, which are language-related semantic regions connecting to Broca's area, especially BA45. 3. A4 and A5 also have effective connectivity to MT and MST, which connect to superior parietal regions forming a dorsal auditory "where" stream involved in actions in space. Connections of PBelt, A4, and A5 with BA44 may form a language-related dorsal stream.
Affiliation(s)
- Edmund T Rolls
- Oxford Centre for Computational Neuroscience, Oxford, UK
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University Medical Center, Washington, DC 20057, USA
- Institute for Advanced Study, Technical University, Munich, Germany
- Gustavo Deco
- Center for Brain and Cognition, Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
- Institució Catalana de la Recerca i Estudis Avançats (ICREA), Universitat Pompeu Fabra, Passeig Lluís Companys 23, Barcelona 08010, Spain
- Chu-Chung Huang
- Shanghai Key Laboratory of Brain Functional Genomics (Ministry of Education), School of Psychology and Cognitive Science, East China Normal University, Shanghai 200602, China
- Jianfeng Feng
- Department of Computer Science, University of Warwick, Coventry CV4 7AL, UK
- Institute of Science and Technology for Brain Inspired Intelligence, Fudan University, Shanghai 200403, China
6
Ahveninen J, Uluç I, Raij T, Nummenmaa A, Mamashli F. Spectrotemporal content of human auditory working memory represented in functional connectivity patterns. Commun Biol 2023; 6:294. PMID: 36941477; PMCID: PMC10027691. DOI: 10.1038/s42003-023-04675-8.
Abstract
Recent research suggests that working memory (WM), the mental sketchpad underlying thinking and communication, is maintained by multiple regions throughout the brain. Whether parts of a stable WM representation could be distributed across these brain regions is, however, an open question. We addressed this question by examining the content-specificity of connectivity-pattern matrices between subparts of cortical regions-of-interest (ROI). These connectivity patterns were calculated from functional MRI obtained during a ripple-sound auditory WM task. Statistical significance was assessed by comparing the decoding results to a null distribution derived from a permutation test considering all comparable two- to four-ROI connectivity patterns. Maintained WM items could be decoded from connectivity patterns across ROIs in frontal, parietal, and superior temporal cortices. All functional connectivity patterns that were specific to maintained sound content extended from early auditory to frontoparietal cortices. Our results demonstrate that WM maintenance is supported by content-specific patterns of functional connectivity across different levels of cortical hierarchy.
Affiliation(s)
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Işıl Uluç
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Tommi Raij
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Aapo Nummenmaa
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
- Department of Radiology, Harvard Medical School, Boston, MA, USA
7
Benner J, Reinhardt J, Christiner M, Wengenroth M, Stippich C, Schneider P, Blatow M. Temporal hierarchy of cortical responses reflects core-belt-parabelt organization of auditory cortex in musicians. Cereb Cortex 2023:7030622. PMID: 36786655. DOI: 10.1093/cercor/bhad020.
Abstract
Human auditory cortex (AC) organization resembles the core-belt-parabelt organization in nonhuman primates. Previous studies mostly assessed spatial characteristics; temporal aspects have so far received little attention. We employed co-registration of functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) in musicians with and without absolute pitch (AP) to achieve spatial and temporal segregation of human auditory responses. First, individual fMRI activations induced by complex harmonic tones were consistently identified in four distinct regions-of-interest within AC, namely in medial Heschl's gyrus (HG), lateral HG, anterior superior temporal gyrus (STG), and planum temporale (PT). Second, we analyzed the temporal dynamics of individual MEG responses at the locations of the corresponding fMRI activations. In the AP group, the auditory evoked P2 onset occurred ~25 ms earlier in the right as compared with the left PT and ~15 ms earlier in the right as compared with the left anterior STG. This effect was consistent at the individual level and correlated with AP proficiency. Based on the combined application of MEG and fMRI measurements, we were able for the first time to demonstrate a characteristic temporal hierarchy ("chronotopy") of human auditory regions in relation to specific auditory abilities, reflecting the prediction of serial processing from nonhuman studies.
Affiliation(s)
- Jan Benner
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Julia Reinhardt
- Department of Cardiology and Cardiovascular Research Institute Basel (CRIB), University Hospital Basel, University of Basel, Basel, Switzerland
- Department of Orthopedic Surgery and Traumatology, University Hospital Basel, University of Basel, Basel, Switzerland
- Markus Christiner
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Martina Wengenroth
- Department of Neuroradiology, University Medical Center Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
- Christoph Stippich
- Department of Neuroradiology and Radiology, Kliniken Schmieder, Allensbach, Germany
- Peter Schneider
- Department of Neuroradiology and Section of Biomagnetism, University of Heidelberg Hospital, Heidelberg, Germany
- Centre for Systematic Musicology, University of Graz, Graz, Austria
- Department of Musicology, Vitols Jazeps Latvian Academy of Music, Riga, Latvia
- Maria Blatow
- Section of Neuroradiology, Department of Radiology and Nuclear Medicine, Neurocenter, Cantonal Hospital Lucerne, University of Lucerne, Lucerne, Switzerland
8
Jaroszynski C, Amorim-Leite R, Deman P, Perrone-Bertolotti M, Chabert F, Job-Chapron AS, Minotti L, Hoffmann D, David O, Kahane P. Brain mapping of auditory hallucinations and illusions induced by direct intracortical electrical stimulation. Brain Stimul 2022; 15:1077-1087. PMID: 35952963. DOI: 10.1016/j.brs.2022.08.002.
Abstract
BACKGROUND: The exact architecture of the human auditory cortex remains a subject of debate, with discrepancies between functional and microstructural studies. In a hierarchical framework for sensory perception, simple sound perception is expected to take place in the primary auditory cortex, while the processing of complex, or more integrated, perceptions is proposed to rely on associative and higher-order cortices. OBJECTIVES: We hypothesize that auditory symptoms induced by direct electrical stimulation (DES) offer a window into the architecture of the brain networks involved in auditory hallucinations and illusions. The intracranial recordings of these evoked perceptions of varying levels of integration provide the evidence to discuss the theoretical model. METHODS: We analyzed SEEG recordings from 50 epileptic patients presenting auditory symptoms induced by DES. First, using the Juelich cytoarchitectonic parcellation, we quantified which regions induced auditory symptoms when stimulated (ROI approach). Then, for each evoked auditory symptom type (illusion or hallucination), we mapped the cortical networks showing concurrent high-frequency activity modulation (HFA approach). RESULTS: Although on average illusions were found more laterally and hallucinations more posteromedially in the temporal lobe, both perceptions were elicited at all levels of the sensory hierarchy, with mixed responses found in the overlap. The spatial range was larger for illusions in both the ROI and HFA approaches. The limbic system was specific to the hallucinations network, and the inferior parietal lobule was specific to the illusions network. DISCUSSION: Our results confirm a network-based organization underlying conscious sound perception, for both simple and complex components. While symptom localization is interesting from an epilepsy semiology perspective, the hallucination-specific modulation of the limbic system is particularly relevant to tinnitus and schizophrenia.
Affiliation(s)
- Chloé Jaroszynski
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Ricardo Amorim-Leite
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Pierre Deman
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Marcela Perrone-Bertolotti
- Univ. Grenoble Alpes, CNRS, UMR5105, Laboratoire Psychologie et NeuroCognition, LPNC, 38000, Grenoble, France
- Florian Chabert
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Anne-Sophie Job-Chapron
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Lorella Minotti
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Dominique Hoffmann
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Olivier David
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
- Aix Marseille Univ, Inserm, INS, Institut de Neurosciences des Systèmes, Marseille, France
- Philippe Kahane
- Univ. Grenoble Alpes, CHU Grenoble Alpes, Inserm, U1216, Grenoble Institut Neurosciences, GIN, 38000, Grenoble, France
9
Sakakura K, Sonoda M, Mitsuhashi T, Kuroda N, Firestone E, O'Hara N, Iwaki H, Lee MH, Jeong JW, Rothermel R, Luat AF, Asano E. Developmental organization of neural dynamics supporting auditory perception. Neuroimage 2022; 258:119342. PMID: 35654375; PMCID: PMC9354710. DOI: 10.1016/j.neuroimage.2022.119342.
Abstract
Purpose: A prominent view of language acquisition involves learning to ignore irrelevant auditory signals through functional reorganization, enabling more efficient processing of relevant information. Yet, few studies have characterized the neural spatiotemporal dynamics supporting rapid detection and subsequent disregard of irrelevant auditory information, in the developing brain. To address this unknown, the present study modeled the developmental acquisition of cost-efficient neural dynamics for auditory processing, using intracranial electrocorticographic responses measured in individuals receiving standard-of-care treatment for drug-resistant, focal epilepsy. We also provided evidence demonstrating the maturation of an anterior-to-posterior functional division within the superior-temporal gyrus (STG), which is known to exist in the adult STG. Methods: We studied 32 patients undergoing extraoperative electrocorticography (age range: eight months to 28 years) and analyzed 2,039 intracranial electrode sites outside the seizure onset zone, interictal spike-generating areas, and MRI lesions. Patients were given forward (normal) speech sounds, backward-played speech sounds, and signal-correlated noises during a task-free condition. We then quantified sound processing-related neural costs at given time windows using high-gamma amplitude at 70-110 Hz and animated the group-level high-gamma dynamics on a spatially normalized three-dimensional brain surface. Finally, we determined if age independently contributed to high-gamma dynamics across brain regions and time windows. Results: Group-level analysis of noise-related neural costs in the STG revealed developmental enhancement of early high-gamma augmentation and diminution of delayed augmentation. Analysis of speech-related high-gamma activity demonstrated an anterior-to-posterior functional parcellation in the STG. The left anterior STG showed sustained augmentation throughout stimulus presentation, whereas the left posterior STG showed transient augmentation after stimulus onset. We found a double dissociation between the locations and developmental changes in speech sound-related high-gamma dynamics. Early left anterior STG high-gamma augmentation (i.e., within 200 ms post-stimulus onset) showed developmental enhancement, whereas delayed left posterior STG high-gamma augmentation declined with development. Conclusions: Our observations support the model that, with age, the human STG refines neural dynamics to rapidly detect and subsequently disregard uninformative acoustic noises. Our study also supports the notion that the anterior-to-posterior functional division within the left STG is gradually strengthened for efficient speech sound perception after birth.
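The "high-gamma amplitude at 70-110 Hz" used in this study is typically obtained by band-passing the signal and taking the magnitude of its analytic signal. A minimal FFT-based sketch (ours, not the authors' pipeline; assumes a 1-D signal and a known sampling rate `fs`):

```python
import numpy as np

def high_gamma_amplitude(signal, fs, band=(70.0, 110.0)):
    """Illustrative high-gamma envelope: FFT band-pass to `band` Hz,
    then analytic-signal magnitude (Hilbert via one-sided spectrum)."""
    n = len(signal)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    spectrum = np.fft.fft(signal)
    # Keep only positive frequencies inside the band and double them:
    # the inverse FFT then yields the analytic signal of the band-passed data.
    mask = (freqs >= band[0]) & (freqs <= band[1])
    analytic_spectrum = np.zeros_like(spectrum)
    analytic_spectrum[mask] = 2.0 * spectrum[mask]
    return np.abs(np.fft.ifft(analytic_spectrum))
```

For a pure 90 Hz tone this returns an envelope near the tone's amplitude, while a 30 Hz tone yields an envelope near zero; production pipelines would instead use FIR/IIR filtering with edge handling on continuous recordings.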
Affiliation(s)
- Kazuki Sakakura
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Neurosurgery, University of Tsukuba, Tsukuba, 3058575, Japan
- Masaki Sonoda
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Neurosurgery, Yokohama City University, Yokohama, Kanagawa, 2360004, Japan
- Takumi Mitsuhashi
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Neurosurgery, Juntendo University, School of Medicine, Tokyo, 1138421, Japan
- Naoto Kuroda
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Ethan Firestone
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Physiology, Wayne State University, Detroit, MI 48201, USA
- Nolan O'Hara
- Translational Neuroscience Program, Wayne State University, Detroit, Michigan, 48201, USA
- Hirotaka Iwaki
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Epileptology, Tohoku University Graduate School of Medicine, Sendai, 9808575, Japan
- Min-Hee Lee
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Jeong-Won Jeong
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Neurology, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Translational Neuroscience Program, Wayne State University, Detroit, Michigan, 48201, USA
- Robert Rothermel
- Department of Psychiatry, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Aimee F Luat
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Neurology, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Pediatrics, Central Michigan University, Mt. Pleasant, MI 48858, USA
- Eishi Asano
- Department of Pediatrics, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Department of Neurology, Children's Hospital of Michigan, Detroit Medical Center, Wayne State University, Detroit, Michigan, 48201, USA
- Translational Neuroscience Program, Wayne State University, Detroit, Michigan, 48201, USA
10
Sereno MI, Sood MR, Huang RS. Topological Maps and Brain Computations From Low to High. Front Syst Neurosci 2022; 16:787737. [PMID: 35747394 PMCID: PMC9210993 DOI: 10.3389/fnsys.2022.787737] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 03/29/2022] [Indexed: 01/02/2023] Open
Abstract
We first briefly summarize data from microelectrode studies on visual maps in non-human primates and other mammals, and characterize differences among the features of the approximately topological maps in the three main sensory modalities. We then explore the almost 50% of human neocortex that contains straightforward topological visual, auditory, and somatomotor maps by presenting a new parcellation as well as a movie atlas of cortical area maps on the FreeSurfer average surface, fsaverage. Third, we review data on moveable map phenomena as well as a recent study showing that cortical activity during sensorimotor actions may involve spatially locally coherent traveling wave and bump activity. Finally, by analogy with remapping phenomena and sensorimotor activity, we speculate briefly on the testable possibility that coherent localized spatial activity patterns might be able to ‘escape’ from topologically mapped cortex during ‘serial assembly of content’ operations such as scene and language comprehension, to form composite ‘molecular’ patterns that can move across some cortical areas and possibly return to topologically mapped cortex to generate motor output there.
Affiliation(s)
- Martin I. Sereno
- Department of Psychology, San Diego State University, San Diego, CA, United States
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Mariam Reeny Sood
- Department of Psychological Sciences, Birkbeck, University of London, London, United Kingdom
- Ruey-Song Huang
- Centre for Cognitive and Brain Sciences, University of Macau, Macau, Macao SAR, China
11
Pürner D, Schirkonyer V, Janssen T. Changes in the peripheral and central auditory performance in the elderly—A cross‐sectional study. J Neurosci Res 2022; 100:1791-1811. [DOI: 10.1002/jnr.25068] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2021] [Revised: 01/02/2022] [Accepted: 05/06/2022] [Indexed: 12/22/2022]
Affiliation(s)
- Dominik Pürner
- Department of Otorhinolaryngology, Experimental Audiology, University Hospital rechts der Isar of the Technical University of Munich, Munich, Germany
- Department of Neurology, University Hospital rechts der Isar of the Technical University of Munich, Munich, Germany
- Volker Schirkonyer
- Department of Otorhinolaryngology, Experimental Audiology, University Hospital rechts der Isar of the Technical University of Munich, Munich, Germany
- Thomas Janssen
- Department of Otorhinolaryngology, Experimental Audiology, University Hospital rechts der Isar of the Technical University of Munich, Munich, Germany
12
Novitskiy N, Maggu AR, Lai CM, Chan PHY, Wong KHY, Lam HS, Leung TY, Leung TF, Wong PCM. Early Development of Neural Speech Encoding Depends on Age but Not Native Language Status: Evidence From Lexical Tone. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2022; 3:67-86. [PMID: 37215329 PMCID: PMC10178623 DOI: 10.1162/nol_a_00049] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/21/2020] [Accepted: 06/22/2021] [Indexed: 05/24/2023]
Abstract
We investigated the development of early-latency and long-latency brain responses to native and non-native speech to shed light on the neurophysiological underpinnings of perceptual narrowing and early language development. Specifically, we postulated a two-level process to explain the decrease in sensitivity to non-native phonemes toward the end of infancy. Neurons at the earlier stages of the ascending auditory pathway mature rapidly during infancy facilitating the encoding of both native and non-native sounds. This growth enables neurons at the later stages of the auditory pathway to assign phonological status to speech according to the infant's native language environment. To test this hypothesis, we collected early-latency and long-latency neural responses to native and non-native lexical tones from 85 Cantonese-learning children aged between 23 days and 24 months, 16 days. As expected, a broad range of presumably subcortical early-latency neural encoding measures grew rapidly and substantially during the first two years for both native and non-native tones. By contrast, long-latency cortical electrophysiological changes occurred on a much slower scale and showed sensitivity to nativeness at around six months. Our study provided a comprehensive understanding of early language development by revealing the complementary roles of earlier and later stages of speech processing in the developing brain.
Affiliation(s)
- Nikolay Novitskiy
- Department of Linguistics and Modern Languages, Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- Akshay R. Maggu
- Department of Linguistics and Modern Languages, Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- O-lab, Duke Psychology and Neuroscience, Duke University, Durham, NC, USA
- Ching Man Lai
- Department of Linguistics and Modern Languages, Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- Peggy H. Y. Chan
- Department of Linguistics and Modern Languages, Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- Department of Paediatrics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Kay H. Y. Wong
- Department of Linguistics and Modern Languages, Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
- Hugh Simon Lam
- Department of Paediatrics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Tak Yeung Leung
- Department of Obstetrics and Gynaecology, The Chinese University of Hong Kong, Hong Kong SAR, China
- Ting Fan Leung
- Department of Paediatrics, The Chinese University of Hong Kong, Hong Kong SAR, China
- Patrick C. M. Wong
- Department of Linguistics and Modern Languages, Brain and Mind Institute, The Chinese University of Hong Kong, Hong Kong SAR, China
13
Dheerendra P, Baumann S, Joly O, Balezeau F, Petkov CI, Thiele A, Griffiths TD. The Representation of Time Windows in Primate Auditory Cortex. Cereb Cortex 2021; 32:3568-3580. [PMID: 34875029 PMCID: PMC9376871 DOI: 10.1093/cercor/bhab434] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2020] [Revised: 11/04/2021] [Accepted: 11/05/2021] [Indexed: 11/13/2022] Open
Abstract
Whether human and nonhuman primates process the temporal dimension of sound similarly remains an open question. We examined the brain basis for the processing of acoustic time windows in rhesus macaques using stimuli simulating the spectrotemporal complexity of vocalizations. We conducted functional magnetic resonance imaging in awake macaques to identify the functional anatomy of response patterns to different time windows. We then contrasted it against the responses to identical stimuli used previously in humans. Despite a similar overall pattern, ranging from the processing of shorter time windows in core areas to longer time windows in lateral belt and parabelt areas, monkeys exhibited lower sensitivity to longer time windows than humans. This difference in neuronal sensitivity might be explained by a specialization of the human brain for processing longer time windows in speech.
Affiliation(s)
- Pradeep Dheerendra
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK; Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, G12 8QB, UK
- Simon Baumann
- National Institute of Mental Health, NIH, Bethesda, MD 20892-1148, USA; Department of Psychology, University of Turin, Torino 10124, Italy
- Olivier Joly
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Fabien Balezeau
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Alexander Thiele
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
- Timothy D Griffiths
- Biosciences Institute, Newcastle University, Newcastle upon Tyne, NE2 4HH, UK
14
Fuglsang SA, Madsen KH, Puonti O, Hjortkjær J, Siebner HR. Mapping cortico-subcortical sensitivity to 4 Hz amplitude modulation depth in human auditory system with functional MRI. Neuroimage 2021; 246:118745. [PMID: 34808364 DOI: 10.1016/j.neuroimage.2021.118745] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/14/2021] [Revised: 11/17/2021] [Accepted: 11/18/2021] [Indexed: 10/19/2022] Open
Abstract
Temporal modulations in the envelope of acoustic waveforms at rates around 4 Hz constitute a strong acoustic cue in speech and other natural sounds. It is often assumed that the ascending auditory pathway is increasingly sensitive to slow amplitude modulation (AM), but sensitivity to AM is typically considered separately for individual stages of the auditory system. Here, we used blood oxygen level dependent (BOLD) fMRI in twenty human subjects (10 male) to measure sensitivity of regional neural activity in the auditory system to 4 Hz temporal modulations. Participants were exposed to AM noise stimuli varying parametrically in modulation depth to characterize modulation-depth effects on BOLD responses. A Bayesian hierarchical modeling approach was used to model potentially nonlinear relations between AM depth and group-level BOLD responses in auditory regions of interest (ROIs). Sound stimulation activated the auditory brainstem and cortex structures in single subjects. BOLD responses to noise exposure in core and belt auditory cortices scaled positively with modulation depth. This finding was corroborated by whole-brain cluster-level inference. Sensitivity to AM depth variations was particularly pronounced in the Heschl's gyrus but also found in higher-order auditory cortical regions. None of the sound-responsive subcortical auditory structures showed a BOLD response profile that reflected the parametric variation in AM depth. The results are compatible with the notion that early auditory cortical regions play a key role in processing low-rate modulation content of sounds in the human auditory system.
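The parametric modulation-depth manipulation described in this abstract can be illustrated with a generic 4 Hz AM-noise construction (a sketch only: the sampling rate, duration, and sinusoidal envelope are illustrative assumptions, not the authors' stimulus code):

```python
import numpy as np

def am_noise(duration_s=1.0, fs=16000, rate_hz=4.0, depth=0.5, seed=0):
    """Broadband noise amplitude-modulated at `rate_hz`.

    `depth` in [0, 1]: 0 leaves the noise unmodulated,
    1 gives full modulation of the envelope.
    """
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)                 # noise carrier
    envelope = 1.0 + depth * np.sin(2 * np.pi * rate_hz * t)
    return carrier * envelope

x = am_noise(depth=0.8)   # one second of 4 Hz AM noise
```

Varying `depth` across trials while holding the carrier statistics fixed is what allows the BOLD response to be modeled as a (possibly nonlinear) function of modulation depth alone.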
Affiliation(s)
- Søren A Fuglsang
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark
- Kristoffer H Madsen
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark
- Oula Puonti
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Jens Hjortkjær
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Health Technology, Technical University of Denmark, Kgs. Lyngby, Denmark
- Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Amager and Hvidovre, Hvidovre, Denmark; Department of Neurology, Copenhagen University Hospital Bispebjerg and Frederiksberg, Copenhagen, Denmark; Department of Clinical Medicine, Faculty of Medical and Health Sciences, University of Copenhagen, Copenhagen, Denmark
15
Hamilton LS, Oganian Y, Hall J, Chang EF. Parallel and distributed encoding of speech across human auditory cortex. Cell 2021; 184:4626-4639.e13. [PMID: 34411517 PMCID: PMC8456481 DOI: 10.1016/j.cell.2021.07.019] [Citation(s) in RCA: 102] [Impact Index Per Article: 25.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2020] [Revised: 02/11/2021] [Accepted: 07/19/2021] [Indexed: 12/27/2022]
Abstract
Speech perception is thought to rely on a cortical feedforward serial transformation of acoustic into linguistic representations. Using intracranial recordings across the entire human auditory cortex, electrocortical stimulation, and surgical ablation, we show that cortical processing across areas is not consistent with a serial hierarchical organization. Instead, response latency and receptive field analyses demonstrate parallel and distinct information processing in the primary and nonprimary auditory cortices. This functional dissociation was also observed with stimulation: stimulation of the primary auditory cortex evoked auditory hallucinations but did not distort or interfere with speech perception. Opposite effects were observed during stimulation of nonprimary cortex in the superior temporal gyrus. Ablation of the primary auditory cortex did not affect speech perception. These results establish a distributed functional organization of parallel information processing throughout the human auditory cortex and demonstrate an essential independent role for nonprimary auditory cortex in speech processing.
Affiliation(s)
- Liberty S Hamilton
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Yulia Oganian
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Jeffery Hall
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC H3A 2B4, Canada
- Edward F Chang
- Department of Neurological Surgery, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
16
Khalighinejad B, Patel P, Herrero JL, Bickel S, Mehta AD, Mesgarani N. Functional characterization of human Heschl's gyrus in response to natural speech. Neuroimage 2021; 235:118003. [PMID: 33789135 PMCID: PMC8608271 DOI: 10.1016/j.neuroimage.2021.118003] [Citation(s) in RCA: 22] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2020] [Revised: 03/23/2021] [Accepted: 03/25/2021] [Indexed: 01/11/2023] Open
Abstract
Heschl's gyrus (HG) is a brain area that includes the primary auditory cortex in humans. Due to the limitations in obtaining direct neural measurements from this region during naturalistic speech listening, the functional organization and the role of HG in speech perception remain uncertain. Here, we used intracranial EEG to directly record neural activity in HG in eight neurosurgical patients as they listened to continuous speech stories. We studied the spatial distribution of acoustic tuning and the organization of linguistic feature encoding. We found a main gradient of change from posteromedial to anterolateral parts of HG. We also observed a decrease in frequency and temporal modulation tuning and an increase in phonemic representation, speaker normalization, speech sensitivity, and response latency. We did not observe a difference between the two brain hemispheres. These findings reveal a functional role for HG in processing and transforming simple to complex acoustic features and inform neurophysiological models of speech processing in the human auditory cortex.
Affiliation(s)
- Bahar Khalighinejad
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Prachi Patel
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States
- Jose L. Herrero
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Stephan Bickel
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Ashesh D. Mehta
- Hofstra Northwell School of Medicine, Manhasset, NY, United States; The Feinstein Institutes for Medical Research, Manhasset, NY, United States
- Nima Mesgarani
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, United States; Department of Electrical Engineering, Columbia University, New York, NY, United States (corresponding author)
17
Mapping the human auditory cortex using spectrotemporal receptive fields generated with magnetoencephalography. Neuroimage 2021; 238:118222. [PMID: 34058330 DOI: 10.1016/j.neuroimage.2021.118222] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2021] [Revised: 05/25/2021] [Accepted: 05/28/2021] [Indexed: 11/24/2022] Open
Abstract
We present a novel method to map the functional organization of the human auditory cortex noninvasively using magnetoencephalography (MEG). More specifically, this method estimates via reverse correlation the spectrotemporal receptive fields (STRF) in response to a temporally dense pure tone stimulus, from which important spectrotemporal characteristics of neuronal processing can be extracted and mapped back onto the cortex surface. We show that several neuronal populations can be identified by examining the spectrotemporal characteristics of their STRFs, and demonstrate how these can be used to generate tonotopic gradient maps. In doing so, we show that the spatial resolution of MEG is sufficient to reliably extract important information about the spatial organization of the auditory cortex, while enabling the analysis of complex temporal dynamics of auditory processing such as best temporal modulation rate and response latency given its excellent temporal resolution. Furthermore, because spectrotemporally dense auditory stimuli can be used with MEG, the time required to acquire the necessary data to generate tonotopic maps is significantly less for MEG than for other neuroimaging tools that acquire BOLD-like signals.
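Reverse correlation of the kind referenced here can be sketched as a response-weighted average of the preceding stimulus spectrogram. A toy illustration on synthetic data (the array shapes and the simple normalization are simplifying assumptions, not the authors' MEG pipeline):

```python
import numpy as np

def strf_reverse_correlation(spec, resp, n_lags=20):
    """Spectrotemporal receptive field by reverse correlation.

    spec : (n_freq, n_time) stimulus spectrogram
    resp : (n_time,) response time course
    Returns (n_freq, n_lags): the response-weighted average of the
    spectrogram over the `n_lags` bins preceding each response sample.
    """
    n_freq, n_time = spec.shape
    strf = np.zeros((n_freq, n_lags))
    for lag in range(n_lags):
        # correlate the response with the stimulus `lag` bins earlier
        strf[:, lag] = spec[:, : n_time - lag] @ resp[lag:]
    return strf / resp.sum()

# toy check: drive the "response" from frequency channel 3 with a 5-bin delay
rng = np.random.default_rng(0)
spec = rng.random((8, 500))
resp = np.zeros(500)
resp[5:] = spec[3, :-5]
strf = strf_reverse_correlation(spec, resp, n_lags=10)
```

The STRF peak then falls at the driving channel and delay, and per-source best frequencies extracted this way are what a tonotopic gradient map is built from.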
18
Nakai T, Koide-Majima N, Nishimoto S. Correspondence of categorical and feature-based representations of music in the human brain. Brain Behav 2021; 11:e01936. [PMID: 33164348 PMCID: PMC7821620 DOI: 10.1002/brb3.1936] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Revised: 09/24/2020] [Accepted: 10/21/2020] [Indexed: 01/11/2023] Open
Abstract
INTRODUCTION Humans tend to categorize auditory stimuli into discrete classes, such as animal species, language, musical instrument, and music genre. Of these, music genre is a frequently used dimension of human music preference and is determined based on the categorization of complex auditory stimuli. Neuroimaging studies have reported that the superior temporal gyrus (STG) is involved in response to general music-related features. However, there is considerable uncertainty over how discrete music categories are represented in the brain and which acoustic features are more suited for explaining such representations. METHODS We used a total of 540 music clips to examine comprehensive cortical representations and the functional organization of music genre categories. For this purpose, we applied a voxel-wise modeling approach to music-evoked brain activity measured using functional magnetic resonance imaging. In addition, we introduced a novel technique for feature-brain similarity analysis and assessed how discrete music categories are represented based on the cortical response pattern to acoustic features. RESULTS Our findings indicated distinct cortical organizations for different music genres in the bilateral STG, and they revealed representational relationships between different music genres. On comparing different acoustic feature models, we found that these representations of music genres could be explained largely by a biologically plausible spectro-temporal modulation-transfer function model. CONCLUSION Our findings have elucidated the quantitative representation of music genres in the human cortex, indicating the possibility of modeling this categorization of complex auditory stimuli based on brain activity.
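Voxel-wise encoding models of the kind referenced in the METHODS typically fit a regularized linear mapping from stimulus features to each voxel's response. A minimal ridge-regression sketch on synthetic data (dimensions and variable names are illustrative, not the authors' pipeline):

```python
import numpy as np

def fit_ridge(X, Y, alpha=1.0):
    """Ridge regression: one weight vector per voxel (one column of Y)."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

# synthetic data: 200 time points, 10 acoustic features, 50 "voxels"
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))            # feature time courses
W_true = rng.standard_normal((10, 50))        # ground-truth weights
Y = X @ W_true + 0.1 * rng.standard_normal((200, 50))
W_hat = fit_ridge(X, Y, alpha=1.0)            # (10, 50) estimated weights
```

The fitted weight matrix gives each voxel a tuning profile over the feature space; comparing predicted responses across feature models (e.g. a spectrotemporal modulation-transfer function model) is what supports the model-comparison claims above.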
Affiliation(s)
- Tomoya Nakai
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan
- Naoko Koide-Majima
- Graduate School of Frontier Biosciences, Osaka University, Suita, Japan; AI Science Research and Development Promotion Center, National Institute of Information and Communications Technology, Suita, Japan
- Shinji Nishimoto
- Center for Information and Neural Networks, National Institute of Information and Communications Technology, Suita, Japan; Graduate School of Frontier Biosciences, Osaka University, Suita, Japan; Graduate School of Medicine, Osaka University, Suita, Japan
19
Cortical voice processing is grounded in elementary sound analyses for vocalization relevant sound patterns. Prog Neurobiol 2020; 200:101982. [PMID: 33338555 DOI: 10.1016/j.pneurobio.2020.101982] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2020] [Revised: 12/05/2020] [Accepted: 12/11/2020] [Indexed: 01/31/2023]
Abstract
A subregion of the auditory cortex (AC) was proposed to selectively process voices. This selectivity of the temporal voice area (TVA) and its role in processing non-voice sounds have, however, remained elusive. For a better functional description of the TVA, we investigated its neural responses both to voice and non-voice sounds, and critically also to textural sound patterns (TSPs) that share basic features with natural sounds but that are perceptually very distant from voices. First, listening to these TSPs elicited activity in large subregions of the TVA, which was mainly driven by perceptual ratings of TSPs along a voice similarity scale. This similar TVA activity in response to TSPs might partially explain activation patterns typically observed during voice processing. Second, we reconstructed the TVA activity that is usually observed in voice processing with a linear combination of activation patterns from TSPs. An analysis of the reconstruction model weights demonstrated that the TVA similarly processes both natural voice and non-voice sounds as well as TSPs along their acoustic and perceptual features. The predominant factor in reconstructing the TVA pattern by TSPs was the perceptual voice similarity ratings. Third, a multi-voxel pattern analysis confirms that the TSPs contain sufficient sound information to explain TVA activity for voice processing. Altogether, rather than being restricted to higher-order voice processing only, the human "voice area" uses mechanisms to evaluate the perceptual and acoustic quality of non-voice sounds, and responds to the latter with a "voice-like" processing pattern when detecting some rudimentary perceptual similarity with voices.
20
Nonverbal auditory communication - Evidence for integrated neural systems for voice signal production and perception. Prog Neurobiol 2020; 199:101948. [PMID: 33189782 DOI: 10.1016/j.pneurobio.2020.101948] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2020] [Revised: 10/12/2020] [Accepted: 11/04/2020] [Indexed: 12/24/2022]
Abstract
While humans have developed a sophisticated and unique system of verbal auditory communication, they also share a more common and evolutionarily important nonverbal channel of voice signaling with many other mammalian and vertebrate species. This nonverbal communication is mediated and modulated by the acoustic properties of a voice signal, and is a powerful - yet often neglected - means of sending and perceiving socially relevant information. From the viewpoint of dyadic (involving a sender and a signal receiver) voice signal communication, we discuss the integrated neural dynamics in primate nonverbal voice signal production and perception. Most previous neurobiological models of voice communication modelled these neural dynamics from the limited perspective of either voice production or perception, largely disregarding the neural and cognitive commonalities of both functions. Taking a dyadic perspective on nonverbal communication, however, it turns out that the neural systems for voice production and perception are surprisingly similar. Based on the interdependence of both production and perception functions in communication, we first propose a re-grouping of the neural mechanisms of communication into auditory, limbic, and paramotor systems, with special consideration for a subsidiary basal-ganglia-centered system. Second, we propose that the similarity in the neural systems involved in voice signal production and perception is the result of the co-evolution of nonverbal voice production and perception systems promoted by their strong interdependence in dyadic interactions.
21
Sohoglu E, Kumar S, Chait M, Griffiths TD. Multivoxel codes for representing and integrating acoustic features in human cortex. Neuroimage 2020; 217:116661. [PMID: 32081785 PMCID: PMC7339141 DOI: 10.1016/j.neuroimage.2020.116661] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/25/2019] [Revised: 02/13/2020] [Accepted: 02/15/2020] [Indexed: 10/25/2022] Open
Abstract
Using fMRI and multivariate pattern analysis, we determined whether spectral and temporal acoustic features are represented by independent or integrated multivoxel codes in human cortex. Listeners heard band-pass noise varying in frequency (spectral) and amplitude-modulation (AM) rate (temporal) features. In the superior temporal plane, changes in multivoxel activity due to frequency were largely invariant with respect to AM rate (and vice versa), consistent with an independent representation. In contrast, in posterior parietal cortex, multivoxel representation was exclusively integrated and tuned to specific conjunctions of frequency and AM features (albeit weakly). Direct between-region comparisons show that whereas independent coding of frequency weakened with increasing levels of the hierarchy, such a progression for AM and integrated coding was less fine-grained and only evident in the higher hierarchical levels from non-core to parietal cortex (with AM coding weakening and integrated coding strengthening). Our findings support the notion that primary auditory cortex can represent spectral and temporal acoustic features in an independent fashion and suggest a role for parietal cortex in feature integration and the structuring of sensory input.
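One common way to operationalize an "independent" (feature-invariant) multivoxel code is cross-classification: train a decoder on trials from one level of the putatively irrelevant feature and test on another level. A toy nearest-centroid sketch on synthetic patterns (an illustration of the logic only, not the authors' analysis; all names and numbers are assumptions):

```python
import numpy as np

def cross_decode(train_X, train_y, test_X, test_y):
    """Nearest-centroid cross-classification.

    Above-chance accuracy when training and testing at different levels
    of a second feature implies the multivoxel code for `y` is
    invariant to (independent of) that feature.
    """
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = ((test_X[:, None, :] - centroids[None]) ** 2).sum(axis=-1)
    return (classes[dists.argmin(axis=1)] == test_y).mean()

# synthetic multivoxel patterns: frequency coded independently of AM rate
rng = np.random.default_rng(1)
n_vox = 50
freq_patterns = rng.standard_normal((2, n_vox))   # two frequency classes

def simulate(y, am_shift):
    # AM rate adds a shift that leaves the frequency code unchanged
    return freq_patterns[y] + am_shift + 0.5 * rng.standard_normal((y.size, n_vox))

y = np.repeat([0, 1], 40)
X_rate1 = simulate(y, 0.0)
X_rate2 = simulate(y, 0.3)
acc = cross_decode(X_rate1, y, X_rate2, y)        # train rate 1, test rate 2
```

An integrated code, by contrast, would show above-chance decoding of specific frequency-by-AM conjunctions but poor cross-level generalization.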
Affiliation(s)
- Ediz Sohoglu
- School of Psychology, University of Sussex, Brighton, BN1 9QH, United Kingdom
- Sukhbinder Kumar
- Institute of Neuroscience, Medical School, Newcastle University, Newcastle upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
- Maria Chait
- Ear Institute, University College London, London, United Kingdom
- Timothy D Griffiths
- Institute of Neuroscience, Medical School, Newcastle University, Newcastle upon Tyne, NE2 4HH, United Kingdom; Wellcome Trust Centre for Human Neuroimaging, University College London, London, WC1N 3BG, United Kingdom
22
Zhu J, Cui J, Cao G, Ji J, Chang X, Zhang C, Liu Y. Brain Functional Alterations in Long-term Unilateral Hearing Impairment. Acad Radiol 2020; 27:1085-1092. [PMID: 31677903 DOI: 10.1016/j.acra.2019.09.027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2019] [Revised: 09/13/2019] [Accepted: 09/18/2019] [Indexed: 11/18/2022]
Abstract
BACKGROUND The rate of patients with unilateral hearing impairments (UHI) increase with age and are characterized by asymmetric auditory afferents in which auditory information is asymmetrically transmitted to the brain. Long-term bilateral hearing imbalance can cause abnormal functional changes in the cerebral cortex. However, the relationship between functional alterations in the brain and the severity of the hearing impairment remains unclear. METHODS This study included 33 patients with UHI (left-sided impairment in 17 and right-sided impairment in 16) and 32 healthy patients. All participants underwent resting-state, blood oxygen level dependent functional magnetic resonance imaging. Fractional amplitude of low frequency fluctuation (fALFF) values were calculated after data preprocessing and compared among the left-sided and right-sided impairment groups and the control group. Pure tone audiometry was used to evaluate patients' hearing impairment level. The correlation between fALFF values of abnormal brain regions and the duration and severity of hearing impairment was analyzed. RESULTS Results provide evidence for altered resting-state functional activities in the brain of patients with left or right long-term UHI, with significantly increased fALFF values in the Heschl's gyrus, superior temporal gyrus, and insula were observed. Moreover, complicated networks reorganization involved in the visual, cognitive, sensorimotor and information transmission functions except for the auditory function and some brain regions exhibited functional changes only in the one-sided impairment group. In addition, the severity of hearing impairment is related with the functional activities in the bilateral Heschl's gyrus, bilateral insula, right superior temporal gyrus, and left middle frontal gyrus. 
CONCLUSION Alterations in functional activity are observed in the brains of patients with long-term hearing impairment, and multiple brain regions within different functional networks are involved in this functional remodeling. The reintegration mechanism appears to be asymmetrical, and the lateralization pattern in the contralateral hemisphere for auditory information processing is related to the severity of hearing impairment.
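The fALFF measure used in this study has a standard definition: the amplitude of the BOLD signal within a low-frequency band (typically 0.01–0.08 Hz) divided by the amplitude over the whole detectable frequency range. A minimal sketch of that computation on toy data (the band limits, TR, and variable names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def falff(ts, tr=2.0, band=(0.01, 0.08)):
    """Fractional ALFF: amplitude of the signal in a low-frequency band
    divided by amplitude over the whole detectable frequency range."""
    ts = np.asarray(ts, dtype=float)
    ts = ts - ts.mean()
    amp = np.abs(np.fft.rfft(ts))            # single-sided amplitude spectrum
    freqs = np.fft.rfftfreq(len(ts), d=tr)   # frequencies in Hz
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return amp[in_band].sum() / amp[1:].sum()  # skip the DC bin

# toy check: a slow (0.03 Hz) oscillation yields a higher fALFF than noise
rng = np.random.default_rng(0)
t = np.arange(200) * 2.0                     # 200 volumes at TR = 2 s
slow = np.sin(2 * np.pi * 0.03 * t) + 0.5 * rng.standard_normal(200)
noise = rng.standard_normal(200)
```

A slow oscillation concentrates spectral amplitude inside the band, so its fALFF exceeds that of white noise, whose amplitude is spread evenly across frequencies.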
Affiliation(s)
- Jianping Zhu
- Department of Imaging, Heping Hospital affiliated to Changzhi Medical College, Changzhi, PR China
- Jiangbo Cui
- Department of Imaging, Heping Hospital affiliated to Changzhi Medical College, Changzhi, PR China
- Gang Cao
- Department of Radiology, Peking University Lu'an Hospital, Changzhi, PR China
- Jianwu Ji
- Department of Imaging, Heping Hospital affiliated to Changzhi Medical College, Changzhi, PR China
- Xu Chang
- Graduate School of Changzhi Medical College, Changzhi, PR China
- Chongjie Zhang
- Department of Imaging, Yuncheng Central Hospital, Yuncheng, PR China
- Yongbo Liu
- Department of Radiology, Peking University Lu'an Hospital, Changzhi, PR China

23
Responses to Visual Speech in Human Posterior Superior Temporal Gyrus Examined with iEEG Deconvolution. J Neurosci 2020; 40:6938-6948. [PMID: 32727820 PMCID: PMC7470920 DOI: 10.1523/jneurosci.0279-20.2020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 06/01/2020] [Accepted: 06/02/2020] [Indexed: 12/22/2022] Open
Abstract
Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for multisensory speech perception since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial electroencephalography (iEEG) deconvolution. Individual stimuli always contained both auditory and visual speech, but jittering the onset asynchrony between modalities allowed for the time course of the unisensory responses and the interaction between them to be independently estimated. We applied this procedure to electrodes implanted in human epilepsy patients (both male and female) over the posterior superior temporal gyrus (pSTG), a brain area known to be important for speech perception. iEEG deconvolution revealed sustained positive responses to visual-only speech and larger, phasic responses to auditory-only speech. Confirming results from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural computation. Leveraging the spatial resolution of iEEG, we extended these results to show that subadditivity is most pronounced in more posterior aspects of the pSTG. Across electrodes, subadditivity correlated with visual responsiveness, supporting a model in which visual speech enhances the efficiency of auditory speech processing in pSTG. The ability to separate neural processes may make iEEG deconvolution useful for studying a variety of complex cognitive and perceptual tasks.SIGNIFICANCE STATEMENT Understanding speech is one of the most important human abilities. Speech perception uses information from both the auditory and visual modalities.
It has been difficult to study neural responses to visual speech because visual-only speech is difficult or impossible to comprehend, unlike auditory-only and audiovisual speech. We used intracranial electroencephalography (iEEG) deconvolution to overcome this obstacle. We found that visual speech evokes a positive response in the human posterior superior temporal gyrus, enhancing the efficiency of auditory speech processing.
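The deconvolution idea described here, jittering the audiovisual onset asynchrony so that each unisensory response becomes separately estimable even though every trial contains both modalities, can be illustrated with an ordinary least-squares FIR (finite impulse response) model. This is a toy sketch under assumed sampling rates and response shapes, not the authors' analysis code:

```python
import numpy as np

def fir_design(onsets, n_samples, n_lags):
    """FIR design matrix: one indicator column per post-onset time lag."""
    X = np.zeros((n_samples, n_lags))
    for t0 in onsets:
        for lag in range(n_lags):
            if 0 <= t0 + lag < n_samples:
                X[t0 + lag, lag] = 1.0
    return X

# Simulated recording: every trial has both an auditory and a visual onset,
# with the visual onset jittered relative to the auditory one.
rng = np.random.default_rng(1)
n = 20000                                   # samples
starts = np.arange(200, n - 400, 400)       # trial onsets
aud_on = starts
vis_on = starts + rng.integers(-30, 31, size=len(starts))

true_a = np.exp(-np.arange(50) / 10.0)      # phasic "auditory" response
true_v = np.full(50, 0.4)                   # sustained "visual" response
y = np.zeros(n)
for t0 in aud_on:
    y[t0:t0 + 50] += true_a
for t0 in vis_on:
    y[t0:t0 + 50] += true_v
y += 0.1 * rng.standard_normal(n)

# Jitter decorrelates the two regressor sets, so ordinary least squares can
# recover each unisensory response despite every trial being audiovisual.
X = np.hstack([fir_design(aud_on, n, 50), fir_design(vis_on, n, 50)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
est_a, est_v = beta[:50], beta[50:]
```

Without the jitter the auditory and visual columns would be collinear and the two responses could not be separated; with it, `est_a` and `est_v` approximate the phasic and sustained shapes used to generate the data.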
24
Erb J, Schmitt LM, Obleser J. Temporal selectivity declines in the aging human auditory cortex. eLife 2020; 9:55300. [PMID: 32618270 PMCID: PMC7410487 DOI: 10.7554/elife.55300] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2020] [Accepted: 07/02/2020] [Indexed: 12/03/2022] Open
Abstract
Current models successfully describe the auditory cortical response to natural sounds with a set of spectro-temporal features. However, these models have hardly been linked to the ill-understood neurobiological changes that occur in the aging auditory cortex. Modelling the hemodynamic response to a rich natural sound mixture in N = 64 listeners of varying age, we here show that in older listeners’ auditory cortex, the key feature of temporal rate is represented with a markedly broader tuning. This loss of temporal selectivity is most prominent in primary auditory cortex and planum temporale, with no such changes in adjacent auditory or other brain areas. Amongst older listeners, we observe a direct relationship between chronological age and temporal-rate tuning, unconfounded by auditory acuity or model goodness of fit. In line with senescent neural dedifferentiation more generally, our results highlight decreased selectivity to temporal information as a hallmark of the aging auditory cortex. It can often be difficult for an older person to understand what someone is saying, particularly in noisy environments. Exactly how and why this age-related change occurs is not clear, but it is thought that older individuals may become less able to tune in to certain features of sound. Newer tools are making it easier to study age-related changes in hearing in the brain. For example, functional magnetic resonance imaging (fMRI) can allow scientists to ‘see’ and measure how certain parts of the brain react to different features of sound. Using fMRI data, researchers can compare how younger and older people process speech. They can also track how speech processing in the brain changes with age. Now, Erb et al. show that older individuals have a harder time tuning into the rhythm of speech. In the experiments, 64 people between the ages of 18 to 78 were asked to listen to speech in a noisy setting while they underwent fMRI. 
The researchers then tested a computer model using the data. In the older individuals, the brain's tuning to the timing or rhythm of speech was broader, while the younger participants were able to tune in to this feature of sound more finely. The older a person was, the less able their brain was to distinguish rhythms in speech, likely making it harder to understand what had been said. This change likely occurs because brain cells become less specialised over time, which can contribute to many kinds of age-related cognitive decline. This new information about why understanding speech becomes more difficult with age may help scientists develop better hearing aids that are individualised to a person's specific needs.
Affiliation(s)
- Julia Erb
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany

25
Moradi V, Kheirkhah K, Farahani S, Kavianpour I. Investigating the Effects of Hearing Loss and Hearing Aid Digital Delay on Sound-Induced Flash Illusion. J Audiol Otol 2020; 24:174-179. [PMID: 32575953 PMCID: PMC7575923 DOI: 10.7874/jao.2019.00507] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2019] [Accepted: 04/28/2020] [Indexed: 11/29/2022] Open
Abstract
Background and Objectives The integration of auditory and visual speech information improves speech perception; however, if auditory input is disrupted by hearing loss, auditory and visual inputs cannot be fully integrated. Temporal coincidence of auditory and visual input is also a critically important factor in integrating the two senses, and the acoustic pathway is delayed when the signal passes through a hearing aid's digital signal processing. Therefore, this study aimed to investigate the effects of hearing loss and hearing aid digital delay on the sound-induced flash illusion. Subjects and Methods A total of 13 adults with normal hearing, 13 with mild-to-moderate hearing loss, and 13 with moderate-to-severe hearing loss were enrolled in this study. The sound-induced flash illusion test was then conducted, and the results were analyzed. Results The results showed that neither hearing aid digital delay nor hearing loss had a detrimental effect on the sound-induced flash illusion. Conclusions Transmission velocity and the neural transduction rate of auditory input are decreased in patients with hearing loss, so auditory and visual sensory inputs cannot be combined completely, although the transmission rate of auditory input was approximately normal when a hearing aid was fitted. It can thus be concluded that the processing delay in the hearing aid circuit is insufficient to disrupt the integration of auditory and visual information.
Affiliation(s)
- Vahid Moradi
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Kiana Kheirkhah
- Department of Biomedical Engineering, School of Electrical and Computer, Islamic Azad University, Tehran, Iran
- Saeid Farahani
- Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Iman Kavianpour
- Department of Telecommunication, School of Engineering Boushehr Branch, Islamic Azad University, Boushehr, Iran

26
Besle J, Mougin O, Sánchez-Panchuelo RM, Lanting C, Gowland P, Bowtell R, Francis S, Krumbholz K. Is Human Auditory Cortex Organization Compatible With the Monkey Model? Contrary Evidence From Ultra-High-Field Functional and Structural MRI. Cereb Cortex 2020; 29:410-428. [PMID: 30357410 PMCID: PMC6294415 DOI: 10.1093/cercor/bhy267] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2017] [Indexed: 11/14/2022] Open
Abstract
It is commonly assumed that the human auditory cortex is organized similarly to that of macaque monkeys, where the primary region, or "core," is elongated parallel to the tonotopic axis (main direction of tonotopic gradients), and subdivided across this axis into up to 3 distinct areas (A1, R, and RT), with separate, mirror-symmetric tonotopic gradients. This assumption, however, has not been tested until now. Here, we used high-resolution ultra-high-field (7 T) magnetic resonance imaging (MRI) to delineate the human core and map tonotopy in 24 individual hemispheres. In each hemisphere, we assessed tonotopic gradients using principled, quantitative analysis methods, and delineated the core using 2 independent (functional and structural) MRI criteria. Our results indicate that, contrary to macaques, the human core is elongated perpendicular rather than parallel to the main tonotopic axis, and that this axis contains no more than 2 mirror-reversed gradients within the core region. Previously suggested homologies between these gradients and areas A1 and R in macaques were not supported. Our findings suggest fundamental differences in auditory cortex organization between humans and macaques.
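A tonotopic-gradient analysis of the kind described, mapping each voxel's best frequency and counting mirror reversals along an axis, can be sketched on toy data. The tuning model, strip geometry, and stimulus frequencies below are illustrative assumptions, not the study's methods:

```python
import numpy as np

# Toy 1-D strip of cortex whose preferred frequency rises and then falls,
# i.e. two mirror-reversed tonotopic gradients.
freqs = np.array([200.0, 400, 800, 1600, 3200, 6400])   # stimulus tones (Hz)
pref = np.concatenate([np.linspace(0, 5, 20),           # preferred tone index
                       np.linspace(5, 0, 20)])
tones = np.arange(len(freqs))
resp = np.exp(-0.5 * (tones[None, :] - pref[:, None]) ** 2)  # Gaussian tuning

# Best-frequency map in octaves, and its local gradient along the strip.
best = np.log2(freqs[np.argmax(resp, axis=1)])
grad = np.diff(best)
s = np.sign(grad)
s = s[s != 0]
reversals = int(np.sum(s[:-1] != s[1:]))    # count of mirror reversals
```

The single sign change in the best-frequency gradient marks the mirror reversal, which is the kind of feature used to delineate adjacent tonotopic maps.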
Affiliation(s)
- Julien Besle
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK; Department of Psychology, American University of Beirut, Riad El-Solh, Beirut, Lebanon
- Olivier Mougin
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Rosa-María Sánchez-Panchuelo
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Cornelis Lanting
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK; Department of Otorhinolaryngology, Radboud University Medical Center, University of Nijmegen, Nijmegen, Netherlands
- Penny Gowland
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Richard Bowtell
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Susan Francis
- Sir Peter Mansfield Imaging Centre, School of Physics and Astronomy, University of Nottingham, University Park, Nottingham, UK
- Katrin Krumbholz
- Medical Research Council Institute of Hearing Research, School of Medicine, University of Nottingham, University Park, Nottingham, UK

27
Scarpa A, Cassandro C, Vitale C, Ralli M, Policastro A, Barone P, Cassandro E, Pellecchia MT. A comparison of auditory and vestibular dysfunction in Parkinson's disease and Multiple System Atrophy. Parkinsonism Relat Disord 2020; 71:51-57. [PMID: 32032926 DOI: 10.1016/j.parkreldis.2020.01.018] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/17/2019] [Revised: 01/27/2020] [Accepted: 01/28/2020] [Indexed: 01/24/2023]
Abstract
INTRODUCTION Vertigo and disequilibrium are common symptoms in idiopathic Parkinson's disease (PD) and in Multiple System Atrophy (MSA). Hearing loss has been recently recognized as an additional non-motor feature in PD. The aim of this study is to evaluate audio-vestibular function in patients affected by PD and MSA. METHODS Fifteen patients with PD, 16 patients with MSA and 20 age-matched healthy controls (HC) were enrolled. Audio-vestibular examination included pure-tone audiometry (PTA), vestibular bed-side examination, video Head Impulse Test (vHIT), and cervical Vestibular-Evoked Myogenic Potentials (cVEMPs). RESULTS PD and MSA patients showed worse PTA thresholds compared to HC at high frequencies. MSA patients showed worse PTA thresholds at 125 Hz compared to HC. In patients with PD, a direct correlation between disease duration and PTA thresholds was found at 2000 Hz and 4000 Hz. In patients with MSA, disease duration was directly related to PTA thresholds at 125 Hz and 250 Hz. Among PD patients, cVEMPs were absent bilaterally in 46.7% and unilaterally in 13.3% of the subjects. Among MSA patients, cVEMPs were absent bilaterally in 26.7% and unilaterally in 40% of the subjects; p13 latency was significantly increased in PD patients as compared to HC. A significant inverse relationship was found between disease duration and cVEMP amplitude in MSA patients. CONCLUSION We found that high-frequency hearing loss and cVEMP abnormalities are frequent features of both MSA and PD, suggesting that an audio-vestibular dysfunction may be present in these patients even in the absence of self-reported auditory or vestibular symptoms.
Affiliation(s)
- Alfonso Scarpa
- Department of Medicine and Surgery, University of Salerno, Salerno, Italy
- Carmine Vitale
- Department of Motor Sciences and Wellness, University Parthenope, Naples, Italy
- Massimo Ralli
- Department of Sense Organs, Sapienza University Rome, Rome, Italy
- Paolo Barone
- Neuroscience Section, Department of Medicine and Surgery, University of Salerno, Italy
- Ettore Cassandro
- Department of Medicine and Surgery, University of Salerno, Salerno, Italy

28
Joint Representation of Spatial and Phonetic Features in the Human Core Auditory Cortex. Cell Rep 2020; 24:2051-2062.e2. [PMID: 30134167 DOI: 10.1016/j.celrep.2018.07.076] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2017] [Revised: 04/09/2018] [Accepted: 07/22/2018] [Indexed: 12/12/2022] Open
Abstract
The human auditory cortex simultaneously processes speech and determines the location of a speaker in space. Neuroimaging studies in humans have implicated core auditory areas in processing the spectrotemporal and the spatial content of sound; however, how these features are represented together is unclear. We recorded directly from human subjects implanted bilaterally with depth electrodes in core auditory areas as they listened to speech from different directions. We found local and joint selectivity to spatial and spectrotemporal speech features, where the spatial and spectrotemporal features are organized independently of each other. This representation enables successful decoding of both spatial and phonetic information. Furthermore, we found that the location of the speaker does not change the spectrotemporal tuning of the electrodes but, rather, modulates their mean response level. Our findings contribute to defining the functional organization of responses in the human auditory cortex, with implications for more accurate neurophysiological models of speech processing.
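The reported dissociation, in which speaker location modulates an electrode's mean response level without changing its spectrotemporal tuning, corresponds to a simple gain model. A toy sketch (all curves and numbers here are illustrative, not the recorded data):

```python
import numpy as np

# Toy electrode with fixed spectrotemporal tuning; speaker location only
# scales the response (gain), leaving the tuning shape unchanged.
features = np.arange(8)                       # spectrotemporal feature axis
tuning = np.exp(-0.5 * ((features - 3.0) / 1.5) ** 2)

resp_left = 1.0 * tuning                      # speech from the left
resp_right = 0.6 * tuning                     # speech from the right

r = np.corrcoef(resp_left, resp_right)[0, 1]  # tuning shape: unchanged
gain = resp_right.mean() / resp_left.mean()   # mean level: modulated
```

A pure gain change leaves the correlation between the two tuning curves at 1 while shifting their mean levels, which is the signature distinguishing level modulation from a change in tuning.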
29
Staib M, Abivardi A, Bach DR. Primary auditory cortex representation of fear-conditioned musical sounds. Hum Brain Mapp 2019; 41:882-891. [PMID: 31663229 PMCID: PMC7268068 DOI: 10.1002/hbm.24846] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 09/09/2019] [Accepted: 10/09/2019] [Indexed: 01/04/2023] Open
Abstract
Auditory cortex is required for discriminative fear conditioning beyond the classical amygdala microcircuit, but its precise role is unknown. It has previously been suggested that Heschl's gyrus, which includes primary auditory cortex (A1), but also other auditory areas, encodes threat predictions during presentation of conditioned stimuli (CS) consisting of monophones, or frequency sweeps. The latter resemble natural prosody and contain discriminative spectro‐temporal information. Here, we use functional magnetic resonance imaging (fMRI) in humans to address CS encoding in A1 for stimuli that contain only spectral but no temporal discriminative information. Two musical chords (complex) or two monophone tones (simple) were presented in a signaled reinforcement context (reinforced CS+ and nonreinforced CS−), or in a different context without reinforcement (neutral sounds, NS1 and NS2), with an incidental sound detection task. CS/US association encoding was quantified by the increased discriminability of BOLD patterns evoked by CS+/CS−, compared to NS pairs with similar physical stimulus differences and task demands. A1 was defined on a single‐participant level and based on individual anatomy. We find that in A1, discriminability of CS+/CS− was higher than for NS1/NS2. This representation of unconditioned stimulus (US) prediction was of comparable magnitude for both types of sounds. We did not observe such encoding outside A1. Different from frequency sweeps investigated previously, musical chords did not share representations of US prediction with monophone sounds. To summarize, our findings suggest decodable representation of US predictions in A1, for various types of CS, including musical chords that contain no temporal discriminative information.
Affiliation(s)
- Matthias Staib
- Computational Psychiatry Research, Department of Psychiatry, Psychotherapy, and Psychosomatics, Psychiatric Hospital, University of Zurich, 8032 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich, 8057 Zurich, Switzerland
- Aslan Abivardi
- Computational Psychiatry Research, Department of Psychiatry, Psychotherapy, and Psychosomatics, Psychiatric Hospital, University of Zurich, 8032 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich, 8057 Zurich, Switzerland
- Dominik R Bach
- Computational Psychiatry Research, Department of Psychiatry, Psychotherapy, and Psychosomatics, Psychiatric Hospital, University of Zurich, 8032 Zurich, Switzerland; Neuroscience Center Zurich, University of Zurich, 8057 Zurich, Switzerland; Wellcome Centre for Human Neuroimaging, University College London, London, UK

30
Doucet GE, Luber MJ, Balchandani P, Sommer IE, Frangou S. Abnormal auditory tonotopy in patients with schizophrenia. NPJ SCHIZOPHRENIA 2019; 5:16. [PMID: 31578332 PMCID: PMC6775081 DOI: 10.1038/s41537-019-0084-x] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/08/2019] [Accepted: 08/28/2019] [Indexed: 12/19/2022]
Abstract
Auditory hallucinations are among the most prevalent and most distressing symptoms of schizophrenia. Despite significant progress, it is still unclear whether auditory hallucinations arise from abnormalities in primary sensory processing or whether they represent failures of higher-order functions. To address this knowledge gap, we capitalized on the increased spatial resolution afforded by ultra-high field imaging at 7 Tesla to investigate the tonotopic organization of the auditory cortex in patients with schizophrenia with a history of recurrent hallucinations. Tonotopy is a fundamental feature of the functional organization of the auditory cortex that is established very early in development and predates the onset of symptoms by decades. Compared to healthy participants, patients showed abnormally increased activation and altered tonotopic organization of the auditory cortex during a purely perceptual task, which involved passive listening to tones across a range of frequencies (88–8000 Hz). These findings suggest that the predisposition to auditory hallucinations is likely to be predicated on abnormalities in the functional organization of the auditory cortex and may serve as a biomarker for the early identification of vulnerable individuals.
Affiliation(s)
- Gaelle E Doucet
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Maxwell J Luber
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Priti Balchandani
- Translational and Molecular Imaging Institute, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA
- Iris E Sommer
- University Medical Center Groningen, 9713AW, Groningen, Netherlands
- Sophia Frangou
- Department of Psychiatry, Icahn School of Medicine at Mount Sinai, New York, NY, 10029, USA

31
Karas PJ, Magnotti JF, Metzger BA, Zhu LL, Smith KB, Yoshor D, Beauchamp MS. The visual speech head start improves perception and reduces superior temporal cortex responses to auditory speech. eLife 2019; 8:e48116. [PMID: 31393261 PMCID: PMC6687434 DOI: 10.7554/elife.48116] [Citation(s) in RCA: 25] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2019] [Accepted: 07/17/2019] [Indexed: 12/30/2022] Open
Abstract
Visual information about speech content from the talker's mouth is often available before auditory information from the talker's voice. Here we examined perceptual and neural responses to words with and without this visual head start. For both types of words, perception was enhanced by viewing the talker's face, but the enhancement was significantly greater for words with a head start. Neural responses were measured from electrodes implanted over auditory association cortex in the posterior superior temporal gyrus (pSTG) of epileptic patients. The presence of visual speech suppressed responses to auditory speech, more so for words with a visual head start. We suggest that the head start inhibits representations of incompatible auditory phonemes, increasing perceptual accuracy and decreasing total neural responses. Together with previous work showing visual cortex modulation (Ozker et al., 2018b) these results from pSTG demonstrate that multisensory interactions are a powerful modulator of activity throughout the speech perception network.
Affiliation(s)
- Patrick J Karas
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- John F Magnotti
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Brian A Metzger
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Lin L Zhu
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Kristen B Smith
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States
- Daniel Yoshor
- Department of Neurosurgery, Baylor College of Medicine, Houston, United States

32
Training Humans to Categorize Monkey Calls: Auditory Feature- and Category-Selective Neural Tuning Changes. Neuron 2019; 98:405-416.e4. [PMID: 29673483 DOI: 10.1016/j.neuron.2018.03.014] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2017] [Revised: 01/18/2018] [Accepted: 03/08/2018] [Indexed: 11/23/2022]
Abstract
Grouping auditory stimuli into common categories is essential for a variety of auditory tasks, including speech recognition. We trained human participants to categorize auditory stimuli from a large novel set of morphed monkey vocalizations. Using fMRI-rapid adaptation (fMRI-RA) and multi-voxel pattern analysis (MVPA) techniques, we gained evidence that categorization training results in two distinct sets of changes: sharpened tuning to monkey call features (without explicit category representation) in left auditory cortex and category selectivity for different types of calls in lateral prefrontal cortex. In addition, the sharpness of neural selectivity in left auditory cortex, as estimated with both fMRI-RA and MVPA, predicted the steepness of the categorical boundary, whereas categorical judgment correlated with release from adaptation in the left inferior frontal gyrus. These results support the theory that auditory category learning follows a two-stage model analogous to the visual domain, suggesting general principles of perceptual category learning in the human brain.
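The MVPA component described here can be illustrated with a minimal correlation-based pattern classifier on synthetic voxel patterns. The templates, noise level, and trial counts below are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

# Toy MVPA: correlation-based classification of two "call" categories from
# noisy multi-voxel activity patterns.
rng = np.random.default_rng(4)
n_vox = 60
pat_a, pat_b = rng.standard_normal((2, n_vox))   # category template patterns

def trials(pat, n=40, noise=1.0):
    """Simulate n noisy trial patterns around a category template."""
    return pat + noise * rng.standard_normal((n, n_vox))

train_a, test_a = trials(pat_a), trials(pat_a)
train_b, test_b = trials(pat_b), trials(pat_b)
cent_a, cent_b = train_a.mean(0), train_b.mean(0)  # training centroids

def classify(x):
    """Assign a test pattern to the more correlated training centroid."""
    ra = np.corrcoef(x, cent_a)[0, 1]
    rb = np.corrcoef(x, cent_b)[0, 1]
    return 'a' if ra > rb else 'b'

acc = np.mean([classify(x) == 'a' for x in test_a] +
              [classify(x) == 'b' for x in test_b])
```

Above-chance accuracy on held-out trials is the basic evidence that a region's activity patterns carry category (or feature) information, which is what the fMRI-RA and MVPA analyses in the study quantify in auditory and prefrontal cortex.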
33
Li Q, Liu G, Yuan G, Wang G, Wu Z, Zhao X. DC Shifts-fMRI: A Supplement to Event-Related fMRI. Front Comput Neurosci 2019; 13:37. [PMID: 31244636 PMCID: PMC6581730 DOI: 10.3389/fncom.2019.00037] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2019] [Accepted: 05/21/2019] [Indexed: 11/13/2022] Open
Abstract
Event-related fMRI has been widely used to locate brain regions that respond to specific tasks. However, the activity of regions that modulate, or only indirectly participate in, the response to a task is not event-related, so event-related fMRI cannot locate these regulatory regions, which limits the completeness of the picture it reveals. Direct-current EEG shifts (DC shifts) have been linked to inner brain activity, so a fused DC shifts-fMRI method may be able to reveal a more complete response of the brain. In this study, we used DC shifts-fMRI to verify that, even in response to a very simple task, (1) the response of the brain is more complicated than event-related fMRI generally reveals, and (2) DC shifts-fMRI can reveal brain regions whose responses are not event-related. We used a classical, simple paradigm often used for tonotopic mapping of the auditory cortex. Data were recorded from 50 subjects (25 male, 25 female) who were presented with randomly ordered pure-tone sequences at six frequencies (200, 400, 800, 1,600, 3,200, and 6,400 Hz). Our traditional fMRI results are consistent with previous findings that activation is concentrated in the auditory cortex. Our DC shifts-fMRI results showed that the cingulate-caudate-thalamus network, which underpins sustained attention, was positively activated, while the dorsal attention network and the right middle frontal gyrus, which underpin attention orienting, were negatively activated. The region-specific correlations between DC shifts and brain networks indicate the complexity of the brain's response even to a simple task and show that DC shifts can effectively reflect these non-event-related inner brain activities.
Affiliation(s)
- Qiang Li
- Education Science College, Guizhou Normal College, Guiyang, China
- Guangyuan Liu
- College of Electronic and Information Engineering, Southwest University, Chongqing, China; Chongqing Collaborative Innovation Center for Brain Science, Southwest University, Chongqing, China
- Guangjie Yuan
- College of Electronic and Information Engineering, Southwest University, Chongqing, China
- Gaoyuan Wang
- College of Music, Southwest University, Chongqing, China
- Zonghui Wu
- Southwest University Hospital, Southwest University, Chongqing, China
- Xingcong Zhao
- College of Electronic and Information Engineering, Southwest University, Chongqing, China

34
Delogu F, McMurray P. Where did that noise come from? Memory for sound locations is exceedingly eccentric both in front and in rear space. Cogn Process 2019; 20:479-494. [PMID: 31197624 DOI: 10.1007/s10339-019-00922-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Accepted: 06/03/2019] [Indexed: 10/26/2022]
Abstract
Few studies have examined the stability of the representation of the position of sound sources in spatial working memory. The goal of this study was to verify whether the memory of sound position declines as maintenance time increases. In two experiments, we tested the influence of the delay between stimulus and response in a sound localization task. In Experiment 1, blindfolded participants listened to bursts of white noise originating from 16 loudspeakers equally spaced in a 360-degree circular space around the listener in such a way that the nose was aligned to the zero-degree coordinate. Their task was to indicate sounds' position using a digital pointer when prompted at varying delays: 0, 3, and 6 s after stimulus offset. In Experiment 2, the task was analogous to Exp. 1 with stimulus-response delays of 0 or 10 s. Results of the two experiments show that increasing stimulus-response delays up to 10 s do not impair sound localization. Participants systematically overestimated the eccentricity of the auditory stimulus by shifting their responses either toward the 90-degree coordinate, in alignment with the right ear, or toward the 270-degree coordinate, in alignment with the left ear. Such bias was analogous in the front and in the rear azimuthal space and was only marginally influenced by the delay conditions. We conclude that the representation of auditory space in working memory is stable, but directionally biased with systematic overestimation of eccentricity.
Collapse
Affiliation(s)
- Franco Delogu
- Lawrence Technological University, Southfield, MI, USA.
35
Hajizadeh A, Matysiak A, May PJC, König R. Explaining event-related fields by a mechanistic model encapsulating the anatomical structure of auditory cortex. Biological Cybernetics 2019; 113:321-345. [PMID: 30820663 PMCID: PMC6510841 DOI: 10.1007/s00422-019-00795-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Received: 09/20/2018] [Accepted: 02/08/2019] [Indexed: 06/09/2023]
Abstract
Event-related fields of the magnetoencephalogram are triggered by sensory stimuli and appear as a series of waves extending hundreds of milliseconds after stimulus onset. They reflect the processing of the stimulus in cortex and have a highly subject-specific morphology. However, we still have an incomplete picture of how event-related fields are generated, what the various waves signify, and why they are so subject-specific. Here, we focus on this problem through the lens of a computational model which describes auditory cortex in terms of interconnected cortical columns as part of hierarchically placed fields of the core, belt, and parabelt areas. We develop an analytical approach arriving at solutions to the system dynamics in terms of normal modes: damped harmonic oscillators emerging out of the coupled excitation and inhibition in the system. Each normal mode is a global feature which depends on the anatomical structure of the entire auditory cortex. Further, normal modes are fundamental dynamical building blocks, in that the activity of each cortical column represents a combination of all normal modes. This approach allows us to replicate a typical auditory event-related response as a weighted sum of the single-column activities. Our work offers an alternative to the view that the event-related field arises out of spatially discrete, local generators. Rather, there is only a single generator process distributed over the entire network of the auditory cortex. We present predictions for testing to what degree subject-specificity is due to cross-subject variation in dynamical parameters rather than in cortical surface morphology.
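The normal-mode account can be illustrated with a toy linear network: eigendecomposition of a coupling matrix yields damped modes, and every column's activity is a weighted sum of all of them. The connectivity values below are invented for illustration, not the paper's fitted parameters:

```python
import numpy as np

# Toy symmetric coupling matrix for 3 cortical columns: negative diagonal
# (leak/self-inhibition), positive off-diagonals (mutual excitation).
# Values are illustrative only.
A = np.array([[-1.0,  0.6,  0.0],
              [ 0.6, -1.2,  0.5],
              [ 0.0,  0.5, -0.9]])

evals, evecs = np.linalg.eigh(A)   # normal modes of dx/dt = A x
x0 = np.array([1.0, 0.0, 0.0])     # stimulus drive to column 0
coeffs = evecs.T @ x0              # project the initial state onto the modes

def activity(t):
    """Column activities at time t: every column mixes ALL normal modes,
    each mode decaying at the rate set by its eigenvalue."""
    return evecs @ (coeffs * np.exp(evals * t))

print(np.round(activity(0.0), 6))  # recovers the initial drive
```

A simulated event-related response would then be a weighted sum of such single-column activities, which is the sense in which the modes are global rather than local generators.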
Affiliation(s)
- Aida Hajizadeh
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Artur Matysiak
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Patrick J. C. May
- Department of Psychology, Lancaster University, Lancaster, LA1 4YF, UK
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
- Reinhard König
- Special Lab Non-invasive Brain Imaging, Leibniz Institute for Neurobiology, Brenneckestraße 6, 39118 Magdeburg, Germany
36
Xie X, Liu Y, Han X, Liu P, Qiu H, Li J, Yu H. Differences in Intrinsic Brain Abnormalities Between Patients With Left- and Right-Sided Long-Term Hearing Impairment. Front Neurosci 2019; 13:206. [PMID: 30914917 PMCID: PMC6422939 DOI: 10.3389/fnins.2019.00206] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Received: 11/24/2018] [Accepted: 02/22/2019] [Indexed: 01/06/2023]
Abstract
Unilateral hearing impairment is characterized by asymmetric hearing input, which causes bilateral unbalanced auditory afferents and tinnitus of varying degrees. Long-term hearing imbalance can cause functional reorganization in the brain. However, differences between intrinsic functional changes in the brains of patients with left- and those with right-sided long-term hearing impairments are incompletely understood. This study included 67 patients with unilateral hearing impairments (left-sided, 33 patients; right-sided, 34 patients) and 32 healthy controls. All study participants underwent blood oxygenation level dependent resting-state functional magnetic resonance imaging and T1-weighted imaging with three-dimensional fast spoiled gradient-echo sequences. After data preprocessing, fractional amplitude of low-frequency fluctuation (fALFF) and functional connectivity (FC) analyses were used to evaluate differences between patients and healthy controls. Compared with the right-sided hearing impairment group, the left-sided hearing impairment group showed significantly higher fALFF values in the left superior parietal gyrus, right inferior parietal lobule, and right superior frontal gyrus, and significantly lower fALFF values in the left Heschl's gyrus, right supramarginal gyrus, and left superior frontal gyrus. In the left-sided hearing impairment group, the paired brain regions with enhanced FC were the left Heschl's gyrus and right supramarginal gyrus, left Heschl's gyrus and left superior parietal gyrus, left superior parietal gyrus and right inferior parietal lobule, right inferior parietal lobule and right superior frontal gyrus, and left and right superior frontal gyri. In the left-sided group, FC between the left Heschl's gyrus and right supramarginal gyrus correlated negatively with both impairment duration and pure tone audiometry. In the right-sided group, FC between the left Heschl's gyrus and left superior parietal gyrus correlated negatively with duration, and FC between the right inferior parietal lobule and right superior frontal gyrus correlated negatively with pure tone audiometry. The intrinsic reintegration mechanisms of the brain thus appear to differ between patients with left- and right-sided hearing impairment, and the severity of hearing impairment was associated with differences in functional integration in specific brain regions.
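The fALFF metric used in this study is conventionally the amplitude of a voxel's time series within the 0.01-0.08 Hz band divided by the amplitude over the whole detectable frequency range. A simplified sketch, assuming that standard definition (detrending, filtering, and nuisance regression from real pipelines are omitted):

```python
import numpy as np

def falff(ts, tr, band=(0.01, 0.08)):
    """Fractional ALFF of one voxel's time series: amplitude summed over the
    low-frequency band divided by amplitude over all non-DC frequencies."""
    ts = np.asarray(ts, float)
    amp = np.abs(np.fft.rfft(ts - ts.mean()))
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(amp[in_band].sum() / amp[1:].sum())  # amp[1:] skips the DC bin

tr, n = 2.0, 200                     # TR = 2 s, 200 volumes
t = np.arange(n) * tr
slow = np.sin(2 * np.pi * 0.05 * t)  # inside the 0.01-0.08 Hz band
fast = np.sin(2 * np.pi * 0.20 * t)  # outside the band
print(falff(slow, tr), falff(fast, tr))
```

A purely slow fluctuation yields fALFF near 1, a fast one near 0; group comparisons like those above are then voxel-wise statistics on this ratio.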
Affiliation(s)
- Xiaoxiao Xie
- Department of Radiology, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, China
- Yongbo Liu
- Department of Radiology, Shanxi Lu'an General Hospital, Changzhi, China
- Xiaowei Han
- Graduate School, Chinese Academy of Medical Sciences, Peking Union Medical College, Beijing, China; Department of Radiology, Heping Hospital of Changzhi Medical College, Changzhi, China
- Pei Liu
- Graduate School, Beijing University of Chinese Medicine, Beijing, China
- Hui Qiu
- Graduate School, Changzhi Medical College, Changzhi, China
- Junfeng Li
- Department of Radiology, Heping Hospital of Changzhi Medical College, Changzhi, China
- Huachen Yu
- Department of Orthopedics, The Second Affiliated Hospital and Yuying Children's Hospital of Wenzhou Medical University, Wenzhou, China
37
Statistical model-based approaches for functional connectivity analysis of neuroimaging data. Curr Opin Neurobiol 2019; 55:48-54. [PMID: 30739880 DOI: 10.1016/j.conb.2019.01.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Received: 07/26/2018] [Revised: 01/06/2019] [Accepted: 01/13/2019] [Indexed: 11/21/2022]
Abstract
We present recent literature on model-based approaches to estimating functional connectivity from neuroimaging data. In contrast to the typical focus on a particular scientific question, we reframe a wider literature in terms of the underlying statistical model used. We distinguish between directed versus undirected and static versus time-varying connectivity. There are numerous advantages to a model-based approach, including easily specified inductive bias, handling limited data scenarios, and building complex models from simpler building blocks.
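As a concrete instance of an undirected, model-based estimator, partial correlations can be read off the precision matrix of a multivariate Gaussian model. A minimal, unregularized sketch; limited-data applications would add an inductive bias such as shrinkage or a sparsity prior (e.g., the graphical lasso):

```python
import numpy as np

def partial_correlation(X):
    """Partial correlation matrix for data X (timepoints x regions), derived
    from the inverse sample covariance of a Gaussian model. Unregularized."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)
    np.fill_diagonal(pcorr, 1.0)
    return pcorr

# Chain structure x -> y -> z: x and z are marginally correlated, but their
# partial correlation given y should vanish.
rng = np.random.default_rng(0)
x = rng.standard_normal(50_000)
y = x + rng.standard_normal(50_000)
z = y + rng.standard_normal(50_000)
pc = partial_correlation(np.column_stack([x, y, z]))
```

The zero off-diagonal entry for the indirectly connected pair is exactly the kind of easily specified inductive bias (conditional independence) that a model-based formulation makes explicit.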
38
Larger Auditory Cortical Area and Broader Frequency Tuning Underlie Absolute Pitch. J Neurosci 2019; 39:2930-2937. [PMID: 30745420 DOI: 10.1523/jneurosci.1532-18.2019] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Received: 06/15/2018] [Revised: 01/08/2019] [Accepted: 01/12/2019] [Indexed: 12/29/2022]
Abstract
Absolute pitch (AP), the ability of some musicians to precisely identify and name musical tones in isolation, is associated with a number of gross morphological changes in the brain, but the fundamental neural mechanisms underlying this ability have not been clear. We presented a series of logarithmic frequency sweeps to age- and sex-matched groups of musicians with or without AP and controls without musical training. We used fMRI and population receptive field (pRF) modeling to measure the responses in the auditory cortex of 61 human subjects. The tuning response of each fMRI voxel was characterized as Gaussian, with independent center frequency and bandwidth parameters. We identified three distinct tonotopic maps, corresponding to primary (A1), rostral (R), and rostrotemporal (RT) regions of auditory cortex. We initially hypothesized that AP abilities might manifest in sharper tuning in the auditory cortex. However, we observed that AP subjects had larger cortical area, with the increased area primarily devoted to broader frequency tuning. Anatomically, A1, R, and RT were significantly larger in AP musicians than in non-AP musicians or control subjects, which did not differ significantly from each other. The increased cortical area in AP subjects in areas A1 and R was primarily low-frequency and broadly tuned, whereas the distribution of responses in area RT did not differ significantly. We conclude that AP abilities are associated with increased early auditory cortical area devoted to broad-frequency tuning and likely exploit increased ensemble encoding. SIGNIFICANCE STATEMENT: Absolute pitch (AP), the ability of some musicians to precisely identify and name musical tones in isolation, is associated with a number of gross morphological changes in the brain, but the fundamental neural mechanisms have not been clear. Our study shows that AP musicians have significantly larger volume in early auditory cortex than non-AP musicians and non-musician controls and that this increased volume is primarily devoted to broad-frequency tuning. We conclude that AP musicians are likely able to exploit increased ensemble representations to encode and identify frequency.
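The voxel-wise Gaussian tuning model can be sketched as follows; the grid-search fit is a simplified stand-in for the paper's pRF estimation, and all grids and values below are illustrative:

```python
import numpy as np

def gaussian_tuning(freqs_hz, cf_hz, bw_oct):
    """Gaussian frequency tuning on a log2 axis: center cf_hz (Hz),
    bandwidth bw_oct (octaves)."""
    x = np.log2(freqs_hz)
    return np.exp(-0.5 * ((x - np.log2(cf_hz)) / bw_oct) ** 2)

def fit_prf(freqs_hz, resp, cf_grid, bw_grid):
    """Grid-search pRF fit: pick the (center, bandwidth) pair whose predicted
    tuning curve correlates best with the measured voxel response."""
    best, best_r = None, -np.inf
    for cf in cf_grid:
        for bw in bw_grid:
            r = np.corrcoef(gaussian_tuning(freqs_hz, cf, bw), resp)[0, 1]
            if r > best_r:
                best, best_r = (cf, bw), r
    return best

# Recover known parameters from a noiseless synthetic voxel response.
freqs = np.logspace(np.log2(0.5), np.log2(16.0), 40, base=2.0)
true_resp = gaussian_tuning(freqs, 4.0, 1.0)
cf_grid = [0.5, 1.0, 2.0, 4.0, 8.0, 16.0]
bw_grid = [0.5, 1.0, 2.0, 4.0]
print(fit_prf(freqs, true_resp, cf_grid, bw_grid))  # → (4.0, 1.0)
```

In this framing, the paper's key comparison is the distribution of fitted (center, bandwidth) pairs across cortex: more broadly tuned, low-frequency voxels over a larger area in AP musicians.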
39
Schneider F, Dheerendra P, Balezeau F, Ortiz-Rios M, Kikuchi Y, Petkov CI, Thiele A, Griffiths TD. Auditory figure-ground analysis in rostral belt and parabelt of the macaque monkey. Sci Rep 2018; 8:17948. [PMID: 30560879 PMCID: PMC6298974 DOI: 10.1038/s41598-018-36903-1] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Received: 08/03/2018] [Accepted: 11/14/2018] [Indexed: 01/08/2023]
Abstract
Segregating the key features of the natural world within crowded visual or sound scenes is a critical aspect of everyday perception. The neurobiological bases for auditory figure-ground segregation are poorly understood. We demonstrate that macaques perceive an acoustic figure-ground stimulus with comparable performance to humans using a neural system that involves high-level auditory cortex, localised to the rostral belt and parabelt.
Affiliation(s)
- Felix Schneider
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Pradeep Dheerendra
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Fabien Balezeau
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Michael Ortiz-Rios
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Yukiko Kikuchi
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Christopher I Petkov
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Alexander Thiele
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
- Timothy D Griffiths
- Institute of Neuroscience, Newcastle University, Framlington Place, Newcastle upon Tyne, NE2 4HH, United Kingdom
40
Norman-Haignere SV, McDermott JH. Neural responses to natural and model-matched stimuli reveal distinct computations in primary and nonprimary auditory cortex. PLoS Biol 2018; 16:e2005127. [PMID: 30507943 PMCID: PMC6292651 DOI: 10.1371/journal.pbio.2005127] [Citation(s) in RCA: 58] [Impact Index Per Article: 8.3] [Received: 12/15/2017] [Revised: 12/13/2018] [Accepted: 11/08/2018] [Indexed: 11/19/2022]
Abstract
A central goal of sensory neuroscience is to construct models that can explain neural responses to natural stimuli. As a consequence, sensory models are often tested by comparing neural responses to natural stimuli with model responses to those stimuli. One challenge is that distinct model features are often correlated across natural stimuli, and thus model features can predict neural responses even if they do not in fact drive them. Here, we propose a simple alternative for testing a sensory model: we synthesize a stimulus that yields the same model response as each of a set of natural stimuli, and test whether the natural and "model-matched" stimuli elicit the same neural responses. We used this approach to test whether a common model of auditory cortex, in which spectrogram-like peripheral input is processed by linear spectrotemporal filters, can explain fMRI responses in humans to natural sounds. Prior studies have shown that this model has good predictive power throughout auditory cortex, but this finding could reflect feature correlations in natural stimuli. We observed that fMRI responses to natural and model-matched stimuli were nearly equivalent in primary auditory cortex (PAC) but that nonprimary regions, including those selective for music or speech, showed highly divergent responses to the two sound sets. This dissociation between primary and nonprimary regions was less clear from model predictions due to the influence of feature correlations across natural stimuli. Our results provide a signature of hierarchical organization in human auditory cortex, and suggest that nonprimary regions compute higher-order stimulus properties that are not well captured by traditional models. Our methodology enables stronger tests of sensory models and could be broadly applied in other domains.
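The logic of a model-matched stimulus is easiest to see for a linear model: any stimulus that differs only by a null-space component yields an identical model response. A toy sketch with invented dimensions (the paper applies the idea to a spectrotemporal filter model, not this random linear one):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((20, 100))  # toy linear model: 20 features of a 100-dim stimulus
s_nat = rng.standard_normal(100)    # stand-in for a "natural" stimulus

# A model-matched stimulus has identical model features but a different
# waveform, obtained here by adding a component from the model's null space.
_, _, Vt = np.linalg.svd(W)
null_basis = Vt[20:]                # 80 directions the model cannot "see"
s_matched = s_nat + null_basis.T @ rng.standard_normal(80)
```

If a brain region's response to `s_nat` and `s_matched` differs, that region must be sensitive to stimulus structure outside the model, which is the paper's test for nonprimary cortex.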
Affiliation(s)
- Sam V. Norman-Haignere
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Zuckerman Institute of Mind, Brain and Behavior, Columbia University, New York, New York, United States of America
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, ENS, PSL University, CNRS, Paris, France
- Josh H. McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
- Program in Speech and Hearing Biosciences and Technology, Harvard University, Cambridge, Massachusetts, United States of America
- McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, Massachusetts, United States of America
41
Nourski KV, Steinschneider M, Rhone AE, Kovach CK, Kawasaki H, Howard MA. Differential responses to spectrally degraded speech within human auditory cortex: An intracranial electrophysiology study. Hear Res 2018; 371:53-65. [PMID: 30500619 DOI: 10.1016/j.heares.2018.11.009] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 07/13/2018] [Revised: 11/15/2018] [Accepted: 11/19/2018] [Indexed: 12/28/2022]
Abstract
Understanding cortical processing of spectrally degraded speech in normal-hearing subjects may provide insights into how sound information is processed by cochlear implant (CI) users. This study investigated electrocorticographic (ECoG) responses to noise-vocoded speech and related these responses to behavioral performance in a phonemic identification task. Subjects were neurosurgical patients undergoing chronic invasive monitoring for medically refractory epilepsy. Stimuli were utterances /aba/ and /ada/, spectrally degraded using a noise vocoder (1-4 bands). ECoG responses were obtained from Heschl's gyrus (HG) and superior temporal gyrus (STG), and were examined within the high gamma frequency range (70-150 Hz). All subjects performed at chance accuracy with speech degraded to 1 and 2 spectral bands, and at or near ceiling for clear speech. Inter-subject variability was observed in the 3- and 4-band conditions. High gamma responses in posteromedial HG (auditory core cortex) were similar for all vocoded conditions and clear speech. A progressive preference for clear speech emerged in anterolateral segments of HG, regardless of behavioral performance. On the lateral STG, responses to all vocoded stimuli were larger in subjects with better task performance. In contrast, both behavioral and neural responses to clear speech were comparable across subjects regardless of their ability to identify degraded stimuli. Findings highlight differences in representation of spectrally degraded speech across cortical areas and their relationship to perception. The results are in agreement with prior non-invasive results. The data provide insight into the neural mechanisms associated with variability in perception of degraded speech and potentially into sources of such variability in CI users.
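Noise vocoding, the degradation used here, can be sketched in a few lines: split the signal into bands, extract each band's envelope, and re-impose it on band-limited noise. This illustrative version uses FFT masks and rectified, unsmoothed envelopes, unlike the filter-bank vocoders with low-pass envelope smoothing used in practice:

```python
import numpy as np

def noise_vocode(signal, fs, n_bands, lo=80.0, hi=6000.0):
    """Minimal FFT-based noise vocoder: split the signal into n_bands
    log-spaced frequency bands, take each band's envelope by rectification,
    and use it to modulate band-limited noise. Illustrative only."""
    rng = np.random.default_rng(0)  # fixed seed: deterministic noise carrier
    edges = np.logspace(np.log10(lo), np.log10(hi), n_bands + 1)
    spec_f = np.fft.rfftfreq(signal.size, 1.0 / fs)
    sig_spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(signal.size))
    out = np.zeros_like(signal)
    for b in range(n_bands):
        mask = (spec_f >= edges[b]) & (spec_f < edges[b + 1])
        band = np.fft.irfft(sig_spec * mask, n=signal.size)
        env = np.abs(band)  # crude envelope (rectification, no smoothing)
        carrier = np.fft.irfft(noise_spec * mask, n=signal.size)
        out += env * carrier
    return out

fs = 16_000
t = np.arange(fs) / fs                       # 1 s of audio
tone = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test signal
vocoded = noise_vocode(tone, fs, n_bands=4)
```

With 1-2 bands almost no spectral detail survives (matching the chance-level behavioral performance above); more bands restore progressively more of the original spectrum.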
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA
- Mitchell Steinschneider
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, USA
- Ariane E Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, USA; Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, USA; Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, USA
42
Erb J, Armendariz M, De Martino F, Goebel R, Vanduffel W, Formisano E. Homology and Specificity of Natural Sound-Encoding in Human and Monkey Auditory Cortex. Cereb Cortex 2018; 29:3636-3650. [DOI: 10.1093/cercor/bhy243] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 02/11/2018] [Revised: 08/08/2018] [Accepted: 09/05/2018] [Indexed: 01/01/2023]
Abstract
Understanding homologies and differences in auditory cortical processing in human and nonhuman primates is an essential step in elucidating the neurobiology of speech and language. Using fMRI responses to natural sounds, we investigated the representation of multiple acoustic features in the auditory cortex of awake macaques and humans. Comparative analyses revealed homologous large-scale topographies not only for frequency but also for temporal and spectral modulations. In both species, posterior regions preferentially encoded relatively fast temporal and coarse spectral information, whereas anterior regions encoded slow temporal and fine spectral modulations. Conversely, we observed a striking interspecies difference in cortical sensitivity to temporal modulations: while decoding from macaque auditory cortex was most accurate at fast rates (>30 Hz), humans showed the highest sensitivity at ~3 Hz, a rate relevant for speech analysis. These findings suggest that the characteristic tuning of human auditory cortex to slow temporal modulations is unique and may have emerged as a critical step in the evolution of speech and language.
Affiliation(s)
- Julia Erb
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Federico De Martino
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Rainer Goebel
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Wim Vanduffel
- Laboratorium voor Neuro-en Psychofysiologie, KU Leuven, Leuven, Belgium
- MGH Martinos Center, Charlestown, MA, USA
- Harvard Medical School, Boston, MA, USA
- Leuven Brain Institute, Leuven, Belgium
- Elia Formisano
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, 6200 MD Maastricht, The Netherlands
- Maastricht Brain Imaging Center (MBIC), MD Maastricht, The Netherlands
- Maastricht Center for Systems Biology (MaCSBio), MD Maastricht, The Netherlands
43
Active Sound Localization Sharpens Spatial Tuning in Human Primary Auditory Cortex. J Neurosci 2018; 38:8574-8587. [PMID: 30126968 DOI: 10.1523/jneurosci.0587-18.2018] [Citation(s) in RCA: 22] [Impact Index Per Article: 3.1] [Received: 03/01/2018] [Revised: 07/09/2018] [Accepted: 07/19/2018] [Indexed: 11/21/2022]
Abstract
Spatial hearing sensitivity in humans is dynamic and task-dependent, but the mechanisms in human auditory cortex that enable dynamic sound location encoding remain unclear. Using functional magnetic resonance imaging (fMRI), we assessed how active behavior affects encoding of sound location (azimuth) in primary auditory cortical areas and planum temporale (PT). According to the hierarchical model of auditory processing and cortical functional specialization, PT is implicated in sound location ("where") processing. Yet, our results show that spatial tuning profiles in primary auditory cortical areas (left primary core and right caudo-medial belt) sharpened during a sound localization ("where") task compared with a sound identification ("what") task. In contrast, spatial tuning in PT was sharp but did not vary with task performance. We further applied a population pattern decoder to the measured fMRI activity patterns, which confirmed the task-dependent effects in the left core: sound location estimates from fMRI patterns measured during active sound localization were most accurate. In PT, decoding accuracy was not modulated by task performance. These results indicate that changes of population activity in human primary auditory areas reflect dynamic and task-dependent processing of sound location. As such, our findings suggest that the hierarchical model of auditory processing may need to be revised to include an interaction between primary and functionally specialized areas depending on behavioral requirements. SIGNIFICANCE STATEMENT: According to a purely hierarchical view, cortical auditory processing consists of a series of analysis stages from sensory (acoustic) processing in primary auditory cortex to specialized processing in higher-order areas. Posterior-dorsal cortical auditory areas, planum temporale (PT) in humans, are considered to be functionally specialized for spatial processing. However, this model is based mostly on passive listening studies. Our results provide compelling evidence that active behavior (sound localization) sharpens spatial selectivity in primary auditory cortex, whereas spatial tuning in functionally specialized areas (PT) is narrow but task-invariant. These findings suggest that the hierarchical view of cortical functional specialization needs to be extended: our data indicate that active behavior involves feedback projections from higher-order regions to primary auditory cortex.
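A population pattern decoder can be illustrated with a toy correlation-based scheme: assign a test activity pattern the azimuth of the best-matching training pattern. The synthetic voxel tuning below is invented for illustration and is not the paper's decoder:

```python
import numpy as np

def decode_azimuth(train_patterns, train_azimuths, test_pattern):
    """Toy population pattern decoder: return the azimuth whose training
    activity pattern correlates best with the test pattern."""
    rs = [np.corrcoef(p, test_pattern)[0, 1] for p in train_patterns]
    return train_azimuths[int(np.argmax(rs))]

# Synthetic voxel population with Gaussian azimuth tuning (illustrative).
centers = np.linspace(-90, 90, 13)  # each voxel's preferred azimuth (deg)

def pattern(az):
    return np.exp(-0.5 * ((centers - az) / 30.0) ** 2)

azimuths = [-60, -30, 0, 30, 60]
train = [pattern(a) for a in azimuths]
rng = np.random.default_rng(1)
test = pattern(30) + 0.05 * rng.standard_normal(centers.size)
print(decode_azimuth(train, azimuths, test))  # → 30
```

In the study's terms, sharper spatial tuning during active localization would make such patterns more discriminable and hence decoding more accurate.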
44
Rauschecker JP. Where did language come from? Precursor mechanisms in nonhuman primates. Curr Opin Behav Sci 2018; 21:195-204. [PMID: 30778394 PMCID: PMC6377164 DOI: 10.1016/j.cobeha.2018.06.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Indexed: 02/04/2023]
Abstract
At first glance, the monkey brain looks like a smaller version of the human brain. Indeed, the anatomical and functional architecture of the cortical auditory system in monkeys is very similar to that of humans, with dual pathways segregated into a ventral and a dorsal processing stream. Yet, monkeys do not speak. Repeated attempts to pin this inability on one particular cause have failed. A closer look at the necessary components of language, according to Darwin, reveals that all of them got a significant boost during evolution from nonhuman to human primates. The vocal-articulatory system, in particular, has developed into the most sophisticated of all human sensorimotor systems with about a dozen effectors that, in combination with each other, result in an auditory communication system like no other. This sensorimotor network possesses all the ingredients of an internal model system that permits the emergence of sequence processing, as required for phonology and syntax in modern languages.
Affiliation(s)
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University, Washington, DC 20057, USA
45
Sound Frequency Representation in the Auditory Cortex of the Common Marmoset Visualized Using Optical Intrinsic Signal Imaging. eNeuro 2018; 5:eN-NWR-0078-18. [PMID: 29736410 PMCID: PMC5937112 DOI: 10.1523/eneuro.0078-18.2018] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 02/20/2018] [Revised: 03/27/2018] [Accepted: 03/29/2018] [Indexed: 11/21/2022]
Abstract
Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5-16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions.
46
Nettekoven C, Reck N, Goldbrunner R, Grefkes C, Weiß Lucas C. Short- and long-term reliability of language fMRI. Neuroimage 2018; 176:215-225. [PMID: 29704615 DOI: 10.1016/j.neuroimage.2018.04.050] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Received: 10/19/2017] [Revised: 03/23/2018] [Accepted: 04/22/2018] [Indexed: 12/22/2022]
Abstract
When using functional magnetic resonance imaging (fMRI) for mapping important language functions, high test-retest reliability is mandatory, both in basic scientific research and for clinical applications. We therefore systematically tested the short- and long-term reliability of fMRI in a group of healthy subjects using a picture naming task and a sparse-sampling fMRI protocol. We hypothesized that test-retest reliability might be higher for (i) speech-related motor areas than for other language areas and for (ii) the short as compared to the long intersession interval. Sixteen right-handed subjects (mean age: 29 years) participated in three sessions separated by 2-6 days (sessions 1 and 2, short-term) and 21-34 days (sessions 1 and 3, long-term). Subjects were asked to perform the same overt picture naming task in each fMRI session (50 black-and-white images per session). Reliability was tested using the following measures: (i) Euclidean distances (ED) between local activation maxima and centers of gravity (CoGs), (ii) overlap volumes, and (iii) voxel-wise intraclass correlation coefficients (ICCs). Analyses were performed for three regions of interest chosen on the basis of whole-brain group data: primary motor cortex (M1), superior temporal gyrus (STG), and inferior frontal gyrus (IFG). Our results revealed that the activation centers were highly reliable, independent of the time interval, ROI, or hemisphere, with significantly smaller ED for the local activation maxima (6.45 ± 1.36 mm) than for the CoGs (8.03 ± 2.01 mm). In contrast, the extent of activation showed rather low reliability, with overlaps ranging from 24% (IFG) to 56% (STG). Here, the left hemisphere showed significantly higher overlap volumes than the right hemisphere. Although mean ICCs ranged between poor (ICC < 0.5) and moderate (ICC 0.5-0.74) reliability, highly reliable voxels (ICC > 0.75) were found for all ROIs. Voxel-wise reliability of the different ROIs was influenced by the intersession interval. Taken together, we show that, despite considerable ROI-dependent variation in the extent of activation over time, highly reliable centers of activation can be identified using an overt picture naming paradigm.
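The two main reliability measures can be sketched directly: the Euclidean distance between session-wise peak coordinates, and a voxel-wise ICC. The formula below is the standard two-way random-effects, absolute-agreement ICC(2,1); the paper's exact ICC variant is an assumption here:

```python
import numpy as np

def peak_distance(xyz_a, xyz_b):
    """Euclidean distance (e.g., in mm) between two activation peaks,
    as used to compare session-wise local maxima or centers of gravity."""
    return float(np.linalg.norm(np.asarray(xyz_a) - np.asarray(xyz_b)))

def icc_2_1(Y):
    """ICC(2,1) for an n_subjects x n_sessions matrix Y: two-way random
    effects, absolute agreement, single measure."""
    Y = np.asarray(Y, float)
    n, k = Y.shape
    grand = Y.mean()
    ssr = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ssc = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between sessions
    sse = ((Y - grand) ** 2).sum() - ssr - ssc        # residual
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly repeated measurements across sessions give an ICC of 1; values below the paper's 0.5 and 0.75 cutoffs mark poor and non-excellent voxels, respectively.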
Affiliation(s)
- Charlotte Nettekoven
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany; Department of Neurology, Cologne University Hospital, 50924, Cologne, Germany
- Nicola Reck
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany
- Roland Goldbrunner
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany
- Christian Grefkes
- Department of Neurology, Cologne University Hospital, 50924, Cologne, Germany; Institute of Neuroscience and Medicine (INM-3), Juelich Research Centre, 52428, Juelich, Germany
- Carolin Weiß Lucas
- Center of Neurosurgery, Cologne University Hospital, 50924, Cologne, Germany
47
Ozker M, Yoshor D, Beauchamp MS. Converging Evidence From Electrocorticography and BOLD fMRI for a Sharp Functional Boundary in Superior Temporal Gyrus Related to Multisensory Speech Processing. Front Hum Neurosci 2018; 12:141. [PMID: 29740294 PMCID: PMC5928751 DOI: 10.3389/fnhum.2018.00141] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Accepted: 03/28/2018] [Indexed: 01/15/2023] Open
Abstract
Although humans can understand speech using the auditory modality alone, in noisy environments visual speech information from the talker’s mouth can rescue otherwise unintelligible auditory speech. To investigate the neural substrates of multisensory speech perception, we compared neural activity from the human superior temporal gyrus (STG) in two datasets. One dataset consisted of direct neural recordings (electrocorticography, ECoG) from surface electrodes implanted in epilepsy patients (this dataset has been previously published). The second dataset consisted of indirect measures of neural activity using blood oxygen level dependent functional magnetic resonance imaging (BOLD fMRI). Both ECoG and fMRI participants viewed the same clear and noisy audiovisual speech stimuli and performed the same speech recognition task. Both techniques demonstrated a sharp functional boundary in the STG, spatially coincident with an anatomical boundary defined by the posterior edge of Heschl’s gyrus. Cortex on the anterior side of the boundary responded more strongly to clear audiovisual speech than to noisy audiovisual speech while cortex on the posterior side of the boundary did not. For both ECoG and fMRI measurements, the transition between the functionally distinct regions happened within 10 mm of anterior-to-posterior distance along the STG. We relate this boundary to the multisensory neural code underlying speech perception and propose that it represents an important functional division within the human speech perception network.
Affiliation(s)
- Muge Ozker
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Daniel Yoshor
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States; Michael E. DeBakey Veterans Affairs Medical Center, Houston, TX, United States
- Michael S Beauchamp
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
48
Gauvin DV, Yoder J, Zimmermann ZJ, Tapp R. Ototoxicity: The Radical Drum Beat and Rhythm of Cochlear Hair Cell Life and Death. Int J Toxicol 2018; 37:195-206. [PMID: 29575954 DOI: 10.1177/1091581818761128] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]
Abstract
The function and structure of the auditory information processing system establish a unique sensory environment for the "perfect storm." The battle between life and death pits programmed cell death (apoptotic) cascades against simple cell death (necrosis) pathways. Whether hair cells live or die, the free radical biology of oxygen and hydroxylation, together with transition metal migration through the mechanically gated sensory processes of the hair cell, provides direct access to the cytoplasm, endoplasmic reticulum, and mitochondria of the inner workings of the hair cell. Subsequent interactions with nuclear DNA result in permanent hearing loss. The yin and yang of pharmaceutical product development is to document what kills, why it kills, and how to mitigate it. This review highlights the processes of cell death within the cochlea.
Affiliation(s)
- David V Gauvin
- Neurobehavioral Sciences Department, MPI Research, Inc., Mattawan, MI, USA
- Joshua Yoder
- Neurobehavioral Sciences Department, MPI Research, Inc., Mattawan, MI, USA
- Rachel Tapp
- Neurobehavioral Sciences Department, MPI Research, Inc., Mattawan, MI, USA
49
Flavoprotein fluorescence imaging-based electrode implantation for subfield-targeted chronic recording in the mouse auditory cortex. J Neurosci Methods 2018; 293:77-85. [PMID: 28851513 DOI: 10.1016/j.jneumeth.2017.08.028] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2017] [Revised: 08/21/2017] [Accepted: 08/22/2017] [Indexed: 11/21/2022]
Abstract
BACKGROUND: Chronic neural recording in freely moving animals is important for understanding the neural activity of cortical neurons associated with various behavioral contexts. In small animals such as mice, it has been difficult to implant recording electrodes into exact locations according to stereotactic coordinates, skull geometry, or the shape of blood vessels. The main reason for this difficulty is large individual differences in the exact location of the targeted brain area.
NEW METHODS: We propose a new electrode implantation procedure combined with transcranial flavoprotein fluorescence imaging and demonstrate its effectiveness in the auditory cortex (AC) of mice.
RESULTS: Prior to electrode implantation, we performed transcranial flavoprotein fluorescence imaging in anesthetized mice and identified the exact location of AC subfields through the skull in each animal. Next, we surgically implanted a microdrive with a tungsten electrode into exactly the identified location. Finally, we recorded neural activity under freely moving conditions and evaluated the success rate of recording auditory responses.
COMPARISON WITH EXISTING METHOD(S): These procedures dramatically improved the success rate of recording auditory responses from 21.1% without imaging to 100.0% with imaging. We also identified large individual differences in the positional relationships between sound-driven response areas and the squamosal suture or blood vessels.
CONCLUSIONS: Combining chronic electrophysiology with transcranial flavoprotein fluorescence imaging before implantation enables reliable subfield-targeted neural recording from freely moving small animals.
50
Zhang Y, Mao Z, Feng S, Liu X, Zhang J, Yu X. Monaural-driven Functional Changes within and Beyond the Auditory Cortical Network: Evidence from Long-term Unilateral Hearing Impairment. Neuroscience 2017; 371:296-308. [PMID: 29253520 DOI: 10.1016/j.neuroscience.2017.12.015] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2017] [Revised: 12/06/2017] [Accepted: 12/11/2017] [Indexed: 01/14/2023]
Abstract
Long-term unilateral hearing impairment (UHI) results in changes in hearing and psychoacoustic performance that are likely related to cortical reorganization. However, the underlying functional changes in the brain are not yet fully understood. Here, we studied alterations in inter- and intra-hemispheric resting-state functional connectivity (RSFC) in 38 patients with long-term UHI caused by acoustic neuroma. Resting-state fMRI data from 17 patients with left-sided hearing impairment (LHI), 21 patients with right-sided hearing impairment (RHI) and 21 healthy controls (HCs) were collected. We applied voxel-mirrored homotopic connectivity analysis to investigate the interhemispheric interactions. To study alterations in between-network interactions, we used four cytoarchitectonically identified subregions in the auditory cortex as "seeds" for whole-brain RSFC analysis. We found that long-term imbalanced auditory input to the brain resulted in (1) enhanced interhemispheric RSFC between the contralateral and ipsilateral auditory networks and (2) differential patterns of altered RSFCs with other sensory (visual and somatomotor) and higher-order (default mode and ventral attention) networks among the four auditory cortical subregions. These altered RSFCs within and beyond the auditory network were dependent on the side of hearing impairment. The results were reproducible when the analysis was restricted to patients with severe-to-profound UHI and patients with hearing-impairment durations greater than 24 months. Together, we demonstrated that long-term UHI drove cortical functional changes within and beyond the auditory network, providing empirical evidence for the association between brain changes and hearing disorders.
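Per voxel, the voxel-mirrored homotopic connectivity analysis described above amounts to a Pearson correlation between a voxel's time series and that of its mirror-symmetric counterpart in the opposite hemisphere. A minimal Python sketch, assuming voxel i in the left-hemisphere array is already paired with voxel i on the right (in practice this pairing requires registration to a symmetric template):

```python
import numpy as np

def vmhc(left_ts, right_ts):
    """Voxel-mirrored homotopic connectivity: per-voxel Pearson correlation
    between homotopic time series.

    left_ts, right_ts: arrays of shape (n_voxels, n_timepoints), where voxel i
    in left_ts is assumed to mirror voxel i in right_ts (illustrative pairing).
    Returns an array of n_voxels correlation values.
    """
    lz = (left_ts - left_ts.mean(1, keepdims=True)) / left_ts.std(1, keepdims=True)
    rz = (right_ts - right_ts.mean(1, keepdims=True)) / right_ts.std(1, keepdims=True)
    return (lz * rz).mean(axis=1)  # mean of z-score products = Pearson r

# Two hypothetical voxels: one perfectly homotopic, one anti-correlated.
left = np.array([[1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0]])
right = np.array([[1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]])
print(vmhc(left, right))  # → approximately [1.0, -1.0]
```

Group comparisons then test these per-voxel correlation maps between patients and controls; the seed-based RSFC analyses in the study instead correlate a subregion's mean time series against every voxel in the brain.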
Affiliation(s)
- Yanyang Zhang
- Department of Neurosurgery, PLA General Hospital, Beijing 100853, China
- Zhiqi Mao
- Department of Neurosurgery, PLA General Hospital, Beijing 100853, China
- Shiyu Feng
- Department of Neurosurgery, PLA General Hospital, Beijing 100853, China
- Xinyun Liu
- Department of Radiology, PLA General Hospital, Beijing 100853, China
- Jun Zhang
- Department of Neurosurgery, PLA General Hospital, Beijing 100853, China
- Xinguang Yu
- Department of Neurosurgery, PLA General Hospital, Beijing 100853, China.